Posted to common-commits@hadoop.apache.org by zj...@apache.org on 2015/03/03 20:31:39 UTC

[01/43] hadoop git commit: HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in hdfs-default.xml. Contributed by Kai Sasaki.

Repository: hadoop
Updated Branches:
  refs/heads/YARN-2928 bf08f7f0e -> d3ff7f06c


HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in hdfs-default.xml. Contributed by Kai Sasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8719cdd4
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8719cdd4
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8719cdd4

Branch: refs/heads/YARN-2928
Commit: 8719cdd4f68abb91bf9459bca2a5467dafb6b5ae
Parents: 01a1621
Author: Akira Ajisaka <aa...@apache.org>
Authored: Fri Feb 27 12:17:34 2015 -0800
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Fri Feb 27 12:17:34 2015 -0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt              |  3 +++
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml      | 11 +++++++++++
 2 files changed, 14 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8719cdd4/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b2422d6..b4b0087 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -685,6 +685,9 @@ Release 2.7.0 - UNRELEASED
     HDFS-7308. Change the packet chunk size computation in DFSOutputStream in
     order to enforce packet size <= 64kB.  (Takuya Fukudome via szetszwo)
 
+    HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in
+    hdfs-default.xml. (Kai Sasaki via aajisaka)
+
   OPTIMIZATIONS
 
     HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8719cdd4/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 85d2273..66fe86c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -145,6 +145,17 @@
 </property>
 
 <property>
+  <name>dfs.namenode.heartbeat.recheck-interval</name>
+  <value>300000</value>
+  <description>
+    This time decides the interval to check for expired datanodes.
+    With this value and dfs.heartbeat.interval, the interval of
+    deciding the datanode is stale or not is also calculated.
+    The unit of this configuration is millisecond.
+  </description>
+</property>
+
+<property>
   <name>dfs.http.policy</name>
   <value>HTTP_ONLY</value>
   <description>Decide if HTTPS(SSL) is supported on HDFS
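
For context, a note on how the two settings interact (not part of this
patch): the NameNode combines dfs.namenode.heartbeat.recheck-interval with
dfs.heartbeat.interval to compute the heartbeat expiry after which a
datanode is declared dead, mirroring the formula in DatanodeManager. A
sketch with the default values (variable names below are illustrative):

    // 2 * 300000 + 10 * 1000 * 3 = 630000 ms, i.e. a datanode is marked
    // dead after roughly 10.5 minutes without a heartbeat.
    long recheckIntervalMs = 300000L;  // dfs.namenode.heartbeat.recheck-interval
    long heartbeatIntervalSec = 3L;    // dfs.heartbeat.interval (seconds)
    long heartbeatExpireIntervalMs =
        2 * recheckIntervalMs + 10 * 1000 * heartbeatIntervalSec;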


[13/43] hadoop git commit: YARN-3199. Fair Scheduler documentation improvements (Rohit Agarwal via aw)

Posted by zj...@apache.org.
YARN-3199. Fair Scheduler documentation improvements (Rohit Agarwal via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/8472d729
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/8472d729
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/8472d729

Branch: refs/heads/YARN-2928
Commit: 8472d729974ea3ccf9fff5ce4f5309aa8e43a49e
Parents: 2e44b75
Author: Allen Wittenauer <aw...@apache.org>
Authored: Sat Feb 28 11:36:15 2015 -0800
Committer: Allen Wittenauer <aw...@apache.org>
Committed: Sat Feb 28 11:36:15 2015 -0800

----------------------------------------------------------------------
 hadoop-yarn-project/CHANGES.txt                                 | 5 ++++-
 .../hadoop-yarn-site/src/site/markdown/FairScheduler.md         | 2 ++
 2 files changed, 6 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/8472d729/hadoop-yarn-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 02b1831..cef1758 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -1,6 +1,6 @@
 Hadoop YARN Change Log
 
-Trunk - Unreleased 
+Trunk - Unreleased
 
   INCOMPATIBLE CHANGES
 
@@ -23,6 +23,9 @@ Trunk - Unreleased
     YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty
     via aw)
 
+    YARN-3199. Fair Scheduler documentation improvements (Rohit Agarwal via
+    aw)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/8472d729/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
index 1812a44..a58b3d3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
@@ -85,6 +85,8 @@ Customizing the Fair Scheduler typically involves altering two files. First, sch
 | `yarn.scheduler.fair.locality.threshold.rack` | For applications that request containers on particular racks, the number of scheduling opportunities since the last container assignment to wait before accepting a placement on another rack. Expressed as a float between 0 and 1, which, as a fraction of the cluster size, is the number of scheduling opportunities to pass up. The default value of -1.0 means don't pass up any scheduling opportunities. |
 | `yarn.scheduler.fair.allow-undeclared-pools` | If this is true, new queues can be created at application submission time, whether because they are specified as the application's queue by the submitter or because they are placed there by the user-as-default-queue property. If this is false, any time an app would be placed in a queue that is not specified in the allocations file, it is placed in the "default" queue instead. Defaults to true. If a queue placement policy is given in the allocations file, this property is ignored. |
 | `yarn.scheduler.fair.update-interval-ms` | The interval at which to lock the scheduler and recalculate fair shares, recalculate demand, and check whether anything is due for preemption. Defaults to 500 ms. |
+| `yarn.scheduler.increment-allocation-mb` | The fairscheduler grants memory in increments of this value. If you submit a task with resource request that is not a multiple of increment-allocation-mb, the request will be rounded up to the nearest increment. Defaults to 1024 MB. |
+| `yarn.scheduler.increment-allocation-vcores` | The fairscheduler grants vcores in increments of this value. If you submit a task with resource request that is not a multiple of increment-allocation-vcores, the request will be rounded up to the nearest increment. Defaults to 1. |
 
 ###Allocation file format
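
To make the rounding described by the two new rows concrete, a small
illustrative sketch (the helper below is hypothetical, not actual
FairScheduler code):

    // A request is rounded up to the nearest multiple of the increment.
    static int roundUpToIncrement(int requested, int increment) {
      return ((requested + increment - 1) / increment) * increment;
    }
    // roundUpToIncrement(1500, 1024) == 2048
    //   -> a 1500 MB request is granted 2048 MB
    // roundUpToIncrement(5, 4) == 8
    //   -> a 5-vcore request with increment-allocation-vcores=4 gets 8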
 


[29/43] hadoop git commit: YARN-3265. Fixed a deadlock in CapacityScheduler by always passing a queue's available resource-limit from the parent queue. Contributed by Wangda Tan.

Posted by zj...@apache.org.
YARN-3265. Fixed a deadlock in CapacityScheduler by always passing a queue's available resource-limit from the parent queue. Contributed by Wangda Tan.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/14dd647c
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/14dd647c
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/14dd647c

Branch: refs/heads/YARN-2928
Commit: 14dd647c556016d351f425ee956ccf800ccb9ce2
Parents: abac6eb
Author: Vinod Kumar Vavilapalli <vi...@apache.org>
Authored: Mon Mar 2 17:52:47 2015 -0800
Committer: Vinod Kumar Vavilapalli <vi...@apache.org>
Committed: Mon Mar 2 17:52:47 2015 -0800

----------------------------------------------------------------------
 hadoop-yarn-project/CHANGES.txt                 |   3 +
 .../scheduler/ResourceLimits.java               |  40 +++
 .../scheduler/ResourceUsage.java                |  61 ++---
 .../scheduler/capacity/AbstractCSQueue.java     |  24 +-
 .../scheduler/capacity/CSQueue.java             |  11 +-
 .../scheduler/capacity/CSQueueUtils.java        |  48 ----
 .../capacity/CapacityHeadroomProvider.java      |  16 +-
 .../scheduler/capacity/CapacityScheduler.java   |  30 ++-
 .../scheduler/capacity/LeafQueue.java           | 131 +++++-----
 .../scheduler/capacity/ParentQueue.java         |  53 +++-
 .../yarn/server/resourcemanager/MockAM.java     |  11 +-
 .../scheduler/TestResourceUsage.java            |   2 +-
 .../capacity/TestApplicationLimits.java         |  32 +--
 .../scheduler/capacity/TestCSQueueUtils.java    | 250 -------------------
 .../capacity/TestCapacityScheduler.java         |  85 ++++++-
 .../scheduler/capacity/TestChildQueueOrder.java |  36 ++-
 .../scheduler/capacity/TestLeafQueue.java       | 221 ++++++++++------
 .../scheduler/capacity/TestParentQueue.java     | 106 ++++----
 .../scheduler/capacity/TestReservations.java    | 100 +++++---
 19 files changed, 646 insertions(+), 614 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index d07aa26..0850f0b 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -686,6 +686,9 @@ Release 2.7.0 - UNRELEASED
     YARN-3270. Fix node label expression not getting set in 
     ApplicationSubmissionContext (Rohit Agarwal via wangda)
 
+    YARN-3265. Fixed a deadlock in CapacityScheduler by always passing a queue's
+    available resource-limit from the parent queue. (Wangda Tan via vinodkv)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceLimits.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceLimits.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceLimits.java
new file mode 100644
index 0000000..12333e8
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceLimits.java
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.resourcemanager.scheduler;
+
+import org.apache.hadoop.yarn.api.records.Resource;
+
+/**
+ * Resource limits for queues/applications, this means max overall (please note
+ * that, it's not "extra") resource you can get.
+ */
+public class ResourceLimits {
+  public ResourceLimits(Resource limit) {
+    this.limit = limit;
+  }
+  
+  volatile Resource limit;
+  public Resource getLimit() {
+    return limit;
+  }
+  
+  public void setLimit(Resource limit) {
+    this.limit = limit;
+  }
+}
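
A minimal sketch of how this holder is used, as suggested by the class
comment and the call sites later in this commit (illustrative only):

    // The parent queue hands a ResourceLimits down to each child and may
    // refresh it on every scheduling pass; the child reads the cap.
    ResourceLimits limits =
        new ResourceLimits(Resource.newInstance(8192, 8)); // 8 GB, 8 vcores
    Resource cap = limits.getLimit();               // read by the child
    limits.setLimit(Resource.newInstance(4096, 4)); // refreshed by the parent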

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java
index c651878..de44bbe 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/ResourceUsage.java
@@ -50,11 +50,12 @@ public class ResourceUsage {
     writeLock = lock.writeLock();
 
     usages = new HashMap<String, UsageByLabel>();
+    usages.put(NL, new UsageByLabel(NL));
   }
 
   // Usage enum here to make implement cleaner
   private enum ResourceType {
-    USED(0), PENDING(1), AMUSED(2), RESERVED(3), HEADROOM(4);
+    USED(0), PENDING(1), AMUSED(2), RESERVED(3);
 
     private int idx;
 
@@ -71,7 +72,18 @@ public class ResourceUsage {
       resArr = new Resource[ResourceType.values().length];
       for (int i = 0; i < resArr.length; i++) {
         resArr[i] = Resource.newInstance(0, 0);
-      }
+      };
+    }
+    
+    @Override
+    public String toString() {
+      StringBuilder sb = new StringBuilder();
+      sb.append("{used=" + resArr[0] + "%, ");
+      sb.append("pending=" + resArr[1] + "%, ");
+      sb.append("am_used=" + resArr[2] + "%, ");
+      sb.append("reserved=" + resArr[3] + "%, ");
+      sb.append("headroom=" + resArr[4] + "%}");
+      return sb.toString();
     }
   }
 
@@ -181,41 +193,6 @@ public class ResourceUsage {
   }
 
   /*
-   * Headroom
-   */
-  public Resource getHeadroom() {
-    return getHeadroom(NL);
-  }
-
-  public Resource getHeadroom(String label) {
-    return _get(label, ResourceType.HEADROOM);
-  }
-
-  public void incHeadroom(String label, Resource res) {
-    _inc(label, ResourceType.HEADROOM, res);
-  }
-
-  public void incHeadroom(Resource res) {
-    incHeadroom(NL, res);
-  }
-
-  public void decHeadroom(Resource res) {
-    decHeadroom(NL, res);
-  }
-
-  public void decHeadroom(String label, Resource res) {
-    _dec(label, ResourceType.HEADROOM, res);
-  }
-
-  public void setHeadroom(Resource res) {
-    setHeadroom(NL, res);
-  }
-
-  public void setHeadroom(String label, Resource res) {
-    _set(label, ResourceType.HEADROOM, res);
-  }
-
-  /*
    * AM-Used
    */
   public Resource getAMUsed() {
@@ -309,4 +286,14 @@ public class ResourceUsage {
       writeLock.unlock();
     }
   }
+  
+  @Override
+  public String toString() {
+    try {
+      readLock.lock();
+      return usages.toString();
+    } finally {
+      readLock.unlock();
+    }
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
index eb7218b..d800709 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
@@ -40,9 +40,11 @@ import org.apache.hadoop.yarn.security.PrivilegedEntity.EntityType;
 import org.apache.hadoop.yarn.security.YarnAuthorizationProvider;
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
 import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
+import org.apache.hadoop.yarn.util.resource.Resources;
 
 import com.google.common.collect.Sets;
 
@@ -52,7 +54,7 @@ public abstract class AbstractCSQueue implements CSQueue {
   final String queueName;
   volatile int numContainers;
   
-  Resource minimumAllocation;
+  final Resource minimumAllocation;
   Resource maximumAllocation;
   QueueState state;
   final QueueMetrics metrics;
@@ -94,6 +96,7 @@ public abstract class AbstractCSQueue implements CSQueue {
             cs.getConf());
 
     this.csContext = cs;
+    this.minimumAllocation = csContext.getMinimumResourceCapability();
     
     // initialize ResourceUsage
     queueUsage = new ResourceUsage();
@@ -248,7 +251,6 @@ public abstract class AbstractCSQueue implements CSQueue {
     // After we setup labels, we can setup capacities
     setupConfigurableCapacities();
     
-    this.minimumAllocation = csContext.getMinimumResourceCapability();
     this.maximumAllocation =
         csContext.getConfiguration().getMaximumAllocationPerQueue(
             getQueuePath());
@@ -403,4 +405,22 @@ public abstract class AbstractCSQueue implements CSQueue {
     return csConf.getPreemptionDisabled(q.getQueuePath(),
                                         parentQ.getPreemptionDisabled());
   }
+  
+  protected Resource getCurrentResourceLimit(Resource clusterResource,
+      ResourceLimits currentResourceLimits) {
+    /*
+     * Queue's max available resource = min(my.max, my.limit)
+     * my.limit is set by my parent, considered used resource of my siblings
+     */
+    Resource queueMaxResource =
+        Resources.multiplyAndNormalizeDown(resourceCalculator, clusterResource,
+            queueCapacities.getAbsoluteMaximumCapacity(), minimumAllocation);
+    Resource queueCurrentResourceLimit =
+        Resources.min(resourceCalculator, clusterResource, queueMaxResource,
+            currentResourceLimits.getLimit());
+    queueCurrentResourceLimit =
+        Resources.roundDown(resourceCalculator, queueCurrentResourceLimit,
+            minimumAllocation);
+    return queueCurrentResourceLimit;
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
index 5cf38c1..0a60acc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueue.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEventType;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
@@ -189,10 +190,12 @@ extends org.apache.hadoop.yarn.server.resourcemanager.scheduler.Queue {
    * @param clusterResource the resource of the cluster.
    * @param node node on which resources are available
    * @param needToUnreserve assign container only if it can unreserve one first
+   * @param resourceLimits how much overall resource of this queue can use. 
    * @return the assignment
    */
-  public CSAssignment assignContainers(
-      Resource clusterResource, FiCaSchedulerNode node, boolean needToUnreserve);
+  public CSAssignment assignContainers(Resource clusterResource,
+      FiCaSchedulerNode node, boolean needToUnreserve,
+      ResourceLimits resourceLimits);
   
   /**
    * A container assigned to the queue has completed.
@@ -231,8 +234,10 @@ extends org.apache.hadoop.yarn.server.resourcemanager.scheduler.Queue {
    /**
    * Update the cluster resource for queues as we add/remove nodes
    * @param clusterResource the current cluster resource
+   * @param resourceLimits the current ResourceLimits
    */
-  public void updateClusterResource(Resource clusterResource);
+  public void updateClusterResource(Resource clusterResource,
+      ResourceLimits resourceLimits);
   
   /**
    * Get the {@link ActiveUsersManager} for the queue.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
index 865b0b4..1921195 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CSQueueUtils.java
@@ -225,52 +225,4 @@ class CSQueueUtils {
             )
         );
    }
-
-   public static float getAbsoluteMaxAvailCapacity(
-      ResourceCalculator resourceCalculator, Resource clusterResource, CSQueue queue) {
-      CSQueue parent = queue.getParent();
-      if (parent == null) {
-        return queue.getAbsoluteMaximumCapacity();
-      }
-
-      //Get my parent's max avail, needed to determine my own
-      float parentMaxAvail = getAbsoluteMaxAvailCapacity(
-        resourceCalculator, clusterResource, parent);
-      //...and as a resource
-      Resource parentResource = Resources.multiply(clusterResource, parentMaxAvail);
-
-      //check for no resources parent before dividing, if so, max avail is none
-      if (Resources.isInvalidDivisor(resourceCalculator, parentResource)) {
-        return 0.0f;
-      }
-      //sibling used is parent used - my used...
-      float siblingUsedCapacity = Resources.ratio(
-                 resourceCalculator,
-                 Resources.subtract(parent.getUsedResources(), queue.getUsedResources()),
-                 parentResource);
-      //my max avail is the lesser of my max capacity and what is unused from my parent
-      //by my siblings (if they are beyond their base capacity)
-      float maxAvail = Math.min(
-        queue.getMaximumCapacity(),
-        1.0f - siblingUsedCapacity);
-      //and, mutiply by parent to get absolute (cluster relative) value
-      float absoluteMaxAvail = maxAvail * parentMaxAvail;
-
-      if (LOG.isDebugEnabled()) {
-        LOG.debug("qpath " + queue.getQueuePath());
-        LOG.debug("parentMaxAvail " + parentMaxAvail);
-        LOG.debug("siblingUsedCapacity " + siblingUsedCapacity);
-        LOG.debug("getAbsoluteMaximumCapacity " + queue.getAbsoluteMaximumCapacity());
-        LOG.debug("maxAvail " + maxAvail);
-        LOG.debug("absoluteMaxAvail " + absoluteMaxAvail);
-      }
-
-      if (absoluteMaxAvail < 0.0f) {
-        absoluteMaxAvail = 0.0f;
-      } else if (absoluteMaxAvail > 1.0f) {
-        absoluteMaxAvail = 1.0f;
-      }
-
-      return absoluteMaxAvail;
-   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityHeadroomProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityHeadroomProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityHeadroomProvider.java
index f79d195..c6524c6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityHeadroomProvider.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityHeadroomProvider.java
@@ -26,32 +26,32 @@ public class CapacityHeadroomProvider {
   LeafQueue queue;
   FiCaSchedulerApp application;
   Resource required;
-  LeafQueue.QueueHeadroomInfo queueHeadroomInfo;
+  LeafQueue.QueueResourceLimitsInfo queueResourceLimitsInfo;
   
   public CapacityHeadroomProvider(
     LeafQueue.User user,
     LeafQueue queue,
     FiCaSchedulerApp application,
     Resource required,
-    LeafQueue.QueueHeadroomInfo queueHeadroomInfo) {
+    LeafQueue.QueueResourceLimitsInfo queueResourceLimitsInfo) {
     
     this.user = user;
     this.queue = queue;
     this.application = application;
     this.required = required;
-    this.queueHeadroomInfo = queueHeadroomInfo;
+    this.queueResourceLimitsInfo = queueResourceLimitsInfo;
     
   }
   
   public Resource getHeadroom() {
     
-    Resource queueMaxCap;
+    Resource queueCurrentLimit;
     Resource clusterResource;
-    synchronized (queueHeadroomInfo) {
-      queueMaxCap = queueHeadroomInfo.getQueueMaxCap();
-      clusterResource = queueHeadroomInfo.getClusterResource();
+    synchronized (queueResourceLimitsInfo) {
+      queueCurrentLimit = queueResourceLimitsInfo.getQueueCurrentLimit();
+      clusterResource = queueResourceLimitsInfo.getClusterResource();
     }
-    Resource headroom = queue.getHeadroom(user, queueMaxCap, 
+    Resource headroom = queue.getHeadroom(user, queueCurrentLimit, 
       clusterResource, application, required);
     
     // Corner case to deal with applications being slightly over-limit

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
index 6b9d846..28ce264 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
@@ -25,6 +25,7 @@ import java.util.Collection;
 import java.util.Comparator;
 import java.util.EnumSet;
 import java.util.HashMap;
+import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 import java.util.Map.Entry;
@@ -33,7 +34,6 @@ import java.util.Set;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
-import java.util.HashSet;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -56,6 +56,7 @@ import org.apache.hadoop.yarn.api.records.NodeId;
 import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.QueueInfo;
 import org.apache.hadoop.yarn.api.records.QueueUserACLInfo;
+import org.apache.hadoop.yarn.api.records.ReservationId;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.api.records.ResourceOption;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
@@ -84,12 +85,16 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmnode.UpdatedContainerInfo
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.Allocation;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.PreemptableResourceScheduler;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.Queue;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueNotFoundException;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplication;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerDynamicEditException;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.QueueMapping;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.QueueMapping.MappingType;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.QueueEntitlement;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.AppAddedSchedulerEvent;
@@ -112,11 +117,6 @@ import org.apache.hadoop.yarn.util.resource.Resources;
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
 
-import org.apache.hadoop.yarn.api.records.ReservationId;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.QueueEntitlement;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerDynamicEditException;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.Queue;
-
 @LimitedPrivate("yarn")
 @Evolving
 @SuppressWarnings("unchecked")
@@ -499,7 +499,8 @@ public class CapacityScheduler extends
     initializeQueueMappings();
 
     // Re-calculate headroom for active applications
-    root.updateClusterResource(clusterResource);
+    root.updateClusterResource(clusterResource, new ResourceLimits(
+        clusterResource));
 
     labelManager.reinitializeQueueLabels(getQueueToLabels());
     setQueueAcls(authorizer, queues);
@@ -990,7 +991,8 @@ public class CapacityScheduler extends
   private synchronized void updateNodeAndQueueResource(RMNode nm, 
       ResourceOption resourceOption) {
     updateNodeResource(nm, resourceOption);
-    root.updateClusterResource(clusterResource);
+    root.updateClusterResource(clusterResource, new ResourceLimits(
+        clusterResource));
   }
   
   /**
@@ -1060,7 +1062,8 @@ public class CapacityScheduler extends
       
       LeafQueue queue = ((LeafQueue)reservedApplication.getQueue());
       CSAssignment assignment = queue.assignContainers(clusterResource, node,
-          false);
+          false, new ResourceLimits(
+              clusterResource));
       
       RMContainer excessReservation = assignment.getExcessReservation();
       if (excessReservation != null) {
@@ -1084,7 +1087,8 @@ public class CapacityScheduler extends
           LOG.debug("Trying to schedule on node: " + node.getNodeName() +
               ", available: " + node.getAvailableResource());
         }
-        root.assignContainers(clusterResource, node, false);
+        root.assignContainers(clusterResource, node, false, new ResourceLimits(
+            clusterResource));
       }
     } else {
       LOG.info("Skipping scheduling since node " + node.getNodeID() + 
@@ -1205,7 +1209,8 @@ public class CapacityScheduler extends
         usePortForNodeName, nodeManager.getNodeLabels());
     this.nodes.put(nodeManager.getNodeID(), schedulerNode);
     Resources.addTo(clusterResource, nodeManager.getTotalCapability());
-    root.updateClusterResource(clusterResource);
+    root.updateClusterResource(clusterResource, new ResourceLimits(
+        clusterResource));
     int numNodes = numNodeManagers.incrementAndGet();
     updateMaximumAllocation(schedulerNode, true);
     
@@ -1234,7 +1239,8 @@ public class CapacityScheduler extends
       return;
     }
     Resources.subtractFrom(clusterResource, node.getRMNode().getTotalCapability());
-    root.updateClusterResource(clusterResource);
+    root.updateClusterResource(clusterResource, new ResourceLimits(
+        clusterResource));
     int numNodes = numNodeManagers.decrementAndGet();
 
     if (scheduleAsynchronously && numNodes == 0) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
index 38d4712..3910ac8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
@@ -62,6 +62,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEven
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerState;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceUsage;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerAppUtils;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt;
@@ -115,7 +116,10 @@ public class LeafQueue extends AbstractCSQueue {
   // absolute capacity as a resource (based on cluster resource)
   private Resource absoluteCapacityResource = Resources.none();
   
-  private final QueueHeadroomInfo queueHeadroomInfo = new QueueHeadroomInfo();
+  private final QueueResourceLimitsInfo queueResourceLimitsInfo =
+      new QueueResourceLimitsInfo();
+  
+  private volatile ResourceLimits currentResourceLimits = null;
   
   public LeafQueue(CapacitySchedulerContext cs, 
       String queueName, CSQueue parent, CSQueue old) throws IOException {
@@ -145,13 +149,14 @@ public class LeafQueue extends AbstractCSQueue {
     this.lastClusterResource = clusterResource;
     updateAbsoluteCapacityResource(clusterResource);
     
+    this.currentResourceLimits = new ResourceLimits(clusterResource);
+    
     // Initialize headroom info, also used for calculating application 
     // master resource limits.  Since this happens during queue initialization
     // and all queues may not be realized yet, we'll use (optimistic) 
     // absoluteMaxCapacity (it will be replaced with the more accurate 
     // absoluteMaxAvailCapacity during headroom/userlimit/allocation events)
-    updateHeadroomInfo(clusterResource,
-        queueCapacities.getAbsoluteMaximumCapacity());
+    computeQueueCurrentLimitAndSetHeadroomInfo(clusterResource);
 
     CapacitySchedulerConfiguration conf = csContext.getConfiguration();
     userLimit = conf.getUserLimit(getQueuePath());
@@ -544,12 +549,12 @@ public class LeafQueue extends AbstractCSQueue {
       * become busy.
       *
       */
-     Resource queueMaxCap;
-     synchronized (queueHeadroomInfo) {
-       queueMaxCap = queueHeadroomInfo.getQueueMaxCap();
+     Resource queueCurrentLimit;
+     synchronized (queueResourceLimitsInfo) {
+       queueCurrentLimit = queueResourceLimitsInfo.getQueueCurrentLimit();
      }
      Resource queueCap = Resources.max(resourceCalculator, lastClusterResource,
-       absoluteCapacityResource, queueMaxCap);
+       absoluteCapacityResource, queueCurrentLimit);
      return Resources.multiplyAndNormalizeUp( 
           resourceCalculator,
           queueCap, 
@@ -733,8 +738,10 @@ public class LeafQueue extends AbstractCSQueue {
   
   @Override
   public synchronized CSAssignment assignContainers(Resource clusterResource,
-      FiCaSchedulerNode node, boolean needToUnreserve) {
-
+      FiCaSchedulerNode node, boolean needToUnreserve,
+      ResourceLimits currentResourceLimits) {
+    this.currentResourceLimits = currentResourceLimits;
+    
     if(LOG.isDebugEnabled()) {
       LOG.debug("assignContainers: node=" + node.getNodeName()
         + " #applications=" + activeApplications.size());
@@ -876,9 +883,9 @@ public class LeafQueue extends AbstractCSQueue {
 
   }
 
-  private synchronized CSAssignment 
-  assignReservedContainer(FiCaSchedulerApp application, 
-      FiCaSchedulerNode node, RMContainer rmContainer, Resource clusterResource) {
+  private synchronized CSAssignment assignReservedContainer(
+      FiCaSchedulerApp application, FiCaSchedulerNode node,
+      RMContainer rmContainer, Resource clusterResource) {
     // Do we still need this reservation?
     Priority priority = rmContainer.getReservedPriority();
     if (application.getTotalRequiredResources(priority) == 0) {
@@ -895,13 +902,13 @@ public class LeafQueue extends AbstractCSQueue {
     return new CSAssignment(Resources.none(), NodeType.NODE_LOCAL);
   }
   
-  protected Resource getHeadroom(User user, Resource queueMaxCap,
+  protected Resource getHeadroom(User user, Resource queueCurrentLimit,
       Resource clusterResource, FiCaSchedulerApp application, Resource required) {
-    return getHeadroom(user, queueMaxCap, clusterResource,
+    return getHeadroom(user, queueCurrentLimit, clusterResource,
 	  computeUserLimit(application, clusterResource, required, user, null));
   }
   
-  private Resource getHeadroom(User user, Resource queueMaxCap,
+  private Resource getHeadroom(User user, Resource currentResourceLimit,
       Resource clusterResource, Resource userLimit) {
     /** 
      * Headroom is:
@@ -923,8 +930,11 @@ public class LeafQueue extends AbstractCSQueue {
     Resource headroom = 
       Resources.min(resourceCalculator, clusterResource,
         Resources.subtract(userLimit, user.getUsed()),
-        Resources.subtract(queueMaxCap, queueUsage.getUsed())
+        Resources.subtract(currentResourceLimit, queueUsage.getUsed())
         );
+    // Normalize it before return
+    headroom =
+        Resources.roundDown(resourceCalculator, headroom, minimumAllocation);
     return headroom;
   }
 
@@ -1012,23 +1022,17 @@ public class LeafQueue extends AbstractCSQueue {
     return canAssign;
   }
   
-  private Resource updateHeadroomInfo(Resource clusterResource, 
-      float absoluteMaxAvailCapacity) {
-  
-    Resource queueMaxCap = 
-      Resources.multiplyAndNormalizeDown(
-          resourceCalculator, 
-          clusterResource, 
-          absoluteMaxAvailCapacity,
-          minimumAllocation);
-
-    synchronized (queueHeadroomInfo) {
-      queueHeadroomInfo.setQueueMaxCap(queueMaxCap);
-      queueHeadroomInfo.setClusterResource(clusterResource);
-    }
-    
-    return queueMaxCap;
+  private Resource computeQueueCurrentLimitAndSetHeadroomInfo(
+      Resource clusterResource) {
+    Resource queueCurrentResourceLimit =
+        getCurrentResourceLimit(clusterResource, currentResourceLimits);
     
+    synchronized (queueResourceLimitsInfo) {
+      queueResourceLimitsInfo.setQueueCurrentLimit(queueCurrentResourceLimit);
+      queueResourceLimitsInfo.setClusterResource(clusterResource);
+    }
+
+    return queueCurrentResourceLimit;
   }
 
   @Lock({LeafQueue.class, FiCaSchedulerApp.class})
@@ -1043,28 +1047,22 @@ public class LeafQueue extends AbstractCSQueue {
         computeUserLimit(application, clusterResource, required,
             queueUser, requestedLabels);
 
-    //Max avail capacity needs to take into account usage by ancestor-siblings
-    //which are greater than their base capacity, so we are interested in "max avail"
-    //capacity
-    float absoluteMaxAvailCapacity = CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, this);
-    
-    Resource queueMaxCap = 
-      updateHeadroomInfo(clusterResource, absoluteMaxAvailCapacity);
+    Resource currentResourceLimit =
+        computeQueueCurrentLimitAndSetHeadroomInfo(clusterResource);
     
     Resource headroom =
-        getHeadroom(queueUser, queueMaxCap, clusterResource, userLimit);
+        getHeadroom(queueUser, currentResourceLimit, clusterResource, userLimit);
     
     if (LOG.isDebugEnabled()) {
       LOG.debug("Headroom calculation for user " + user + ": " + 
           " userLimit=" + userLimit + 
-          " queueMaxCap=" + queueMaxCap + 
+          " queueMaxAvailRes=" + currentResourceLimit + 
           " consumed=" + queueUser.getUsed() + 
           " headroom=" + headroom);
     }
     
     CapacityHeadroomProvider headroomProvider = new CapacityHeadroomProvider(
-      queueUser, this, application, required, queueHeadroomInfo);
+      queueUser, this, application, required, queueResourceLimitsInfo);
     
     application.setHeadroomProvider(headroomProvider);
 
@@ -1249,7 +1247,7 @@ public class LeafQueue extends AbstractCSQueue {
         application.getResourceRequest(priority, node.getNodeName());
     if (nodeLocalResourceRequest != null) {
       assigned = 
-          assignNodeLocalContainers(clusterResource, nodeLocalResourceRequest, 
+          assignNodeLocalContainers(clusterResource, nodeLocalResourceRequest,
               node, application, priority, reservedContainer, needToUnreserve); 
       if (Resources.greaterThan(resourceCalculator, clusterResource, 
           assigned, Resources.none())) {
@@ -1265,8 +1263,8 @@ public class LeafQueue extends AbstractCSQueue {
         return SKIP_ASSIGNMENT;
       }
       
-      assigned = 
-          assignRackLocalContainers(clusterResource, rackLocalResourceRequest, 
+      assigned =
+          assignRackLocalContainers(clusterResource, rackLocalResourceRequest,
               node, application, priority, reservedContainer, needToUnreserve);
       if (Resources.greaterThan(resourceCalculator, clusterResource, 
           assigned, Resources.none())) {
@@ -1282,10 +1280,10 @@ public class LeafQueue extends AbstractCSQueue {
         return SKIP_ASSIGNMENT;
       }
 
-      return new CSAssignment(
-          assignOffSwitchContainers(clusterResource, offSwitchResourceRequest,
-              node, application, priority, reservedContainer, needToUnreserve), 
-              NodeType.OFF_SWITCH);
+      return new CSAssignment(assignOffSwitchContainers(clusterResource,
+          offSwitchResourceRequest, node, application, priority,
+          reservedContainer, needToUnreserve),
+          NodeType.OFF_SWITCH);
     }
     
     return SKIP_ASSIGNMENT;
@@ -1373,7 +1371,7 @@ public class LeafQueue extends AbstractCSQueue {
       ResourceRequest nodeLocalResourceRequest, FiCaSchedulerNode node,
       FiCaSchedulerApp application, Priority priority,
       RMContainer reservedContainer, boolean needToUnreserve) {
-    if (canAssign(application, priority, node, NodeType.NODE_LOCAL, 
+    if (canAssign(application, priority, node, NodeType.NODE_LOCAL,
         reservedContainer)) {
       return assignContainer(clusterResource, node, application, priority,
           nodeLocalResourceRequest, NodeType.NODE_LOCAL, reservedContainer,
@@ -1383,9 +1381,9 @@ public class LeafQueue extends AbstractCSQueue {
     return Resources.none();
   }
 
-  private Resource assignRackLocalContainers(
-      Resource clusterResource, ResourceRequest rackLocalResourceRequest,  
-      FiCaSchedulerNode node, FiCaSchedulerApp application, Priority priority,
+  private Resource assignRackLocalContainers(Resource clusterResource,
+      ResourceRequest rackLocalResourceRequest, FiCaSchedulerNode node,
+      FiCaSchedulerApp application, Priority priority,
       RMContainer reservedContainer, boolean needToUnreserve) {
     if (canAssign(application, priority, node, NodeType.RACK_LOCAL, 
         reservedContainer)) {
@@ -1397,9 +1395,9 @@ public class LeafQueue extends AbstractCSQueue {
     return Resources.none();
   }
 
-  private Resource assignOffSwitchContainers(
-      Resource clusterResource, ResourceRequest offSwitchResourceRequest,
-      FiCaSchedulerNode node, FiCaSchedulerApp application, Priority priority, 
+  private Resource assignOffSwitchContainers(Resource clusterResource,
+      ResourceRequest offSwitchResourceRequest, FiCaSchedulerNode node,
+      FiCaSchedulerApp application, Priority priority,
       RMContainer reservedContainer, boolean needToUnreserve) {
     if (canAssign(application, priority, node, NodeType.OFF_SWITCH, 
         reservedContainer)) {
@@ -1753,15 +1751,16 @@ public class LeafQueue extends AbstractCSQueue {
   }
 
   @Override
-  public synchronized void updateClusterResource(Resource clusterResource) {
+  public synchronized void updateClusterResource(Resource clusterResource,
+      ResourceLimits currentResourceLimits) {
+    this.currentResourceLimits = currentResourceLimits;
     lastClusterResource = clusterResource;
     updateAbsoluteCapacityResource(clusterResource);
     
     // Update headroom info based on new cluster resource value
     // absoluteMaxCapacity now,  will be replaced with absoluteMaxAvailCapacity
     // during allocation
-    updateHeadroomInfo(clusterResource,
-        queueCapacities.getAbsoluteMaximumCapacity());
+    computeQueueCurrentLimitAndSetHeadroomInfo(clusterResource);
     
     // Update metrics
     CSQueueUtils.updateQueueStatistics(
@@ -1951,16 +1950,16 @@ public class LeafQueue extends AbstractCSQueue {
    * Holds shared values used by all applications in
    * the queue to calculate headroom on demand
    */
-  static class QueueHeadroomInfo {
-    private Resource queueMaxCap;
+  static class QueueResourceLimitsInfo {
+    private Resource queueCurrentLimit;
     private Resource clusterResource;
     
-    public void setQueueMaxCap(Resource queueMaxCap) {
-      this.queueMaxCap = queueMaxCap;
+    public void setQueueCurrentLimit(Resource currentLimit) {
+      this.queueCurrentLimit = currentLimit;
     }
     
-    public Resource getQueueMaxCap() {
-      return queueMaxCap;
+    public Resource getQueueCurrentLimit() {
+      return queueCurrentLimit;
     }
     
     public void setClusterResource(Resource clusterResource) {
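
The headroom computed in this patch reduces to
min(userLimit - user.used, queueCurrentLimit - queue.used), rounded down to
a multiple of the minimum allocation. A worked example with illustrative
numbers:

    // userLimit = 20 GB, user.used = 12 GB          -> 8 GB of user room
    // queueCurrentLimit = 50 GB, queue.used = 45 GB -> 5 GB of queue room
    // headroom = roundDown(min(8 GB, 5 GB), 1 GB)   = 5 GB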

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
index a26b0aa..7feaa15 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java
@@ -56,6 +56,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEven
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerState;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
@@ -378,8 +379,9 @@ public class ParentQueue extends AbstractCSQueue {
   }
 
   @Override
-  public synchronized CSAssignment assignContainers(
-      Resource clusterResource, FiCaSchedulerNode node, boolean needToUnreserve) {
+  public synchronized CSAssignment assignContainers(Resource clusterResource,
+      FiCaSchedulerNode node, boolean needToUnreserve,
+      ResourceLimits resourceLimits) {
     CSAssignment assignment = 
         new CSAssignment(Resources.createResource(0, 0), NodeType.NODE_LOCAL);
     Set<String> nodeLabels = node.getLabels();
@@ -408,7 +410,8 @@ public class ParentQueue extends AbstractCSQueue {
       
       // Schedule
       CSAssignment assignedToChild = 
-          assignContainersToChildQueues(clusterResource, node, localNeedToUnreserve | needToUnreserve);
+          assignContainersToChildQueues(clusterResource, node,
+              localNeedToUnreserve | needToUnreserve, resourceLimits);
       assignment.setType(assignedToChild.getType());
       
       // Done if no child-queue assigned anything
@@ -530,8 +533,29 @@ public class ParentQueue extends AbstractCSQueue {
             node.getAvailableResource(), minimumAllocation);
   }
   
-  private synchronized CSAssignment assignContainersToChildQueues(Resource cluster, 
-      FiCaSchedulerNode node, boolean needToUnreserve) {
+  private ResourceLimits getResourceLimitsOfChild(CSQueue child,
+      Resource clusterResource, ResourceLimits myLimits) {
+    /*
+     * Set head-room of a given child, limit =
+     * min(minimum-of-limit-of-this-queue-and-ancestors, this.max) - this.used
+     * + child.used. To avoid any of this queue's and its ancestors' limit
+     * being violated
+     */
+    Resource myCurrentLimit =
+        getCurrentResourceLimit(clusterResource, myLimits);
+    // My available resource = my-current-limit - my-used-resource
+    Resource myMaxAvailableResource = Resources.subtract(myCurrentLimit,
+        getUsedResources());
+    // Child's limit = my-available-resource + resource-already-used-by-child
+    Resource childLimit =
+        Resources.add(myMaxAvailableResource, child.getUsedResources());
+    
+    return new ResourceLimits(childLimit);
+  }
+  
+  private synchronized CSAssignment assignContainersToChildQueues(
+      Resource cluster, FiCaSchedulerNode node, boolean needToUnreserve,
+      ResourceLimits limits) {
     CSAssignment assignment = 
         new CSAssignment(Resources.createResource(0, 0), NodeType.NODE_LOCAL);
     
@@ -544,7 +568,14 @@ public class ParentQueue extends AbstractCSQueue {
         LOG.debug("Trying to assign to queue: " + childQueue.getQueuePath()
           + " stats: " + childQueue);
       }
-      assignment = childQueue.assignContainers(cluster, node, needToUnreserve);
+      
+      // Get ResourceLimits of child queue before assign containers
+      ResourceLimits childLimits =
+          getResourceLimitsOfChild(childQueue, cluster, limits);
+      
+      assignment =
+          childQueue.assignContainers(cluster, node, needToUnreserve,
+              childLimits);
       if(LOG.isDebugEnabled()) {
         LOG.debug("Assigned to queue: " + childQueue.getQueuePath() +
           " stats: " + childQueue + " --> " + 
@@ -638,10 +669,14 @@ public class ParentQueue extends AbstractCSQueue {
   }
 
   @Override
-  public synchronized void updateClusterResource(Resource clusterResource) {
+  public synchronized void updateClusterResource(Resource clusterResource,
+      ResourceLimits resourceLimits) {
     // Update all children
     for (CSQueue childQueue : childQueues) {
-      childQueue.updateClusterResource(clusterResource);
+      // Get ResourceLimits of child queue before assign containers
+      ResourceLimits childLimits =
+          getResourceLimitsOfChild(childQueue, clusterResource, resourceLimits);     
+      childQueue.updateClusterResource(clusterResource, childLimits);
     }
     
     // Update metrics
@@ -728,4 +763,4 @@ public class ParentQueue extends AbstractCSQueue {
   public synchronized int getNumApplications() {
     return numApplications;
   }
-}
+}
\ No newline at end of file
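
The child-limit computation added here replaces the recursive
getAbsoluteMaxAvailCapacity walk deleted from CSQueueUtils: each parent
hands every child limit = my-current-limit - my-used + child-used, so a
child can grow into whatever its siblings leave unused without the
parent's own limit ever being exceeded. A worked example with illustrative
numbers:

    // myCurrentLimit (my max, capped by my parent's limit) = 80 GB
    // used by all of my children                           = 50 GB
    // used by this particular child                        = 20 GB
    // child limit = 80 - 50 + 20 = 50 GB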

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
index e1b8a3d..494f5a4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
@@ -23,14 +23,12 @@ import java.security.PrivilegedExceptionAction;
 import java.util.ArrayList;
 import java.util.List;
 
-import org.junit.Assert;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
 import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
 import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest;
-import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterResponse;
 import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
@@ -45,6 +43,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttempt;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptState;
 import org.apache.hadoop.yarn.util.Records;
+import org.junit.Assert;
 
 public class MockAM {
 
@@ -53,6 +52,7 @@ public class MockAM {
   private RMContext context;
   private ApplicationMasterProtocol amRMProtocol;
   private UserGroupInformation ugi;
+  private volatile AllocateResponse lastResponse;
 
   private final List<ResourceRequest> requests = new ArrayList<ResourceRequest>();
   private final List<ContainerId> releases = new ArrayList<ContainerId>();
@@ -223,7 +223,8 @@ public class MockAM {
         context.getRMApps().get(attemptId.getApplicationId())
             .getRMAppAttempt(attemptId).getAMRMToken();
     ugi.addTokenIdentifier(token.decodeIdentifier());
-    return doAllocateAs(ugi, allocateRequest);
+    lastResponse = doAllocateAs(ugi, allocateRequest);
+    return lastResponse;
   }
 
   public AllocateResponse doAllocateAs(UserGroupInformation ugi,
@@ -240,6 +241,10 @@ public class MockAM {
       throw (Exception) e.getCause();
     }
   }
+  
+  public AllocateResponse doHeartbeat() throws Exception {
+    return allocate(null, null);
+  }
 
   public void unregisterAppAttempt() throws Exception {
     waitForState(RMAppAttemptState.RUNNING);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestResourceUsage.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestResourceUsage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestResourceUsage.java
index b6dfacb..f0bf892 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestResourceUsage.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestResourceUsage.java
@@ -38,7 +38,7 @@ public class TestResourceUsage {
   @Parameterized.Parameters
   public static Collection<String[]> getParameters() {
     return Arrays.asList(new String[][] { { "Pending" }, { "Used" },
-        { "Headroom" }, { "Reserved" }, { "AMUsed" } });
+        { "Reserved" }, { "AMUsed" } });
   }
 
   public TestResourceUsage(String suffix) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
index 81a5aad..8cad057 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationLimits.java
@@ -21,15 +21,10 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.mockito.Matchers.any;
-import static org.mockito.Matchers.eq;
 import static org.mockito.Mockito.doReturn;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.spy;
-import static org.mockito.Mockito.times;
-import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
-import org.mockito.Matchers;
-import org.mockito.Mockito;
 
 import java.io.IOException;
 import java.util.ArrayList;
@@ -42,8 +37,8 @@ import java.util.concurrent.ConcurrentMap;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.security.UserGroupInformation;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.Resource;
@@ -53,9 +48,10 @@ import org.apache.hadoop.yarn.factories.RecordFactory;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
-import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager;
 import org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager;
 import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
@@ -63,7 +59,8 @@ import org.apache.hadoop.yarn.util.resource.Resources;
 import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
-import org.junit.Ignore;
+import org.mockito.Matchers;
+import org.mockito.Mockito;
 
 public class TestApplicationLimits {
   
@@ -171,7 +168,9 @@ public class TestApplicationLimits {
     // am limit is 4G initially (based on the queue absolute capacity)
     // when there is only 1 user, and drops to 2G (the userlimit) when there
     // is a second user
-    queue.updateClusterResource(Resource.newInstance(80 * GB, 40));
+    Resource clusterResource = Resource.newInstance(80 * GB, 40);
+    queue.updateClusterResource(clusterResource, new ResourceLimits(
+        clusterResource));
     
     ActiveUsersManager activeUsersManager = mock(ActiveUsersManager.class);
     when(queue.getActiveUsersManager()).thenReturn(activeUsersManager);
@@ -289,7 +288,8 @@ public class TestApplicationLimits {
     
     // Add some nodes to the cluster & test new limits
     clusterResource = Resources.createResource(120 * 16 * GB);
-    root.updateClusterResource(clusterResource);
+    root.updateClusterResource(clusterResource, new ResourceLimits(
+        clusterResource));
     
     assertEquals(queue.getAMResourceLimit(), Resource.newInstance(192*GB, 1));
     assertEquals(queue.getUserAMResourceLimit(), 
@@ -611,7 +611,8 @@ public class TestApplicationLimits {
     app_0_0.updateResourceRequests(app_0_0_requests);
 
     // Schedule to compute 
-    queue.assignContainers(clusterResource, node_0, false);
+    queue.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+        clusterResource));
     Resource expectedHeadroom = Resources.createResource(10*16*GB, 1);
     assertEquals(expectedHeadroom, app_0_0.getHeadroom());
 
@@ -630,7 +631,8 @@ public class TestApplicationLimits {
     app_0_1.updateResourceRequests(app_0_1_requests);
 
     // Schedule to compute 
-    queue.assignContainers(clusterResource, node_0, false); // Schedule to compute
+    queue.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+        clusterResource)); // Schedule to compute
     assertEquals(expectedHeadroom, app_0_0.getHeadroom());
     assertEquals(expectedHeadroom, app_0_1.getHeadroom());// no change
     
@@ -649,7 +651,8 @@ public class TestApplicationLimits {
     app_1_0.updateResourceRequests(app_1_0_requests);
     
     // Schedule to compute 
-    queue.assignContainers(clusterResource, node_0, false); // Schedule to compute
+    queue.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+        clusterResource)); // Schedule to compute
     expectedHeadroom = Resources.createResource(10*16*GB / 2, 1); // changes
     assertEquals(expectedHeadroom, app_0_0.getHeadroom());
     assertEquals(expectedHeadroom, app_0_1.getHeadroom());
@@ -657,7 +660,8 @@ public class TestApplicationLimits {
 
     // Now reduce cluster size and check for the smaller headroom
     clusterResource = Resources.createResource(90*16*GB);
-    queue.assignContainers(clusterResource, node_0, false); // Schedule to compute
+    queue.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+        clusterResource)); // Schedule to compute
     expectedHeadroom = Resources.createResource(9*16*GB / 2, 1); // changes
     assertEquals(expectedHeadroom, app_0_0.getHeadroom());
     assertEquals(expectedHeadroom, app_0_1.getHeadroom());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCSQueueUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCSQueueUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCSQueueUtils.java
deleted file mode 100644
index 5135ba9..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCSQueueUtils.java
+++ /dev/null
@@ -1,250 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity;
-
-import static org.junit.Assert.assertEquals;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.yarn.api.records.Resource;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
-import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
-import org.apache.hadoop.yarn.util.resource.DominantResourceCalculator;
-import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
-import org.apache.hadoop.yarn.util.resource.Resources;
-import org.junit.Test;
-
-public class TestCSQueueUtils {
-
-  private static final Log LOG = LogFactory.getLog(TestCSQueueUtils.class);
-
-  final static int GB = 1024;
-
-  @Test
-  public void testAbsoluteMaxAvailCapacityInvalidDivisor() throws Exception {
-    runInvalidDivisorTest(false);
-    runInvalidDivisorTest(true);
-  }
-    
-  public void runInvalidDivisorTest(boolean useDominant) throws Exception {
-  
-    ResourceCalculator resourceCalculator;
-    Resource clusterResource;
-    if (useDominant) {
-      resourceCalculator = new DominantResourceCalculator();
-      clusterResource = Resources.createResource(10, 0);
-    } else {
-      resourceCalculator = new DefaultResourceCalculator();
-      clusterResource = Resources.createResource(0, 99);
-    }
-    
-    YarnConfiguration conf = new YarnConfiguration();
-    CapacitySchedulerConfiguration csConf = new CapacitySchedulerConfiguration();
-  
-    CapacitySchedulerContext csContext = mock(CapacitySchedulerContext.class);
-    when(csContext.getConf()).thenReturn(conf);
-    when(csContext.getConfiguration()).thenReturn(csConf);
-    when(csContext.getClusterResource()).thenReturn(clusterResource);
-    when(csContext.getResourceCalculator()).thenReturn(resourceCalculator);
-    when(csContext.getMinimumResourceCapability()).
-        thenReturn(Resources.createResource(GB, 1));
-    when(csContext.getMaximumResourceCapability()).
-        thenReturn(Resources.createResource(0, 0));
-    RMContext rmContext = TestUtils.getMockRMContext();
-    when(csContext.getRMContext()).thenReturn(rmContext);
-  
-    final String L1Q1 = "L1Q1";
-    csConf.setQueues(CapacitySchedulerConfiguration.ROOT, new String[] {L1Q1});
-    
-    final String L1Q1P = CapacitySchedulerConfiguration.ROOT + "." + L1Q1;
-    csConf.setCapacity(L1Q1P, 90);
-    csConf.setMaximumCapacity(L1Q1P, 90);
-    
-    ParentQueue root = new ParentQueue(csContext, 
-        CapacitySchedulerConfiguration.ROOT, null, null);
-    LeafQueue l1q1 = new LeafQueue(csContext, L1Q1, root, null);
-    
-    LOG.info("t1 root " + CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, root));
-    
-    LOG.info("t1 l1q1 " + CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l1q1));
-    
-    assertEquals(0.0f, CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l1q1), 0.000001f);
-    
-  }
-  
-  @Test
-  public void testAbsoluteMaxAvailCapacityNoUse() throws Exception {
-    
-    ResourceCalculator resourceCalculator = new DefaultResourceCalculator();
-    Resource clusterResource = Resources.createResource(100 * 16 * GB, 100 * 32);
-    
-    YarnConfiguration conf = new YarnConfiguration();
-    CapacitySchedulerConfiguration csConf = new CapacitySchedulerConfiguration();
-    
-    CapacitySchedulerContext csContext = mock(CapacitySchedulerContext.class);
-    when(csContext.getConf()).thenReturn(conf);
-    when(csContext.getConfiguration()).thenReturn(csConf);
-    when(csContext.getClusterResource()).thenReturn(clusterResource);
-    when(csContext.getResourceCalculator()).thenReturn(resourceCalculator);
-    when(csContext.getMinimumResourceCapability()).
-        thenReturn(Resources.createResource(GB, 1));
-    when(csContext.getMaximumResourceCapability()).
-        thenReturn(Resources.createResource(16*GB, 32));
-    RMContext rmContext = TestUtils.getMockRMContext();
-    when(csContext.getRMContext()).thenReturn(rmContext);
-    
-    final String L1Q1 = "L1Q1";
-    csConf.setQueues(CapacitySchedulerConfiguration.ROOT, new String[] {L1Q1});
-    
-    final String L1Q1P = CapacitySchedulerConfiguration.ROOT + "." + L1Q1;
-    csConf.setCapacity(L1Q1P, 90);
-    csConf.setMaximumCapacity(L1Q1P, 90);
-    
-    ParentQueue root = new ParentQueue(csContext, 
-        CapacitySchedulerConfiguration.ROOT, null, null);
-    LeafQueue l1q1 = new LeafQueue(csContext, L1Q1, root, null);
-    
-    LOG.info("t1 root " + CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, root));
-    
-    LOG.info("t1 l1q1 " + CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l1q1));
-    
-    assertEquals(1.0f, CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, root), 0.000001f);
-    
-    assertEquals(0.9f, CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l1q1), 0.000001f);
-    
-  }
-  
-  @Test
-  public void testAbsoluteMaxAvailCapacityWithUse() throws Exception {
-    
-    ResourceCalculator resourceCalculator = new DefaultResourceCalculator();
-    Resource clusterResource = Resources.createResource(100 * 16 * GB, 100 * 32);
-    
-    YarnConfiguration conf = new YarnConfiguration();
-    CapacitySchedulerConfiguration csConf = new CapacitySchedulerConfiguration();
-    
-    CapacitySchedulerContext csContext = mock(CapacitySchedulerContext.class);
-    when(csContext.getConf()).thenReturn(conf);
-    when(csContext.getConfiguration()).thenReturn(csConf);
-    when(csContext.getClusterResource()).thenReturn(clusterResource);
-    when(csContext.getResourceCalculator()).thenReturn(resourceCalculator);
-    when(csContext.getMinimumResourceCapability()).
-        thenReturn(Resources.createResource(GB, 1));
-    when(csContext.getMaximumResourceCapability()).
-        thenReturn(Resources.createResource(16*GB, 32));
-    
-    RMContext rmContext = TestUtils.getMockRMContext();
-    when(csContext.getRMContext()).thenReturn(rmContext);
-    
-    final String L1Q1 = "L1Q1";
-    final String L1Q2 = "L1Q2";
-    final String L2Q1 = "L2Q1";
-    final String L2Q2 = "L2Q2";
-    csConf.setQueues(CapacitySchedulerConfiguration.ROOT, new String[] {L1Q1, L1Q2,
-                     L2Q1, L2Q2});
-    
-    final String L1Q1P = CapacitySchedulerConfiguration.ROOT + "." + L1Q1;
-    csConf.setCapacity(L1Q1P, 80);
-    csConf.setMaximumCapacity(L1Q1P, 80);
-    
-    final String L1Q2P = CapacitySchedulerConfiguration.ROOT + "." + L1Q2;
-    csConf.setCapacity(L1Q2P, 20);
-    csConf.setMaximumCapacity(L1Q2P, 100);
-    
-    final String L2Q1P = L1Q1P + "." + L2Q1;
-    csConf.setCapacity(L2Q1P, 50);
-    csConf.setMaximumCapacity(L2Q1P, 50);
-    
-    final String L2Q2P = L1Q1P + "." + L2Q2;
-    csConf.setCapacity(L2Q2P, 50);
-    csConf.setMaximumCapacity(L2Q2P, 50);
-    
-    float result;
-    
-    ParentQueue root = new ParentQueue(csContext, 
-        CapacitySchedulerConfiguration.ROOT, null, null);
-    
-    LeafQueue l1q1 = new LeafQueue(csContext, L1Q1, root, null);
-    LeafQueue l1q2 = new LeafQueue(csContext, L1Q2, root, null);
-    LeafQueue l2q2 = new LeafQueue(csContext, L2Q2, l1q1, null);
-    LeafQueue l2q1 = new LeafQueue(csContext, L2Q1, l1q1, null);
-    
-    //no usage, all based on maxCapacity (prior behavior)
-    result = CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l2q2);
-    assertEquals( 0.4f, result, 0.000001f);
-    LOG.info("t2 l2q2 " + result);
-    
-    //some usage, but below the base capacity
-    root.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.1f));
-    l1q2.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.1f));
-    result = CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l2q2);
-    assertEquals( 0.4f, result, 0.000001f);
-    LOG.info("t2 l2q2 " + result);
-    
-    //usage gt base on parent sibling
-    root.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.3f));
-    l1q2.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.3f));
-    result = CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l2q2);
-    assertEquals( 0.3f, result, 0.000001f);
-    LOG.info("t2 l2q2 " + result);
-    
-    //same as last, but with usage also on direct parent
-    root.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.1f));
-    l1q1.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.1f));
-    result = CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l2q2);
-    assertEquals( 0.3f, result, 0.000001f);
-    LOG.info("t2 l2q2 " + result);
-    
-    //add to direct sibling, below the threshold of effect at present
-    root.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.2f));
-    l1q1.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.2f));
-    l2q1.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.2f));
-    result = CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l2q2);
-    assertEquals( 0.3f, result, 0.000001f);
-    LOG.info("t2 l2q2 " + result);
-    
-    //add to direct sibling, now above the threshold of effect
-    //(it's cumulative with prior tests)
-    root.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.2f));
-    l1q1.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.2f));
-    l2q1.getQueueResourceUsage().incUsed(Resources.multiply(clusterResource, 0.2f));
-    result = CSQueueUtils.getAbsoluteMaxAvailCapacity(
-      resourceCalculator, clusterResource, l2q2);
-    assertEquals( 0.1f, result, 0.000001f);
-    LOG.info("t2 l2q2 " + result);
-    
-    
-  }
-
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
index fabf47d..83ab104 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
@@ -87,6 +87,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization.MockRMW
 import org.apache.hadoop.yarn.server.resourcemanager.TestAMAuthorization.MyContainerManager;
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.NullRMNodeLabelsManager;
 import org.apache.hadoop.yarn.server.resourcemanager.nodelabels.RMNodeLabelsManager;
+import org.apache.hadoop.yarn.server.resourcemanager.recovery.MemoryRMStateStore;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;
@@ -359,7 +360,8 @@ public class TestCapacityScheduler {
     resourceManager.getResourceScheduler().handle(nodeUpdate);
   }
   
-  private void setupQueueConfiguration(CapacitySchedulerConfiguration conf) {
+  private CapacitySchedulerConfiguration setupQueueConfiguration(
+      CapacitySchedulerConfiguration conf) {
     
     // Define top-level queues
     conf.setQueues(CapacitySchedulerConfiguration.ROOT, new String[] {"a", "b"});
@@ -383,6 +385,7 @@ public class TestCapacityScheduler {
     conf.setUserLimitFactor(B3, 100.0f);
 
     LOG.info("Setup top-level queues a and b");
+    return conf;
   }
   
   @Test
@@ -2400,6 +2403,86 @@ public class TestCapacityScheduler {
     assertEquals("queue B2 max vcores allocation", 12,
         ((LeafQueue) queueB2).getMaximumAllocation().getVirtualCores());
   }
+  
+  private void waitContainerAllocated(MockAM am, int mem, int nContainer,
+      int startContainerId, MockRM rm, MockNM nm) throws Exception {
+    for (int cId = startContainerId; cId < startContainerId + nContainer; cId++) {
+      am.allocate("*", mem, 1, new ArrayList<ContainerId>());
+      ContainerId containerId =
+          ContainerId.newContainerId(am.getApplicationAttemptId(), cId);
+      Assert.assertTrue(rm.waitForState(nm, containerId,
+          RMContainerState.ALLOCATED, 10 * 1000));
+    }
+  }
+
+  @Test
+  public void testHierarchyQueuesCurrentLimits() throws Exception {
+    /*
+     * Queue tree:
+     *          Root
+     *        /     \
+     *       A       B
+     *      / \    / | \
+     *     A1 A2  B1 B2 B3
+     */
+    YarnConfiguration conf =
+        new YarnConfiguration(
+            setupQueueConfiguration(new CapacitySchedulerConfiguration()));
+    conf.setBoolean(CapacitySchedulerConfiguration.ENABLE_USER_METRICS, true);
+
+    MemoryRMStateStore memStore = new MemoryRMStateStore();
+    memStore.init(conf);
+    MockRM rm1 = new MockRM(conf, memStore);
+    rm1.start();
+    MockNM nm1 =
+        new MockNM("127.0.0.1:1234", 100 * GB, rm1.getResourceTrackerService());
+    nm1.registerNode();
+    
+    RMApp app1 = rm1.submitApp(1 * GB, "app", "user", null, "b1");
+    MockAM am1 = MockRM.launchAndRegisterAM(app1, rm1, nm1);
+    
+    waitContainerAllocated(am1, 1 * GB, 1, 2, rm1, nm1);
+
+    // Maximum resource of b1 is 100 * 0.895 * 0.792 = 71 GB
+    // 2 GB is used by the AM, so 71 - 2 = 69 GB remain.
+    Assert.assertEquals(69 * GB,
+        am1.doHeartbeat().getAvailableResources().getMemory());
+    
+    RMApp app2 = rm1.submitApp(1 * GB, "app", "user", null, "b2");
+    MockAM am2 = MockRM.launchAndRegisterAM(app2, rm1, nm1);
+    
+    // Allocate 5 containers, each one is 8 GB in am2 (40 GB in total)
+    waitContainerAllocated(am2, 8 * GB, 5, 2, rm1, nm1);
+    
+    // Allocate one more container with 1 GB resource in b1
+    waitContainerAllocated(am1, 1 * GB, 1, 3, rm1, nm1);
+    
+    // Total is 100 GB, 
+    // B2 uses 41 GB (5 * 8GB containers and 1 AM container)
+    // B1 uses 3 GB (2 * 1GB containers and 1 AM container)
+    // Available is 100 - 41 - 3 = 56 GB
+    Assert.assertEquals(56 * GB,
+        am1.doHeartbeat().getAvailableResources().getMemory());
+    
+    // Now we submit app3 to a1 (in higher level hierarchy), to see if headroom
+    // of app1 (in queue b1) updated correctly
+    RMApp app3 = rm1.submitApp(1 * GB, "app", "user", null, "a1");
+    MockAM am3 = MockRM.launchAndRegisterAM(app3, rm1, nm1);
+    
+    // Allocate 3 containers, each one is 8 GB in am3 (24 GB in total)
+    waitContainerAllocated(am3, 8 * GB, 3, 2, rm1, nm1);
+    
+    // Allocate one more 1 GB container in b1 (container id 4)
+    waitContainerAllocated(am1, 1 * GB, 1, 4, rm1, nm1);
+    
+    // Total is 100 GB, 
+    // B2 uses 41 GB (5 * 8GB containers and 1 AM container)
+    // B1 uses 4 GB (3 * 1GB containers and 1 AM container)
+    // A1 uses 25 GB (3 * 8GB containers and 1 AM container)
+    // Available is 100 - 41 - 4 - 25 = 30 GB
+    Assert.assertEquals(30 * GB,
+        am1.doHeartbeat().getAvailableResources().getMemory());
+  }
 
   private void setMaxAllocMb(Configuration conf, int maxAllocMb) {
     conf.setInt(YarnConfiguration.RM_SCHEDULER_MAXIMUM_ALLOCATION_MB,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
index af58a43..7edb17d 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestChildQueueOrder.java
@@ -50,6 +50,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEventType;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
 import org.apache.hadoop.yarn.server.utils.BuilderUtils;
@@ -143,7 +144,9 @@ public class TestChildQueueOrder {
         // Next call - nothing
         if (allocation > 0) {
           doReturn(new CSAssignment(Resources.none(), type)).
-          when(queue).assignContainers(eq(clusterResource), eq(node), anyBoolean());
+          when(queue)
+              .assignContainers(eq(clusterResource), eq(node), anyBoolean(),
+                  any(ResourceLimits.class));
 
           // Mock the node's resource availability
           Resource available = node.getAvailableResource();
@@ -154,7 +157,8 @@ public class TestChildQueueOrder {
         return new CSAssignment(allocatedResource, type);
       }
     }).
-    when(queue).assignContainers(eq(clusterResource), eq(node), anyBoolean());
+    when(queue).assignContainers(eq(clusterResource), eq(node), anyBoolean(), 
+        any(ResourceLimits.class));
     doNothing().when(node).releaseContainer(any(Container.class));
   }
 
@@ -270,14 +274,16 @@ public class TestChildQueueOrder {
     stubQueueAllocation(b, clusterResource, node_0, 0*GB);
     stubQueueAllocation(c, clusterResource, node_0, 0*GB);
     stubQueueAllocation(d, clusterResource, node_0, 0*GB);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+        clusterResource));
     for(int i=0; i < 2; i++)
     {
       stubQueueAllocation(a, clusterResource, node_0, 0*GB);
       stubQueueAllocation(b, clusterResource, node_0, 1*GB);
       stubQueueAllocation(c, clusterResource, node_0, 0*GB);
       stubQueueAllocation(d, clusterResource, node_0, 0*GB);
-      root.assignContainers(clusterResource, node_0, false);
+      root.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+          clusterResource));
     } 
     for(int i=0; i < 3; i++)
     {
@@ -285,7 +291,8 @@ public class TestChildQueueOrder {
       stubQueueAllocation(b, clusterResource, node_0, 0*GB);
       stubQueueAllocation(c, clusterResource, node_0, 1*GB);
       stubQueueAllocation(d, clusterResource, node_0, 0*GB);
-      root.assignContainers(clusterResource, node_0, false);
+      root.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+          clusterResource));
     }  
     for(int i=0; i < 4; i++)
     {
@@ -293,7 +300,8 @@ public class TestChildQueueOrder {
       stubQueueAllocation(b, clusterResource, node_0, 0*GB);
       stubQueueAllocation(c, clusterResource, node_0, 0*GB);
       stubQueueAllocation(d, clusterResource, node_0, 1*GB);
-      root.assignContainers(clusterResource, node_0, false);
+      root.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+          clusterResource));
     }    
     verifyQueueMetrics(a, 1*GB, clusterResource);
     verifyQueueMetrics(b, 2*GB, clusterResource);
@@ -326,7 +334,8 @@ public class TestChildQueueOrder {
       stubQueueAllocation(b, clusterResource, node_0, 0*GB);
       stubQueueAllocation(c, clusterResource, node_0, 0*GB);
       stubQueueAllocation(d, clusterResource, node_0, 0*GB);
-      root.assignContainers(clusterResource, node_0, false);
+      root.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+          clusterResource));
     }
     verifyQueueMetrics(a, 3*GB, clusterResource);
     verifyQueueMetrics(b, 2*GB, clusterResource);
@@ -353,7 +362,8 @@ public class TestChildQueueOrder {
     stubQueueAllocation(b, clusterResource, node_0, 1*GB);
     stubQueueAllocation(c, clusterResource, node_0, 0*GB);
     stubQueueAllocation(d, clusterResource, node_0, 0*GB);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+        clusterResource));
     verifyQueueMetrics(a, 2*GB, clusterResource);
     verifyQueueMetrics(b, 3*GB, clusterResource);
     verifyQueueMetrics(c, 3*GB, clusterResource);
@@ -379,7 +389,8 @@ public class TestChildQueueOrder {
     stubQueueAllocation(b, clusterResource, node_0, 0*GB);
     stubQueueAllocation(c, clusterResource, node_0, 0*GB);
     stubQueueAllocation(d, clusterResource, node_0, 0*GB);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+        clusterResource));
     verifyQueueMetrics(a, 3*GB, clusterResource);
     verifyQueueMetrics(b, 2*GB, clusterResource);
     verifyQueueMetrics(c, 3*GB, clusterResource);
@@ -393,12 +404,13 @@ public class TestChildQueueOrder {
     stubQueueAllocation(b, clusterResource, node_0, 1*GB);
     stubQueueAllocation(c, clusterResource, node_0, 0*GB);
     stubQueueAllocation(d, clusterResource, node_0, 1*GB);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, new ResourceLimits(
+        clusterResource));
     InOrder allocationOrder = inOrder(d,b);
     allocationOrder.verify(d).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), any(ResourceLimits.class));
     allocationOrder.verify(b).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), any(ResourceLimits.class));
     verifyQueueMetrics(a, 3*GB, clusterResource);
     verifyQueueMetrics(b, 2*GB, clusterResource);
     verifyQueueMetrics(c, 3*GB, clusterResource);


[04/43] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
new file mode 100644
index 0000000..5e4df9f
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WritingYarnApplications.md
@@ -0,0 +1,591 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop: Writing YARN Applications
+=================================
+
+* [Purpose](#Purpose)
+* [Concepts and Flow](#Concepts_and_Flow)
+* [Interfaces](#Interfaces)
+* [Writing a Simple Yarn Application](#Writing_a_Simple_Yarn_Application)
+    * [Writing a simple Client](#Writing_a_simple_Client)
+    * [Writing an ApplicationMaster (AM)](#Writing_an_ApplicationMaster_AM)
+* [FAQ](#FAQ)
+    * [How can I distribute my application's jars to all of the nodes in the YARN cluster that need it?](#How_can_I_distribute_my_applications_jars_to_all_of_the_nodes_in_the_YARN_cluster_that_need_it)
+    * [How do I get the ApplicationMaster's ApplicationAttemptId?](#How_do_I_get_the_ApplicationMasters_ApplicationAttemptId)
+    * [Why my container is killed by the NodeManager?](#Why_my_container_is_killed_by_the_NodeManager)
+    * [How do I include native libraries?](#How_do_I_include_native_libraries)
+* [Useful Links](#Useful_Links)
+* [Sample Code](#Sample_Code)
+
+Purpose
+-------
+
+This document describes, at a high level, how to implement new applications for YARN.
+
+Concepts and Flow
+-----------------
+
+The general concept is that an *application submission client* submits an *application* to the YARN *ResourceManager* (RM). This can be done through setting up a `YarnClient` object. After `YarnClient` is started, the client can then set up application context, prepare the very first container of the application that contains the *ApplicationMaster* (AM), and then submit the application. You need to provide information such as the details about the local files/jars that need to be available for your application to run, the actual command that needs to be executed (with the necessary command line arguments), any OS environment settings (optional), etc. Effectively, you need to describe the Unix process(es) that needs to be launched for your ApplicationMaster.
+
+The YARN ResourceManager will then launch the ApplicationMaster (as specified) on an allocated container. The ApplicationMaster communicates with the YARN cluster, and handles application execution. It performs operations in an asynchronous fashion. During application launch time, the main tasks of the ApplicationMaster are: a) communicating with the ResourceManager to negotiate and allocate resources for future containers, and b) after container allocation, communicating with YARN *NodeManager*s (NMs) to launch application containers on them. Task a) can be performed asynchronously through an `AMRMClientAsync` object, with event handling methods specified in an `AMRMClientAsync.CallbackHandler` type of event handler. The event handler needs to be set to the client explicitly. Task b) can be performed by launching a runnable object that launches containers as they are allocated. As part of launching a container, the AM has to specify the `ContainerLaunchContext` that has the launch information such as command line specification, environment, etc.
+
+During the execution of an application, the ApplicationMaster communicates with NodeManagers through an `NMClientAsync` object. All container events are handled by the `NMClientAsync.CallbackHandler` associated with the `NMClientAsync`. A typical callback handler handles container start, stop, status update and error. The ApplicationMaster also reports execution progress to the ResourceManager by implementing the `getProgress()` method of `AMRMClientAsync.CallbackHandler`.
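+
+As a minimal sketch of that callback surface (the class name is hypothetical, and the comments merely indicate what a real handler would do), an `NMClientAsync.CallbackHandler` implementation looks like this:
+
+```java
+class SketchNMCallbackHandler implements NMClientAsync.CallbackHandler {
+  @Override
+  public void onContainerStarted(ContainerId containerId,
+      Map<String, ByteBuffer> allServiceResponse) {
+    // the container launch succeeded; track it if status polling is needed
+  }
+  @Override
+  public void onContainerStatusReceived(ContainerId containerId,
+      ContainerStatus containerStatus) {
+    // status for a running container, received after a status query
+  }
+  @Override
+  public void onContainerStopped(ContainerId containerId) {
+    // the container finished or was stopped; release any bookkeeping
+  }
+  @Override
+  public void onStartContainerError(ContainerId containerId, Throwable t) {
+    // the launch failed; typically counted as a failed container
+  }
+  @Override
+  public void onGetContainerStatusError(ContainerId containerId, Throwable t) {
+    // the status query failed
+  }
+  @Override
+  public void onStopContainerError(ContainerId containerId, Throwable t) {
+    // the stop request failed
+  }
+}
+```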
+
+Other than asynchronous clients, there are synchronous versions for certain workflows (`AMRMClient` and `NMClient`). The asynchronous clients are recommended because of (subjectively) simpler usages, and this article will mainly cover the asynchronous clients. Please refer to `AMRMClient` and `NMClient` for more information on synchronous clients.
+
+Interfaces
+----------
+
+Following are the important interfaces:
+
+* **Client**\<-\->**ResourceManager** 
+    
+    By using `YarnClient` objects.
+
+* **ApplicationMaster**\<-\->**ResourceManager**
+
+    By using `AMRMClientAsync` objects, handling events asynchronously by `AMRMClientAsync.CallbackHandler`
+
+* **ApplicationMaster**\<-\->**NodeManager**
+
+    Launch containers. Communicate with NodeManagers by using `NMClientAsync` objects, handling container events by `NMClientAsync.CallbackHandler`
+
+**Note**
+
+* The three main protocols for YARN applications (ApplicationClientProtocol, ApplicationMasterProtocol and ContainerManagementProtocol) are still preserved. The three clients wrap these three protocols to provide a simpler programming model for YARN applications.
+
+* Under very rare circumstances, a programmer may want to use the three protocols directly to implement an application. However, note that *such behaviors are no longer encouraged for general use cases*.
+
+Writing a Simple Yarn Application
+---------------------------------
+
+### Writing a simple Client
+
+* The first step that a client needs to do is to initialize and start a YarnClient.
+
+          YarnClient yarnClient = YarnClient.createYarnClient();
+          yarnClient.init(conf);
+          yarnClient.start();
+
+* Once a client is set up, the client needs to create an application, and get its application id.
+
+          YarnClientApplication app = yarnClient.createApplication();
+          GetNewApplicationResponse appResponse = app.getNewApplicationResponse();
+
+* The response from the `YarnClientApplication` for a new application also contains information about the cluster, such as the minimum/maximum resource capabilities of the cluster. This is required to ensure that you can correctly set the specifications of the container in which the ApplicationMaster will be launched, as the sketch below illustrates. Please refer to `GetNewApplicationResponse` for more details.
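+
+The following is a minimal sketch of such a check (`amMemory` and `amVCores` are assumed to be the client's desired AM resources; they are not defined in this excerpt):
+
+```java
+// Clamp the requested AM resources to the maximum the cluster reports.
+Resource maxCapability = appResponse.getMaximumResourceCapability();
+if (amMemory > maxCapability.getMemory()) {
+  amMemory = maxCapability.getMemory();
+}
+if (amVCores > maxCapability.getVirtualCores()) {
+  amVCores = maxCapability.getVirtualCores();
+}
+```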
+
+* The main crux of a client is to setup the `ApplicationSubmissionContext` which defines all the information needed by the RM to launch the AM. A client needs to set the following into the context:
+
+  * Application info: id, name
+
+  * Queue, priority info: Queue to which the application will be submitted, the priority to be assigned for the application.
+
+  * User: The user submitting the application
+
+  * `ContainerLaunchContext`: The information defining the container in which the AM will be launched and run. The `ContainerLaunchContext`, as mentioned previously, defines all the required information needed to run the application such as the local **R**esources (binaries, jars, files etc.), **E**nvironment settings (CLASSPATH etc.), the **C**ommand to be executed and security **T**okens (*RECT*).
+
+```java
+// set the application submission context
+ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
+ApplicationId appId = appContext.getApplicationId();
+
+appContext.setKeepContainersAcrossApplicationAttempts(keepContainers);
+appContext.setApplicationName(appName);
+
+// set local resources for the application master
+// local files or archives as needed
+// In this scenario, the jar file for the application master is part of the local resources
+Map<String, LocalResource> localResources = new HashMap<String, LocalResource>();
+
+LOG.info("Copy App Master jar from local filesystem and add to local environment");
+// Copy the application master jar to the filesystem
+// Create a local resource to point to the destination jar path
+FileSystem fs = FileSystem.get(conf);
+addToLocalResources(fs, appMasterJar, appMasterJarPath, appId.toString(),
+    localResources, null);
+
+// Set the log4j properties if needed
+if (!log4jPropFile.isEmpty()) {
+  addToLocalResources(fs, log4jPropFile, log4jPath, appId.toString(),
+      localResources, null);
+}
+
+// The shell script has to be made available on the final container(s)
+// where it will be executed.
+// To do this, we need to first copy into the filesystem that is visible
+// to the yarn framework.
+// We do not need to set this as a local resource for the application
+// master as the application master does not need it.
+String hdfsShellScriptLocation = "";
+long hdfsShellScriptLen = 0;
+long hdfsShellScriptTimestamp = 0;
+if (!shellScriptPath.isEmpty()) {
+  Path shellSrc = new Path(shellScriptPath);
+  String shellPathSuffix =
+      appName + "/" + appId.toString() + "/" + SCRIPT_PATH;
+  Path shellDst =
+      new Path(fs.getHomeDirectory(), shellPathSuffix);
+  fs.copyFromLocalFile(false, true, shellSrc, shellDst);
+  hdfsShellScriptLocation = shellDst.toUri().toString();
+  FileStatus shellFileStatus = fs.getFileStatus(shellDst);
+  hdfsShellScriptLen = shellFileStatus.getLen();
+  hdfsShellScriptTimestamp = shellFileStatus.getModificationTime();
+}
+
+if (!shellCommand.isEmpty()) {
+  addToLocalResources(fs, null, shellCommandPath, appId.toString(),
+      localResources, shellCommand);
+}
+
+if (shellArgs.length > 0) {
+  addToLocalResources(fs, null, shellArgsPath, appId.toString(),
+      localResources, StringUtils.join(shellArgs, " "));
+}
+
+// Set the env variables to be setup in the env where the application master will be run
+LOG.info("Set the environment for the application master");
+Map<String, String> env = new HashMap<String, String>();
+
+// put location of shell script into env
+// using the env info, the application master will create the correct local resource for the
+// eventual containers that will be launched to execute the shell scripts
+env.put(DSConstants.DISTRIBUTEDSHELLSCRIPTLOCATION, hdfsShellScriptLocation);
+env.put(DSConstants.DISTRIBUTEDSHELLSCRIPTTIMESTAMP, Long.toString(hdfsShellScriptTimestamp));
+env.put(DSConstants.DISTRIBUTEDSHELLSCRIPTLEN, Long.toString(hdfsShellScriptLen));
+
+// Add AppMaster.jar location to classpath
+// At some point we should not be required to add
+// the hadoop specific classpaths to the env.
+// It should be provided out of the box.
+// For now setting all required classpaths including
+// the classpath to "." for the application jar
+StringBuilder classPathEnv = new StringBuilder(Environment.CLASSPATH.$$())
+  .append(ApplicationConstants.CLASS_PATH_SEPARATOR).append("./*");
+for (String c : conf.getStrings(
+    YarnConfiguration.YARN_APPLICATION_CLASSPATH,
+    YarnConfiguration.DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH)) {
+  classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR);
+  classPathEnv.append(c.trim());
+}
+classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR).append(
+  "./log4j.properties");
+
+// Set the necessary command to execute the application master
+Vector<CharSequence> vargs = new Vector<CharSequence>(30);
+
+// Set java executable command
+LOG.info("Setting up app master command");
+vargs.add(Environment.JAVA_HOME.$$() + "/bin/java");
+// Set Xmx based on am memory size
+vargs.add("-Xmx" + amMemory + "m");
+// Set class name
+vargs.add(appMasterMainClass);
+// Set params for Application Master
+vargs.add("--container_memory " + String.valueOf(containerMemory));
+vargs.add("--container_vcores " + String.valueOf(containerVirtualCores));
+vargs.add("--num_containers " + String.valueOf(numContainers));
+vargs.add("--priority " + String.valueOf(shellCmdPriority));
+
+for (Map.Entry<String, String> entry : shellEnv.entrySet()) {
+  vargs.add("--shell_env " + entry.getKey() + "=" + entry.getValue());
+}
+if (debugFlag) {
+  vargs.add("--debug");
+}
+
+vargs.add("1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/AppMaster.stdout");
+vargs.add("2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/AppMaster.stderr");
+
+// Get final command
+StringBuilder command = new StringBuilder();
+for (CharSequence str : vargs) {
+  command.append(str).append(" ");
+}
+
+LOG.info("Completed setting up app master command " + command.toString());
+List<String> commands = new ArrayList<String>();
+commands.add(command.toString());
+
+// Set up the container launch context for the application master
+ContainerLaunchContext amContainer = ContainerLaunchContext.newInstance(
+  localResources, env, commands, null, null, null);
+
+// Set up resource type requirements
+// For now, both memory and vcores are supported, so we set memory and
+// vcores requirements
+Resource capability = Resource.newInstance(amMemory, amVCores);
+appContext.setResource(capability);
+
+// Service data is a binary blob that can be passed to the application
+// Not needed in this scenario
+// amContainer.setServiceData(serviceData);
+
+// Setup security tokens
+if (UserGroupInformation.isSecurityEnabled()) {
+  // Note: Credentials class is marked as LimitedPrivate for HDFS and MapReduce
+  Credentials credentials = new Credentials();
+  String tokenRenewer = conf.get(YarnConfiguration.RM_PRINCIPAL);
+  if (tokenRenewer == null || tokenRenewer.length() == 0) {
+    throw new IOException(
+      "Can't get Master Kerberos principal for the RM to use as renewer");
+  }
+
+  // For now, only getting tokens for the default file-system.
+  final Token<?> tokens[] =
+      fs.addDelegationTokens(tokenRenewer, credentials);
+  if (tokens != null) {
+    for (Token<?> token : tokens) {
+      LOG.info("Got dt for " + fs.getUri() + "; " + token);
+    }
+  }
+  DataOutputBuffer dob = new DataOutputBuffer();
+  credentials.writeTokenStorageToStream(dob);
+  ByteBuffer fsTokens = ByteBuffer.wrap(dob.getData(), 0, dob.getLength());
+  amContainer.setTokens(fsTokens);
+}
+
+appContext.setAMContainerSpec(amContainer);
+```
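+
+The `addToLocalResources` helper called above is not shown in this excerpt. Below is a simplified sketch (assuming the source file exists locally, and assuming an `appName` field on the client; the real helper also supports writing inline string content to HDFS, which is omitted here):
+
+```java
+// Copy a local file into HDFS and register it as an application-visible
+// LocalResource (error handling omitted for brevity).
+private void addToLocalResources(FileSystem fs, String fileSrcPath,
+    String fileDstPath, String appId,
+    Map<String, LocalResource> localResources) throws IOException {
+  Path dst = new Path(fs.getHomeDirectory(),
+      appName + "/" + appId + "/" + fileDstPath);
+  fs.copyFromLocalFile(new Path(fileSrcPath), dst);
+  FileStatus dstStatus = fs.getFileStatus(dst);
+  LocalResource resource = LocalResource.newInstance(
+      ConverterUtils.getYarnUrlFromURI(dst.toUri()),
+      LocalResourceType.FILE, LocalResourceVisibility.APPLICATION,
+      dstStatus.getLen(), dstStatus.getModificationTime());
+  localResources.put(fileDstPath, resource);
+}
+```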
+
+* After the setup process is complete, the client is ready to submit the application with specified priority and queue.
+
+```java
+// Set the priority for the application master
+Priority pri = Priority.newInstance(amPriority);
+appContext.setPriority(pri);
+
+// Set the queue to which this application is to be submitted in the RM
+appContext.setQueue(amQueue);
+
+// Submit the application to the applications manager
+// SubmitApplicationResponse submitResp = applicationsManager.submitApplication(appRequest);
+
+yarnClient.submitApplication(appContext);
+```
+
+* At this point, the RM will have accepted the application and in the background, will go through the process of allocating a container with the required specifications and then eventually setting up and launching the AM on the allocated container.
+
+* There are multiple ways a client can track progress of the actual task.
+   
+> * It can communicate with the RM and request a report of the application via the `getApplicationReport()` method of `YarnClient`.
+
+```java
+// Get application report for the appId we are interested in
+ApplicationReport report = yarnClient.getApplicationReport(appId);
+```
+  
+> The ApplicationReport received from the RM consists of the following:
+
+>> * *General application information*: Application id, queue to which the application was submitted, user who submitted the application and the start time for the application.
+
+>> * *ApplicationMaster details*: the host on which the AM is running, the rpc port (if any) on which it is listening for requests from clients and a token that the client needs to communicate with the AM.
+
+>> * *Application tracking information*: If the application supports some form of progress tracking, it can set a tracking url which is available via `ApplicationReport`'s `getTrackingUrl()` method that a client can look at to monitor progress.
+
+>> * *Application status*: The state of the application as seen by the ResourceManager is available via `ApplicationReport#getYarnApplicationState`. If the `YarnApplicationState` is set to `FINISHED`, the client should refer to `ApplicationReport#getFinalApplicationStatus` to check for the actual success/failure of the application task itself. In case of failures, `ApplicationReport#getDiagnostics` may be useful to shed some more light on the failure.
+
+> * If the ApplicationMaster supports it, a client can directly query the AM itself for progress updates via the host:rpcport information obtained from the application report. It can also use the tracking url obtained from the report if available.
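+
+Putting these pieces together, a simple client-side monitoring loop might look like the following sketch. The one-second polling interval and the boolean result are illustrative choices, and the loop is assumed to live inside a helper such as `boolean monitorApplication(ApplicationId appId)` (the helper name is hypothetical):
+
+```java
+// Poll the RM until the application reaches a terminal state.
+while (true) {
+  Thread.sleep(1000);
+  ApplicationReport report = yarnClient.getApplicationReport(appId);
+  YarnApplicationState state = report.getYarnApplicationState();
+  if (state == YarnApplicationState.FINISHED) {
+    return report.getFinalApplicationStatus() == FinalApplicationStatus.SUCCEEDED;
+  } else if (state == YarnApplicationState.KILLED
+      || state == YarnApplicationState.FAILED) {
+    return false;
+  }
+}
+```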
+
+* In certain situations, for example if the application is taking too long, the client may wish to kill the application. `YarnClient` supports the `killApplication` call, which allows a client to send a kill signal to the AM via the ResourceManager. An ApplicationMaster, if so designed, may also support an abort call via its rpc layer that a client may be able to leverage.
+
+          yarnClient.killApplication(appId);
+
+### Writing an ApplicationMaster (AM)
+
+* The AM is the actual owner of the job. It will be launched by the RM and via the client will be provided all the necessary information and resources about the job that it has been tasked with to oversee and complete.
+
+* As the AM is launched within a container that may (and likely will) be sharing a physical host with other containers, given the multi-tenant nature of the cluster, amongst other issues, it cannot make any assumptions about things like pre-configured ports that it can listen on.
+
+* When the AM starts up, several parameters are made available to it via the environment. These include the `ContainerId` for the AM container, the application submission time and details about the NM (NodeManager) host running the ApplicationMaster. Refer to `ApplicationConstants` for parameter names.
+
+* All interactions with the RM require an `ApplicationAttemptId` (there can be multiple attempts per application in case of failures). The `ApplicationAttemptId` can be obtained from the AM's container id. There are helper APIs to convert the value obtained from the environment into objects.
+
+```java
+Map<String, String> envs = System.getenv();
+String containerIdString =
+    envs.get(ApplicationConstants.AM_CONTAINER_ID_ENV);
+if (containerIdString == null) {
+  // container id should always be set in the env by the framework
+  throw new IllegalArgumentException(
+      "ContainerId not set in the environment");
+}
+ContainerId containerId = ConverterUtils.toContainerId(containerIdString);
+ApplicationAttemptId appAttemptID = containerId.getApplicationAttemptId();
+```
+
+* After an AM has initialized itself completely, we can start the two clients: one to the ResourceManager, and one to the NodeManagers. We set them up with our customized event handlers; these event handlers are discussed in detail later in this article.
+
+```java
+  AMRMClientAsync.CallbackHandler allocListener = new RMCallbackHandler();
+  amRMClient = AMRMClientAsync.createAMRMClientAsync(1000, allocListener);
+  amRMClient.init(conf);
+  amRMClient.start();
+
+  containerListener = createNMCallbackHandler();
+  nmClientAsync = new NMClientAsyncImpl(containerListener);
+  nmClientAsync.init(conf);
+  nmClientAsync.start();
+```
+
+* The AM has to emit heartbeats to the RM to keep it informed that the AM is alive and still running. The timeout expiry interval at the RM is defined by a config setting accessible via `YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS`, with the default defined by `YarnConfiguration.DEFAULT_RM_AM_EXPIRY_INTERVAL_MS`. The ApplicationMaster needs to register itself with the ResourceManager to start heartbeating.
+
+```java
+// Register self with ResourceManager
+// This will start heartbeating to the RM
+appMasterHostname = NetUtils.getHostname();
+RegisterApplicationMasterResponse response = amRMClient
+    .registerApplicationMaster(appMasterHostname, appMasterRpcPort,
+        appMasterTrackingUrl);
+```
+
+* The registration response includes the maximum resource capability of the cluster. You may want to use this to validate the application's resource requests.
+
+```java
+// Dump out information about cluster capability as seen by the
+// resource manager
+int maxMem = response.getMaximumResourceCapability().getMemory();
+LOG.info("Max mem capabililty of resources in this cluster " + maxMem);
+
+int maxVCores = response.getMaximumResourceCapability().getVirtualCores();
+LOG.info("Max vcores capabililty of resources in this cluster " + maxVCores);
+
+// A resource ask cannot exceed the max.
+if (containerMemory > maxMem) {
+  LOG.info("Container memory specified above max threshold of cluster."
+      + " Using max value." + ", specified=" + containerMemory + ", max="
+      + maxMem);
+  containerMemory = maxMem;
+}
+
+if (containerVirtualCores > maxVCores) {
+  LOG.info("Container virtual cores specified above max threshold of  cluster."
+    + " Using max value." + ", specified=" + containerVirtualCores + ", max="
+    + maxVCores);
+  containerVirtualCores = maxVCores;
+}
+List<Container> previousAMRunningContainers =
+    response.getContainersFromPreviousAttempts();
+LOG.info("Received " + previousAMRunningContainers.size()
+        + " previous AM's running containers on AM registration.");
+```
+
+* Based on the task requirements, the AM can ask for a set of containers to run its tasks on. We can now calculate how many containers we need, and request that many containers.
+
+```java
+List<Container> previousAMRunningContainers =
+    response.getContainersFromPreviousAttempts();
+LOG.info("Received " + previousAMRunningContainers.size()
+    + " previous AM's running containers on AM registration.");
+
+int numTotalContainersToRequest =
+    numTotalContainers - previousAMRunningContainers.size();
+// Setup ask for containers from RM
+// Send request for containers to RM
+// Until we get our fully allocated quota, we keep on polling RM for
+// containers
+// Keep looping until all the containers are launched and shell script
+// executed on them ( regardless of success/failure).
+for (int i = 0; i < numTotalContainersToRequest; ++i) {
+  ContainerRequest containerAsk = setupContainerAskForRM();
+  amRMClient.addContainerRequest(containerAsk);
+}
+```
+
+* In `setupContainerAskForRM()`, the following two things need to be set up:
+
+> * Resource capability: Currently, YARN supports memory-based resource requirements, so the request should define how much memory is needed. The value is defined in MB and has to be less than the max capability of the cluster and an exact multiple of the min capability. Memory resources correspond to physical memory limits imposed on the task containers. YARN also supports computation-based resources (vCores), as shown in the code.
+
+> * Priority: When asking for sets of containers, an AM may define different priorities to each set. For example, the Map-Reduce AM may assign a higher priority to containers needed for the Map tasks and a lower priority for the Reduce tasks' containers.
+
+```java
+private ContainerRequest setupContainerAskForRM() {
+  // setup requirements for hosts
+  // using * as any host will do for the distributed shell app
+  // set the priority for the request
+  Priority pri = Priority.newInstance(requestPriority);
+
+  // Set up resource type requirements
+  // For now, memory and CPU are supported so we set memory and cpu requirements
+  Resource capability = Resource.newInstance(containerMemory,
+    containerVirtualCores);
+
+  ContainerRequest request = new ContainerRequest(capability, null, null,
+      pri);
+  LOG.info("Requested container ask: " + request.toString());
+  return request;
+}
+```
+
+* After the container allocation requests have been sent by the ApplicationMaster, containers will be launched asynchronously by the event handler of the `AMRMClientAsync` client. The handler should implement the `AMRMClientAsync.CallbackHandler` interface.
+
+> * When there are containers allocated, the handler sets up a thread that runs the code to launch containers. Here we use the name `LaunchContainerRunnable` to demonstrate. We will talk about the `LaunchContainerRunnable` class in the following part of this article.
+
+```java
+@Override
+public void onContainersAllocated(List<Container> allocatedContainers) {
+  LOG.info("Got response from RM for container ask, allocatedCnt="
+      + allocatedContainers.size());
+  numAllocatedContainers.addAndGet(allocatedContainers.size());
+  for (Container allocatedContainer : allocatedContainers) {
+    LaunchContainerRunnable runnableLaunchContainer =
+        new LaunchContainerRunnable(allocatedContainer, containerListener);
+    Thread launchThread = new Thread(runnableLaunchContainer);
+
+    // launch and start the container on a separate thread to keep
+    // the main thread unblocked
+    // as all containers may not be allocated at one go.
+    launchThreads.add(launchThread);
+    launchThread.start();
+  }
+}
+```
+
+> * On each heartbeat, the event handler reports the progress of the application.
+
+```java
+@Override
+public float getProgress() {
+  // set progress to deliver to RM on next heartbeat
+  float progress = (float) numCompletedContainers.get()
+      / numTotalContainers;
+  return progress;
+}
+```
+
+* The container launch thread actually launches the containers on NMs. After a container has been allocated to the AM, it needs to follow a process similar to the one the client followed when setting up the `ContainerLaunchContext` for the eventual task that is going to run on the allocated container. Once the `ContainerLaunchContext` is defined, the AM can start it through the `NMClientAsync`.
+
+```java
+// Set the necessary command to execute on the allocated container
+Vector<CharSequence> vargs = new Vector<CharSequence>(5);
+
+// Set executable command
+vargs.add(shellCommand);
+// Set shell script path
+if (!scriptPath.isEmpty()) {
+  vargs.add(Shell.WINDOWS ? ExecBatScripStringtPath
+    : ExecShellStringPath);
+}
+
+// Set args for the shell command if any
+vargs.add(shellArgs);
+// Add log redirect params
+vargs.add("1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout");
+vargs.add("2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr");
+
+// Get final command
+StringBuilder command = new StringBuilder();
+for (CharSequence str : vargs) {
+  command.append(str).append(" ");
+}
+
+List<String> commands = new ArrayList<String>();
+commands.add(command.toString());
+
+// Set up ContainerLaunchContext, setting local resource, environment,
+// command and token for constructor.
+
+// Note for tokens: Set up tokens for the container too. Today, for normal
+// shell commands, the container in distribute-shell doesn't need any
+// tokens. We are populating them mainly for NodeManagers to be able to
+// download any files in the distributed file-system. The tokens are
+// otherwise also useful in cases when, e.g., one is running a
+// "hadoop dfs" command inside the distributed shell.
+ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
+  localResources, shellEnv, commands, null, allTokens.duplicate(), null);
+containerListener.addContainer(container.getId(), container);
+nmClientAsync.startContainerAsync(container, ctx);
+```
+
+* The `NMClientAsync` object, together with its event handler, handles container events, including container start, stop, status updates, and errors.
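+
+For illustration, a minimal handler might look like the sketch below (not taken verbatim from the distributed shell sources; the `LOG` logger is an assumption for this example):
+
+```java
+class NMCallbackHandler implements NMClientAsync.CallbackHandler {
+  @Override
+  public void onContainerStarted(ContainerId containerId,
+      Map<String, ByteBuffer> allServiceResponse) {
+    LOG.info("Container " + containerId + " started");
+  }
+
+  @Override
+  public void onContainerStatusReceived(ContainerId containerId,
+      ContainerStatus containerStatus) {
+    LOG.info("Status for container " + containerId + ": " + containerStatus);
+  }
+
+  @Override
+  public void onContainerStopped(ContainerId containerId) {
+    LOG.info("Container " + containerId + " stopped");
+  }
+
+  @Override
+  public void onStartContainerError(ContainerId containerId, Throwable t) {
+    LOG.error("Failed to start container " + containerId, t);
+  }
+
+  @Override
+  public void onGetContainerStatusError(ContainerId containerId, Throwable t) {
+    LOG.error("Failed to get status for container " + containerId, t);
+  }
+
+  @Override
+  public void onStopContainerError(ContainerId containerId, Throwable t) {
+    LOG.error("Failed to stop container " + containerId, t);
+  }
+}
+```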
+
+* After the ApplicationMaster determines the work is done, it needs to unregister itself through the AM-RM client and then stop the client.
+
+```java
+try {
+  amRMClient.unregisterApplicationMaster(appStatus, appMessage, null);
+} catch (YarnException ex) {
+  LOG.error("Failed to unregister application", ex);
+} catch (IOException e) {
+  LOG.error("Failed to unregister application", e);
+}
+
+amRMClient.stop();
+```
+
+FAQ
+---
+
+### How can I distribute my application's jars to all of the nodes in the YARN cluster that need it?
+
+You can use `LocalResource` to add resources to your application request. This will cause YARN to distribute the resource to the ApplicationMaster node. If the resource is a tgz, zip, or jar, you can have YARN unzip it. Then, all you need to do is add the unzipped folder to your classpath. For example, when creating your application request:
+
+```java
+File packageFile = new File(packagePath);
+URL packageUrl = ConverterUtils.getYarnUrlFromPath(
+    FileContext.getFileContext().makeQualified(new Path(packagePath)));
+
+LocalResource packageResource = Records.newRecord(LocalResource.class);
+packageResource.setResource(packageUrl);
+packageResource.setSize(packageFile.length());
+packageResource.setTimestamp(packageFile.lastModified());
+packageResource.setType(LocalResourceType.ARCHIVE);
+packageResource.setVisibility(LocalResourceVisibility.APPLICATION);
+
+Resource resource = Records.newRecord(Resource.class);
+resource.setMemory(memory);
+containerCtx.setResource(resource);
+containerCtx.setCommands(ImmutableList.of(
+    "java -cp './package/*' some.class.to.Run "
+    + "1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout "
+    + "2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr"));
+containerCtx.setLocalResources(
+    Collections.singletonMap("package", packageResource));
+appCtx.setApplicationId(appId);
+appCtx.setUser(user.getShortUserName());
+appCtx.setAMContainerSpec(containerCtx);
+yarnClient.submitApplication(appCtx);
+```
+
+  As you can see, the `setLocalResources` command takes a map of names to resources. The name becomes a symlink in your application's cwd, so you can just refer to the artifacts inside by using ./package/\*.
+
+  **Note**: Java's classpath (cp) argument is VERY sensitive. Make sure you get the syntax EXACTLY correct.
+
+  Once your package is distributed to your AM, you'll need to follow the same process whenever your AM starts a new container (assuming you want the resources to be sent to your container). The code for this is the same. You just need to make sure that you give your AM the package path (either HDFS, or local), so that it can send the resource URL along with the container ctx.
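+
+  For illustration, a sketch of that AM-side step might look like this; `packagePath`, `conf`, and `taskCtx` (the `ContainerLaunchContext` for the new container) are assumed to come from your AM's own state:
+
+```java
+// Assumption: packagePath refers to the package already available in HDFS
+Path pkg = new Path(packagePath);
+FileSystem fs = pkg.getFileSystem(conf);
+FileStatus pkgStatus = fs.getFileStatus(pkg);
+
+LocalResource pkgResource = Records.newRecord(LocalResource.class);
+pkgResource.setResource(
+    ConverterUtils.getYarnUrlFromPath(fs.makeQualified(pkg)));
+pkgResource.setSize(pkgStatus.getLen());
+pkgResource.setTimestamp(pkgStatus.getModificationTime());
+pkgResource.setType(LocalResourceType.ARCHIVE);
+pkgResource.setVisibility(LocalResourceVisibility.APPLICATION);
+
+// Same "package" symlink name as in the submission example above
+taskCtx.setLocalResources(Collections.singletonMap("package", pkgResource));
+```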
+
+### How do I get the ApplicationMaster's `ApplicationAttemptId`?
+
+The `ApplicationAttemptId` will be passed to the AM via the environment and the value from the environment can be converted into an `ApplicationAttemptId` object via the ConverterUtils helper function.
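+
+For example, mirroring the AM snippet earlier in this document:
+
+```java
+String containerIdString =
+    System.getenv().get(ApplicationConstants.AM_CONTAINER_ID_ENV);
+ApplicationAttemptId appAttemptId =
+    ConverterUtils.toContainerId(containerIdString).getApplicationAttemptId();
+```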
+
+### Why is my container killed by the NodeManager?
+
+This is likely due to high memory usage exceeding your requested container memory size. There are a number of reasons that can cause this. First, look at the process tree that the NodeManager dumps when it kills your container. The two things you're interested in are physical memory and virtual memory. If you have exceeded the physical memory limit, your app is using too much physical memory. If you're running a Java app, you can use -hprof to look at what is taking up space in the heap. If you have exceeded the virtual memory limit, you may need to increase the value of the cluster-wide configuration variable `yarn.nodemanager.vmem-pmem-ratio`.
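+
+For example, a `yarn-site.xml` entry along the following lines raises the allowed virtual-to-physical memory ratio (the value 4 is only illustrative; the default is 2.1):
+
+```xml
+<property>
+  <name>yarn.nodemanager.vmem-pmem-ratio</name>
+  <value>4</value>
+</property>
+```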
+
+### How do I include native libraries?
+
+Setting `-Djava.library.path` on the command line while launching a container can cause native libraries used by Hadoop to not be loaded correctly and can result in errors. It is cleaner to use `LD_LIBRARY_PATH` instead.
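+
+As a minimal sketch, assuming your native libraries have been localized under a `native` symlink in the container's working directory (that name and the `ctx` variable are assumptions for this example):
+
+```java
+// Point the dynamic loader at libraries localized into the container's
+// working directory; "native" is an assumed symlink name.
+Map<String, String> env = new HashMap<String, String>();
+env.put("LD_LIBRARY_PATH",
+    ApplicationConstants.Environment.PWD.$() + "/native");
+ctx.setEnvironment(env); // ctx is your ContainerLaunchContext
+```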
+
+Useful Links
+------------
+
+* [YARN Architecture](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html)
+
+* [YARN Capacity Scheduler](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html)
+
+* [YARN Fair Scheduler](http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html)
+
+Sample Code
+-----------
+
+Yarn distributed shell: in `hadoop-yarn-applications-distributedshell` project after you set up your development environment.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md
new file mode 100644
index 0000000..f79272c
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YARN.md
@@ -0,0 +1,42 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Apache Hadoop NextGen MapReduce (YARN)
+==================
+
+MapReduce has undergone a complete overhaul in hadoop-0.23, and we now have what we call MapReduce 2.0 (MRv2) or YARN.
+
+The fundamental idea of MRv2 is to split up the two major functionalities of the JobTracker, resource management and job scheduling/monitoring, into separate daemons. The idea is to have a global ResourceManager (*RM*) and per-application ApplicationMaster (*AM*). An application is either a single job in the classical sense of Map-Reduce jobs or a DAG of jobs.
+
+The ResourceManager and per-node slave, the NodeManager (*NM*), form the data-computation framework. The ResourceManager is the ultimate authority that arbitrates resources among all the applications in the system.
+
+The per-application ApplicationMaster is, in effect, a framework specific library and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
+
+![MapReduce NextGen Architecture](./yarn_architecture.gif)
+
+The ResourceManager has two main components: Scheduler and ApplicationsManager.
+
+The Scheduler is responsible for allocating resources to the various running applications subject to familiar constraints of capacities, queues etc. The Scheduler is a pure scheduler in the sense that it performs no monitoring or tracking of status for the application. Also, it offers no guarantees about restarting failed tasks either due to application failure or hardware failures. The Scheduler performs its scheduling function based on the resource requirements of the applications; it does so based on the abstract notion of a resource *Container* which incorporates elements such as memory, cpu, disk, network etc. In the first version, only `memory` is supported.
+
+The Scheduler has a pluggable policy plug-in, which is responsible for partitioning the cluster resources among the various queues, applications etc. The current Map-Reduce schedulers such as the CapacityScheduler and the FairScheduler would be some examples of the plug-in.
+
+The CapacityScheduler supports `hierarchical queues` to allow for more predictable sharing of cluster resources.
+
+The ApplicationsManager is responsible for accepting job submissions, negotiating the first container for executing the application-specific ApplicationMaster, and providing the service for restarting the ApplicationMaster container on failure.
+
+The NodeManager is the per-machine framework agent that is responsible for containers, monitoring their resource usage (cpu, memory, disk, network) and reporting the same to the ResourceManager/Scheduler.
+
+The per-application ApplicationMaster has the responsibility of negotiating appropriate resource containers from the Scheduler, tracking their status and monitoring for progress.
+
+MRv2 maintains **API compatibility** with the previous stable release (hadoop-1.x). This means that all Map-Reduce jobs should still run unchanged on top of MRv2 with just a recompile.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
new file mode 100644
index 0000000..28bb678
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/YarnCommands.md
@@ -0,0 +1,272 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+YARN Commands
+=============
+
+* [Overview](#Overview)
+* [User Commands](#User_Commands)
+    * [application](#application)
+    * [applicationattempt](#applicationattempt)
+    * [classpath](#classpath)
+    * [container](#container)
+    * [jar](#jar)
+    * [logs](#logs)
+    * [node](#node)
+    * [queue](#queue)
+    * [version](#version)
+* [Administration Commands](#Administration_Commands)
+    * [daemonlog](#daemonlog)
+    * [nodemanager](#nodemanager)
+    * [proxyserver](#proxyserver)
+    * [resourcemanager](#resourcemanager)
+    * [rmadmin](#rmadmin)
+    * [scmadmin](#scmadmin)
+    * [sharedcachemanager](#sharedcachemanager)
+    * [timelineserver](#timelineserver)
+* [Files](#Files)
+    * [etc/hadoop/hadoop-env.sh](#etchadoophadoop-env.sh)
+    * [etc/hadoop/yarn-env.sh](#etchadoopyarn-env.sh)
+    * [etc/hadoop/hadoop-user-functions.sh](#etchadoophadoop-user-functions.sh)
+    * [~/.hadooprc](#a.hadooprc)
+
+Overview
+--------
+
+YARN commands are invoked by the bin/yarn script. Running the yarn script without any arguments prints the description for all commands.
+
+Usage: `yarn [SHELL_OPTIONS] COMMAND [GENERIC_OPTIONS] [COMMAND_OPTIONS]`
+
+YARN has an option parsing framework that supports parsing generic options as well as running classes.
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| SHELL\_OPTIONS | The common set of shell options. These are documented on the [Commands Manual](../../hadoop-project-dist/hadoop-common/CommandsManual.html#Shell_Options) page. |
+| GENERIC\_OPTIONS | The common set of options supported by multiple commands. See the Hadoop [Commands Manual](../../hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options) for more information. |
+| COMMAND COMMAND\_OPTIONS | Various commands with their options are described in the following sections. The commands have been grouped into [User Commands](#User_Commands) and [Administration Commands](#Administration_Commands). |
+
+User Commands
+-------------
+
+Commands useful for users of a Hadoop cluster.
+
+### `application`
+
+Usage: `yarn application [options] `
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -appStates States | Works with -list to filter applications based on input comma-separated list of application states. The valid application state can be one of the following:  ALL, NEW, NEW\_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED |
+| -appTypes Types | Works with -list to filter applications based on input comma-separated list of application types. |
+| -list | Lists applications from the RM. Supports optional use of -appTypes to filter applications based on application type, and -appStates to filter applications based on application state. |
+| -kill ApplicationId | Kills the application. |
+| -status ApplicationId | Prints the status of the application. |
+
+Prints application(s) report or kills an application
+
+### `applicationattempt`
+
+Usage: `yarn applicationattempt [options] `
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -help | Help |
+| -list ApplicationId | Lists application attempts from the RM |
+| -status Application Attempt Id | Prints the status of the application attempt. |
+
+Prints application attempt(s) report
+
+### `classpath`
+
+Usage: `yarn classpath`
+
+Prints the class path needed to get the Hadoop jar and the required libraries
+
+### `container`
+
+Usage: `yarn container [options] `
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -help | Help |
+| -list ApplicationId | Lists containers for the application attempt. |
+| -status ContainerId | Prints the status of the container. |
+
+Prints container(s) report
+
+### `jar`
+
+Usage: `yarn jar <jar> [mainClass] args... `
+
+Runs a jar file. Users can bundle their YARN code in a jar file and execute it using this command.
+
+### `logs`
+
+Usage: `yarn logs -applicationId <application ID> [options] `
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -applicationId \<application ID\> | Specifies an application id |
+| -appOwner AppOwner | AppOwner (assumed to be current user if not specified) |
+| -containerId ContainerId | ContainerId (must be specified if node address is specified) |
+| -help | Help |
+| -nodeAddress NodeAddress | NodeAddress in the format nodename:port (must be specified if container id is specified) |
+
+Dump the container logs
+
+### `node`
+
+Usage: `yarn node [options] `
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -all | Works with -list to list all nodes. |
+| -list | Lists all running nodes. Supports optional use of -states to filter nodes based on node state, and -all to list all nodes. |
+| -states States | Works with -list to filter nodes based on input comma-separated list of node states. |
+| -status NodeId | Prints the status report of the node. |
+
+Prints node report(s)
+
+### `queue`
+
+Usage: `yarn queue [options] `
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -help | Help |
+| -status QueueName | Prints the status of the queue. |
+
+Prints queue information
+
+### `version`
+
+Usage: `yarn version`
+
+Prints the Hadoop version.
+
+Administration Commands
+-----------------------
+
+Commands useful for administrators of a Hadoop cluster.
+
+### `daemonlog`
+
+Usage:
+
+```
+   yarn daemonlog -getlevel <host:httpport> <classname> 
+   yarn daemonlog -setlevel <host:httpport> <classname> <level>
+```
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -getlevel `<host:httpport>` `<classname>` | Prints the log level of the log identified by a qualified `<classname>`, in the daemon running at `<host:httpport>`. This command internally connects to `http://<host:httpport>/logLevel?log=<classname>` |
+| -setlevel `<host:httpport> <classname> <level>` | Sets the log level of the log identified by a qualified `<classname>` in the daemon running at `<host:httpport>`. This command internally connects to `http://<host:httpport>/logLevel?log=<classname>&level=<level>` |
+
+Get/Set the log level for a Log identified by a qualified class name in the daemon.
+
+Example: `$ bin/yarn daemonlog -setlevel 127.0.0.1:8088 org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl DEBUG`
+
+### `nodemanager`
+
+Usage: `yarn nodemanager`
+
+Start the NodeManager
+
+### `proxyserver`
+
+Usage: `yarn proxyserver`
+
+Start the web proxy server
+
+### `resourcemanager`
+
+Usage: `yarn resourcemanager [-format-state-store]`
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -format-state-store | Formats the RMStateStore. This will clear the RMStateStore and is useful if past applications are no longer needed. This should be run only when the ResourceManager is not running. |
+
+Start the ResourceManager
+
+### `rmadmin`
+
+Usage:
+
+```
+  yarn rmadmin [-refreshQueues]
+               [-refreshNodes]
+               [-refreshUserToGroupsMappings]
+               [-refreshSuperUserGroupsConfiguration]
+               [-refreshAdminAcls] 
+               [-refreshServiceAcl]
+               [-getGroups [username]]
+               [-transitionToActive [--forceactive] [--forcemanual] <serviceId>]
+               [-transitionToStandby [--forcemanual] <serviceId>]
+               [-failover [--forcefence] [--forceactive] <serviceId1> <serviceId2>]
+               [-getServiceState <serviceId>]
+               [-checkHealth <serviceId>]
+               [-help [cmd]]
+```
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -refreshQueues | Reload the queues' acls, states and scheduler specific properties. ResourceManager will reload the mapred-queues configuration file. |
+| -refreshNodes | Refresh the hosts information at the ResourceManager. |
+| -refreshUserToGroupsMappings | Refresh user-to-groups mappings. |
+| -refreshSuperUserGroupsConfiguration | Refresh superuser proxy groups mappings. |
+| -refreshAdminAcls | Refresh acls for administration of ResourceManager |
+| -refreshServiceAcl | Reload the service-level authorization policy file. ResourceManager will reload the authorization policy file. |
+| -getGroups [username] | Get groups the specified user belongs to. |
+| -transitionToActive [--forceactive] [--forcemanual] \<serviceId\> | Transitions the service into Active state. If the --forceactive option is used, the target is made active without checking that there is no other active node. This command cannot be used if automatic failover is enabled, though you can override this with the --forcemanual option; use it with caution. |
+| -transitionToStandby [--forcemanual] \<serviceId\> | Transitions the service into Standby state. This command cannot be used if automatic failover is enabled, though you can override this with the --forcemanual option; use it with caution. |
+| -failover [--forcefence] [--forceactive] \<serviceId1\> \<serviceId2\> | Initiate a failover from serviceId1 to serviceId2. If the --forceactive option is used, the failover proceeds to the target service even if it is not ready. This command cannot be used if automatic failover is enabled. |
+| -getServiceState \<serviceId\> | Returns the state of the service. |
+| -checkHealth \<serviceId\> | Requests that the service perform a health check. The RMAdmin tool will exit with a non-zero exit code if the check fails. |
+| -help [cmd] | Displays help for the given command or all commands if none is specified. |
+
+Runs ResourceManager admin client
+
+### `scmadmin`
+
+Usage: `yarn scmadmin [options] `
+
+| COMMAND\_OPTIONS | Description |
+|:---- |:---- |
+| -help | Help |
+| -runCleanerTask | Runs the cleaner task |
+
+Runs Shared Cache Manager admin client
+
+### `sharedcachemanager`
+
+Usage: `yarn sharedcachemanager`
+
+Start the Shared Cache Manager
+
+### `timelineserver`
+
+Usage: `yarn timelineserver`
+
+Start the TimeLineServer
+
+Files
+-----
+
+| File | Description |
+|:---- |:---- |
+| etc/hadoop/hadoop-env.sh | This file stores the global settings used by all Hadoop shell commands. |
+| etc/hadoop/yarn-env.sh | This file stores overrides used by all YARN shell commands. |
+| etc/hadoop/hadoop-user-functions.sh | This file allows for advanced users to override some shell functionality. |
+| ~/.hadooprc | This stores the personal environment for an individual user. It is processed after the `hadoop-env.sh`, `hadoop-user-functions.sh`, and `yarn-env.sh` files and can contain the same settings. |

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/index.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/index.md
new file mode 100644
index 0000000..9637ea0
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/index.md
@@ -0,0 +1,75 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+MapReduce NextGen aka YARN aka MRv2
+===================================
+
+The new architecture, introduced in hadoop-0.23, divides the two major functions of the JobTracker, resource management and job life-cycle management, into separate components.
+
+The new ResourceManager manages the global assignment of compute resources to applications and the per-application ApplicationMaster manages the application’s scheduling and coordination.
+
+An application is either a single job in the sense of classic MapReduce jobs or a DAG of such jobs.
+
+The ResourceManager and per-machine NodeManager daemon, which manages the user processes on that machine, form the computation fabric.
+
+The per-application ApplicationMaster is, in effect, a framework specific library and is tasked with negotiating resources from the ResourceManager and working with the NodeManager(s) to execute and monitor the tasks.
+
+More details are available in the [Architecture](./YARN.html) document.
+
+Documentation Index
+===================
+
+YARN
+----
+
+* [YARN Architecture](./YARN.html)
+
+* [Capacity Scheduler](./CapacityScheduler.html)
+
+* [Fair Scheduler](./FairScheduler.html)
+
+* [ResourceManager Restart](./ResourceManagerRestart.html)
+
+* [ResourceManager HA](./ResourceManagerHA.html)
+
+* [Web Application Proxy](./WebApplicationProxy.html)
+
+* [YARN Timeline Server](./TimelineServer.html)
+
+* [Writing YARN Applications](./WritingYarnApplications.html)
+
+* [YARN Commands](./YarnCommands.html)
+
+* [Scheduler Load Simulator](../../hadoop-sls/SchedulerLoadSimulator.html)
+
+* [NodeManager Restart](./NodeManagerRestart.html)
+
+* [DockerContainerExecutor](./DockerContainerExecutor.html)
+
+* [Using CGroups](./NodeManagerCGroups.html)
+
+* [Secure Containers](./SecureContainer.html)
+
+* [Registry](./registry/index.html)
+
+YARN REST APIs
+--------------
+
+* [Introduction](./WebServicesIntro.html)
+
+* [Resource Manager](./ResourceManagerRest.html)
+
+* [Node Manager](./NodeManagerRest.html)
+
+


[23/43] hadoop git commit: HDFS-7789. DFSck should resolve the path to support cross-FS symlinks. (gera)

Posted by zj...@apache.org.
HDFS-7789. DFSck should resolve the path to support cross-FS symlinks. (gera)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cbb49257
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cbb49257
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cbb49257

Branch: refs/heads/YARN-2928
Commit: cbb492578ef09300821b7199de54c6508f9d7fe8
Parents: 67ed593
Author: Gera Shegalov <ge...@apache.org>
Authored: Thu Feb 12 04:32:43 2015 -0800
Committer: Gera Shegalov <ge...@apache.org>
Committed: Mon Mar 2 00:55:35 2015 -0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt     |  3 ++
 .../org/apache/hadoop/hdfs/tools/DFSck.java     | 31 +++++++++++++-------
 .../hadoop/hdfs/server/namenode/TestFsck.java   | 14 ++++++---
 .../namenode/TestFsckWithMultipleNameNodes.java | 20 +++++++++++++
 4 files changed, 53 insertions(+), 15 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbb49257/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 5ca16af..d5208da 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -697,6 +697,9 @@ Release 2.7.0 - UNRELEASED
     HDFS-7439. Add BlockOpResponseProto's message to the exception messages.
     (Takanobu Asanuma via szetszwo)
 
+    HDFS-7789. DFSck should resolve the path to support cross-FS symlinks.
+    (gera)
+
   OPTIMIZATIONS
 
     HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbb49257/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
index ec83a90..dc6d9d4 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java
@@ -225,6 +225,14 @@ public class DFSck extends Configured implements Tool {
     return errCode;
   }
   
+
+  private Path getResolvedPath(String dir) throws IOException {
+    Configuration conf = getConf();
+    Path dirPath = new Path(dir);
+    FileSystem fs = dirPath.getFileSystem(conf);
+    return fs.resolvePath(dirPath);
+  }
+
   /**
    * Derive the namenode http address from the current file system,
    * either default or as set by "-fs" in the generic options.
@@ -236,19 +244,12 @@ public class DFSck extends Configured implements Tool {
     Configuration conf = getConf();
 
     //get the filesystem object to verify it is an HDFS system
-    final FileSystem fs;
-    try {
-      fs = target.getFileSystem(conf);
-    } catch (IOException ioe) {
-      System.err.println("FileSystem is inaccessible due to:\n"
-          + StringUtils.stringifyException(ioe));
-      return null;
-    }
+    final FileSystem fs = target.getFileSystem(conf);
     if (!(fs instanceof DistributedFileSystem)) {
       System.err.println("FileSystem is " + fs.getUri());
       return null;
     }
-    
+
     return DFSUtil.getInfoServer(HAUtil.getAddressOfActive(fs), conf,
         DFSUtil.getHttpClientScheme(conf));
   }
@@ -303,8 +304,16 @@ public class DFSck extends Configured implements Tool {
       dir = "/";
     }
 
-    final Path dirpath = new Path(dir);
-    final URI namenodeAddress = getCurrentNamenodeAddress(dirpath);
+    Path dirpath = null;
+    URI namenodeAddress = null;
+    try {
+      dirpath = getResolvedPath(dir);
+      namenodeAddress = getCurrentNamenodeAddress(dirpath);
+    } catch (IOException ioe) {
+      System.err.println("FileSystem is inaccessible due to:\n"
+          + StringUtils.stringifyException(ioe));
+    }
+
     if (namenodeAddress == null) {
       //Error message already output in {@link #getCurrentNamenodeAddress()}
       System.err.println("DFSck exiting.");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbb49257/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
index 1053b5f..409fffc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsck.java
@@ -211,10 +211,16 @@ public class TestFsck {
     try {
       // Audit log should contain one getfileinfo and one fsck
       reader = new BufferedReader(new FileReader(auditLogFile));
-      String line = reader.readLine();
-      assertNotNull(line);
-      assertTrue("Expected getfileinfo event not found in audit log",
-          getfileinfoPattern.matcher(line).matches());
+      String line;
+
+      // one extra getfileinfo stems from resolving the path
+      //
+      for (int i = 0; i < 2; i++) {
+        line = reader.readLine();
+        assertNotNull(line);
+        assertTrue("Expected getfileinfo event not found in audit log",
+            getfileinfoPattern.matcher(line).matches());
+      }
       line = reader.readLine();
       assertNotNull(line);
       assertTrue("Expected fsck event not found in audit log", fsckPattern

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cbb49257/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsckWithMultipleNameNodes.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsckWithMultipleNameNodes.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsckWithMultipleNameNodes.java
index f4cb624..124b301 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsckWithMultipleNameNodes.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsckWithMultipleNameNodes.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.server.namenode;
 
 import java.io.IOException;
+import java.net.URI;
 import java.util.Random;
 import java.util.concurrent.TimeoutException;
 
@@ -26,6 +27,8 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.viewfs.ConfigUtil;
+import org.apache.hadoop.fs.viewfs.ViewFileSystem;
 import org.apache.hadoop.hdfs.DFSConfigKeys;
 import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
@@ -119,6 +122,23 @@ public class TestFsckWithMultipleNameNodes {
         LOG.info("result=" + result);
         Assert.assertTrue(result.contains("Status: HEALTHY"));
       }
+
+      // Test viewfs
+      //
+      LOG.info("RUN_TEST 3");
+      final String[] vurls = new String[nNameNodes];
+      for (int i = 0; i < vurls.length; i++) {
+        String link = "/mount/nn_" + i + FILE_NAME;
+        ConfigUtil.addLink(conf, link, new URI(urls[i]));
+        vurls[i] = "viewfs:" + link;
+      }
+
+      for(int i = 0; i < vurls.length; i++) {
+        LOG.info("vurls[" + i + "]=" + vurls[i]);
+        final String result = TestFsck.runFsck(conf, 0, false, vurls[i]);
+        LOG.info("result=" + result);
+        Assert.assertTrue(result.contains("Status: HEALTHY"));
+      }
     } finally {
       cluster.shutdown();
     }


[10/43] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm
deleted file mode 100644
index 69728fb..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRest.apt.vm
+++ /dev/null
@@ -1,3104 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  ResourceManager REST API's.
-  ---
-  ---
-  ${maven.build.timestamp}
-
-ResourceManager REST API's.
-
-%{toc|section=1|fromDepth=0|toDepth=2}
-
-* Overview
-
-  The ResourceManager REST API's allow the user to get information about the cluster - status on the cluster, metrics on the cluster, scheduler information, information about nodes in the cluster, and information about applications on the cluster.
-  
-* Cluster Information API
-
-  The cluster information resource provides overall information about the cluster. 
-
-** URI
-
-  Both of the following URI's give you the cluster information.
-
-------
-  * http://<rm http address:port>/ws/v1/cluster
-  * http://<rm http address:port>/ws/v1/cluster/info
-------
-
-** HTTP Operations Supported
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Elements of the <clusterInfo> object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| id            | long         | The cluster id |
-*---------------+--------------+-------------------------------+
-| startedOn     | long         | The time the cluster started (in ms since epoch)|
-*---------------+--------------+-------------------------------+
-| state         | string | The ResourceManager state - valid values are: NOTINITED, INITED, STARTED, STOPPED|
-*---------------+--------------+-------------------------------+
-| haState       | string | The ResourceManager HA state - valid values are: INITIALIZING, ACTIVE, STANDBY, STOPPED|
-*---------------+--------------+-------------------------------+
-| resourceManagerVersion | string  | Version of the ResourceManager |
-*---------------+--------------+-------------------------------+
-| resourceManagerBuildVersion | string  | ResourceManager build string with build version, user, and checksum |
-*---------------+--------------+-------------------------------+
-| resourceManagerVersionBuiltOn | string  | Timestamp when ResourceManager was built (in ms since epoch)|
-*---------------+--------------+-------------------------------+
-| hadoopVersion | string  | Version of hadoop common |
-*---------------+--------------+-------------------------------+
-| hadoopBuildVersion | string  | Hadoop common build string with build version, user, and checksum |
-*---------------+--------------+-------------------------------+
-| hadoopVersionBuiltOn | string  | Timestamp when hadoop common was built(in ms since epoch)|
-*---------------+--------------+-------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/info
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-  "clusterInfo":
-  {
-    "id":1324053971963,
-    "startedOn":1324053971963,
-    "state":"STARTED",
-    "resourceManagerVersion":"0.23.1-SNAPSHOT",
-    "resourceManagerBuildVersion":"0.23.1-SNAPSHOT from 1214049 by user1 source checksum 050cd664439d931c8743a6428fd6a693",
-    "resourceManagerVersionBuiltOn":"Tue Dec 13 22:12:48 CST 2011",
-    "hadoopVersion":"0.23.1-SNAPSHOT",
-    "hadoopBuildVersion":"0.23.1-SNAPSHOT from 1214049 by user1 source checksum 11458df3bb77342dca5f917198fad328",
-    "hadoopVersionBuiltOn":"Tue Dec 13 22:12:26 CST 2011"
-  }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
------
-  Accept: application/xml
-  GET http://<rm http address:port>/ws/v1/cluster/info
------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 712
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<clusterInfo>
-  <id>1324053971963</id>
-  <startedOn>1324053971963</startedOn>
-  <state>STARTED</state>
-  <resourceManagerVersion>0.23.1-SNAPSHOT</resourceManagerVersion>
-  <resourceManagerBuildVersion>0.23.1-SNAPSHOT from 1214049 by user1 source checksum 050cd664439d931c8743a6428fd6a693</resourceManagerBuildVersion>
-  <resourceManagerVersionBuiltOn>Tue Dec 13 22:12:48 CST 2011</resourceManagerVersionBuiltOn>
-  <hadoopVersion>0.23.1-SNAPSHOT</hadoopVersion>
-  <hadoopBuildVersion>0.23.1-SNAPSHOT from 1214049 by user1 source checksum 11458df3bb77342dca5f917198fad328</hadoopBuildVersion>
-  <hadoopVersionBuiltOn>Tue Dec 13 22:12:48 CST 2011</hadoopVersionBuiltOn>
-</clusterInfo>
-+---+
-
-* Cluster Metrics API
-
-  The cluster metrics resource provides some overall metrics about the cluster. More detailed metrics should be retrieved from the jmx interface.
-
-** URI
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/metrics
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Elements of the <clusterMetrics> object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type  || Description                   |
-*---------------+--------------+-------------------------------+
-| appsSubmitted | int | The number of applications submitted |
-*---------------+--------------+-------------------------------+
-| appsCompleted | int | The number of applications completed |
-*---------------+--------------+-------------------------------+
-| appsPending | int | The number of applications pending |
-*---------------+--------------+-------------------------------+
-| appsRunning | int | The number of applications running |
-*---------------+--------------+-------------------------------+
-| appsFailed | int | The number of applications failed |
-*---------------+--------------+-------------------------------+
-| appsKilled | int | The number of applications killed |
-*---------------+--------------+-------------------------------+
-| reservedMB    | long         | The amount of memory reserved in MB |
-*---------------+--------------+-------------------------------+
-| availableMB   | long         | The amount of memory available in MB |
-*---------------+--------------+-------------------------------+
-| allocatedMB   | long         | The amount of memory allocated in MB |
-*---------------+--------------+-------------------------------+
-| totalMB       | long         | The amount of total memory in MB |
-*---------------+--------------+-------------------------------+
-| reservedVirtualCores    | long         | The number of reserved virtual cores |
-*---------------+--------------+-------------------------------+
-| availableVirtualCores   | long         | The number of available virtual cores |
-*---------------+--------------+-------------------------------+
-| allocatedVirtualCores   | long         | The number of allocated virtual cores |
-*---------------+--------------+-------------------------------+
-| totalVirtualCores       | long         | The total number of virtual cores |
-*---------------+--------------+-------------------------------+
-| containersAllocated | int | The number of containers allocated |
-*---------------+--------------+-------------------------------+
-| containersReserved | int | The number of containers reserved |
-*---------------+--------------+-------------------------------+
-| containersPending | int | The number of containers pending |
-*---------------+--------------+-------------------------------+
-| totalNodes | int | The total number of nodes |
-*---------------+--------------+-------------------------------+
-| activeNodes | int | The number of active nodes |
-*---------------+--------------+-------------------------------+
-| lostNodes | int | The number of lost nodes |
-*---------------+--------------+-------------------------------+
-| unhealthyNodes | int | The number of unhealthy nodes |
-*---------------+--------------+-------------------------------+
-| decommissionedNodes | int | The number of nodes decommissioned |
-*---------------+--------------+-------------------------------+
-| rebootedNodes | int | The number of nodes rebooted |
-*---------------+--------------+-------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/metrics
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-  {
-    "clusterMetrics":{
-      "appsSubmitted":0,
-      "appsCompleted":0,
-      "appsPending":0,
-      "appsRunning":0,
-      "appsFailed":0,
-      "appsKilled":0,
-      "reservedMB":0,
-      "availableMB":17408,
-      "allocatedMB":0,
-      "reservedVirtualCores":0,
-      "availableVirtualCores":7,
-      "allocatedVirtualCores":1,
-      "containersAllocated":0,
-      "containersReserved":0,
-      "containersPending":0,
-      "totalMB":17408,
-      "totalVirtualCores":8,
-      "totalNodes":1,
-      "lostNodes":0,
-      "unhealthyNodes":0,
-      "decommissionedNodes":0,
-      "rebootedNodes":0,
-      "activeNodes":1
-    }
-  }
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/metrics
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 432
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<clusterMetrics>
-  <appsSubmitted>0</appsSubmitted>
-  <appsCompleted>0</appsCompleted>
-  <appsPending>0</appsPending>
-  <appsRunning>0</appsRunning>
-  <appsFailed>0</appsFailed>
-  <appsKilled>0</appsKilled>
-  <reservedMB>0</reservedMB>
-  <availableMB>17408</availableMB>
-  <allocatedMB>0</allocatedMB>
-  <reservedVirtualCores>0</reservedVirtualCores>
-  <availableVirtualCores>7</availableVirtualCores>
-  <allocatedVirtualCores>1</allocatedVirtualCores>
-  <containersAllocated>0</containersAllocated>
-  <containersReserved>0</containersReserved>
-  <containersPending>0</containersPending>
-  <totalMB>17408</totalMB>
-  <totalVirtualCores>8</totalVirtualCores>
-  <totalNodes>1</totalNodes>
-  <lostNodes>0</lostNodes>
-  <unhealthyNodes>0</unhealthyNodes>
-  <decommissionedNodes>0</decommissionedNodes>
-  <rebootedNodes>0</rebootedNodes>
-  <activeNodes>1</activeNodes>
-</clusterMetrics>
-+---+
-
-* Cluster Scheduler API
-
-  A scheduler resource contains information about the current scheduler configured in a cluster. It currently supports both the Fifo and Capacity Scheduler. You will get different information depending on which scheduler is configured so be sure to look at the type information.
-
-** URI
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/scheduler
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Capacity Scheduler API
-
-  The capacity scheduler supports hierarchical queues. This one request will print information about all the queues and any subqueues they have.
-  Queues that can actually have jobs submitted to them are referred to as leaf queues. These queues have additional data associated with them.
-
-** Elements of the <schedulerInfo> object
-
-*---------------+--------------+-------------------------------+
-|| Item          || Data Type  || Description                   |
-*---------------+--------------+-------------------------------+
-| type | string | Scheduler type - capacityScheduler|
-*---------------+--------------+-------------------------------+
-| capacity | float | Configured queue capacity in percentage relative to its parent queue |
-*---------------+--------------+-------------------------------+
-| usedCapacity | float | Used queue capacity in percentage |
-*---------------+--------------+-------------------------------+
-| maxCapacity | float | Configured maximum queue capacity in percentage relative to its parent queue|
-*---------------+--------------+-------------------------------+
-| queueName | string | Name of the queue |
-*---------------+--------------+-------------------------------+
-| queues | array of queues(JSON)/zero or more queue objects(XML) | A collection of queue resources|
-*---------------+--------------+-------------------------------+
-
-** Elements of the queues object for a Parent queue
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| capacity | float | Configured queue capacity in percentage relative to its parent queue |
-*---------------+--------------+-------------------------------+
-| usedCapacity | float | Used queue capacity in percentage |
-*---------------+--------------+-------------------------------+
-| maxCapacity | float | Configured maximum queue capacity in percentage relative to its parent queue |
-*---------------+--------------+-------------------------------+
-| absoluteCapacity | float | Absolute capacity percentage this queue can use of entire cluster | 
-*---------------+--------------+-------------------------------+
-| absoluteMaxCapacity | float | Absolute maximum capacity percentage this queue can use of the entire cluster | 
-*---------------+--------------+-------------------------------+
-| absoluteUsedCapacity | float | Absolute used capacity percentage this queue is using of the entire cluster |
-*---------------+--------------+-------------------------------+
-| numApplications | int | The number of applications currently in the queue |
-*---------------+--------------+-------------------------------+
-| usedResources | string | A string describing the current resources used by the queue |
-*---------------+--------------+-------------------------------+
-| queueName | string | The name of the queue |
-*---------------+--------------+-------------------------------+
-| state | string of QueueState | The state of the queue |
-*---------------+--------------+-------------------------------+
-| queues | array of queues(JSON)/zero or more queue objects(XML) | A collection of sub-queue information|
-*---------------+--------------+-------------------------------+
-| resourcesUsed | A single resource object | The total amount of resources used by this queue |
-*---------------+--------------+-------------------------------+
-
-** Elements of the queues object for a Leaf queue - contains all elements in parent plus the following:
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| type | String | type of the queue - capacitySchedulerLeafQueueInfo |
-*---------------+--------------+-------------------------------+
-| numActiveApplications | int | The number of active applications in this queue |
-*---------------+--------------+-------------------------------+
-| numPendingApplications | int | The number of pending applications in this queue |
-*---------------+--------------+-------------------------------+
-| numContainers | int | The number of containers being used |
-*---------------+--------------+-------------------------------+
-| maxApplications | int | The maximum number of applications this queue can have |
-*---------------+--------------+-------------------------------+
-| maxApplicationsPerUser | int | The maximum number of applications per user this queue can have |
-*---------------+--------------+-------------------------------+
-| maxActiveApplications | int | The maximum number of active applications this queue can have |
-*---------------+--------------+-------------------------------+
-| maxActiveApplicationsPerUser | int | The maximum number of active applications per user this queue can have|
-*---------------+--------------+-------------------------------+
-| userLimit | int | The minimum user limit percent set in the configuration |
-*---------------+--------------+-------------------------------+
-| userLimitFactor | float | The user limit factor set in the configuration |
-*---------------+--------------+-------------------------------+
-| users | array of users(JSON)/zero or more user objects(XML) | A collection of user objects containing resources used |
-*---------------+--------------+-------------------------------+
-
-** Elements of the user object for users:
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| username | String | The username of the user using the resources |
-*---------------+--------------+-------------------------------+
-| resourcesUsed | A single resource object | The amount of resources used by the user in this queue |
-*---------------+--------------+-------------------------------+
-| numActiveApplications | int | The number of active applications for this user in this queue |
-*---------------+--------------+-------------------------------+
-| numPendingApplications | int | The number of pending applications for this user in this queue |
-*---------------+--------------+-------------------------------+
-
-** Elements of the resource object for resourcesUsed in user and queues:
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| memory | int | The amount of memory used (in MB) |
-*---------------+--------------+-------------------------------+
-| vCores | int | The number of virtual cores |
-*---------------+--------------+-------------------------------+
-
-*** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/scheduler
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-    "scheduler": {
-        "schedulerInfo": {
-            "capacity": 100.0, 
-            "maxCapacity": 100.0, 
-            "queueName": "root", 
-            "queues": {
-                "queue": [
-                    {
-                        "absoluteCapacity": 10.5, 
-                        "absoluteMaxCapacity": 50.0, 
-                        "absoluteUsedCapacity": 0.0, 
-                        "capacity": 10.5, 
-                        "maxCapacity": 50.0, 
-                        "numApplications": 0, 
-                        "queueName": "a", 
-                        "queues": {
-                            "queue": [
-                                {
-                                    "absoluteCapacity": 3.15, 
-                                    "absoluteMaxCapacity": 25.0, 
-                                    "absoluteUsedCapacity": 0.0, 
-                                    "capacity": 30.000002, 
-                                    "maxCapacity": 50.0, 
-                                    "numApplications": 0, 
-                                    "queueName": "a1", 
-                                    "queues": {
-                                        "queue": [
-                                            {
-                                                "absoluteCapacity": 2.6775, 
-                                                "absoluteMaxCapacity": 25.0, 
-                                                "absoluteUsedCapacity": 0.0, 
-                                                "capacity": 85.0, 
-                                                "maxActiveApplications": 1, 
-                                                "maxActiveApplicationsPerUser": 1, 
-                                                "maxApplications": 267, 
-                                                "maxApplicationsPerUser": 267, 
-                                                "maxCapacity": 100.0, 
-                                                "numActiveApplications": 0, 
-                                                "numApplications": 0, 
-                                                "numContainers": 0, 
-                                                "numPendingApplications": 0, 
-                                                "queueName": "a1a", 
-                                                "resourcesUsed": {
-                                                    "memory": 0, 
-                                                    "vCores": 0
-                                                }, 
-                                                "state": "RUNNING", 
-                                                "type": "capacitySchedulerLeafQueueInfo", 
-                                                "usedCapacity": 0.0, 
-                                                "usedResources": "<memory:0, vCores:0>", 
-                                                "userLimit": 100, 
-                                                "userLimitFactor": 1.0, 
-                                                "users": null
-                                            }, 
-                                            {
-                                                "absoluteCapacity": 0.47250003, 
-                                                "absoluteMaxCapacity": 25.0, 
-                                                "absoluteUsedCapacity": 0.0, 
-                                                "capacity": 15.000001, 
-                                                "maxActiveApplications": 1, 
-                                                "maxActiveApplicationsPerUser": 1, 
-                                                "maxApplications": 47, 
-                                                "maxApplicationsPerUser": 47, 
-                                                "maxCapacity": 100.0, 
-                                                "numActiveApplications": 0, 
-                                                "numApplications": 0, 
-                                                "numContainers": 0, 
-                                                "numPendingApplications": 0, 
-                                                "queueName": "a1b", 
-                                                "resourcesUsed": {
-                                                    "memory": 0, 
-                                                    "vCores": 0
-                                                }, 
-                                                "state": "RUNNING", 
-                                                "type": "capacitySchedulerLeafQueueInfo", 
-                                                "usedCapacity": 0.0, 
-                                                "usedResources": "<memory:0, vCores:0>", 
-                                                "userLimit": 100, 
-                                                "userLimitFactor": 1.0, 
-                                                "users": null
-                                            }
-                                        ]
-                                    }, 
-                                    "resourcesUsed": {
-                                        "memory": 0, 
-                                        "vCores": 0
-                                    }, 
-                                    "state": "RUNNING", 
-                                    "usedCapacity": 0.0, 
-                                    "usedResources": "<memory:0, vCores:0>"
-                                }, 
-                                {
-                                    "absoluteCapacity": 7.35, 
-                                    "absoluteMaxCapacity": 50.0, 
-                                    "absoluteUsedCapacity": 0.0, 
-                                    "capacity": 70.0, 
-                                    "maxActiveApplications": 1, 
-                                    "maxActiveApplicationsPerUser": 100, 
-                                    "maxApplications": 735, 
-                                    "maxApplicationsPerUser": 73500, 
-                                    "maxCapacity": 100.0, 
-                                    "numActiveApplications": 0, 
-                                    "numApplications": 0, 
-                                    "numContainers": 0, 
-                                    "numPendingApplications": 0, 
-                                    "queueName": "a2", 
-                                    "resourcesUsed": {
-                                        "memory": 0, 
-                                        "vCores": 0
-                                    }, 
-                                    "state": "RUNNING", 
-                                    "type": "capacitySchedulerLeafQueueInfo", 
-                                    "usedCapacity": 0.0, 
-                                    "usedResources": "<memory:0, vCores:0>", 
-                                    "userLimit": 100, 
-                                    "userLimitFactor": 100.0, 
-                                    "users": null
-                                }
-                            ]
-                        }, 
-                        "resourcesUsed": {
-                            "memory": 0, 
-                            "vCores": 0
-                        }, 
-                        "state": "RUNNING", 
-                        "usedCapacity": 0.0, 
-                        "usedResources": "<memory:0, vCores:0>"
-                    }, 
-                    {
-                        "absoluteCapacity": 89.5, 
-                        "absoluteMaxCapacity": 100.0, 
-                        "absoluteUsedCapacity": 0.0, 
-                        "capacity": 89.5, 
-                        "maxCapacity": 100.0, 
-                        "numApplications": 2, 
-                        "queueName": "b", 
-                        "queues": {
-                            "queue": [
-                                {
-                                    "absoluteCapacity": 53.7, 
-                                    "absoluteMaxCapacity": 100.0, 
-                                    "absoluteUsedCapacity": 0.0, 
-                                    "capacity": 60.000004, 
-                                    "maxActiveApplications": 1, 
-                                    "maxActiveApplicationsPerUser": 100, 
-                                    "maxApplications": 5370, 
-                                    "maxApplicationsPerUser": 537000, 
-                                    "maxCapacity": 100.0, 
-                                    "numActiveApplications": 1, 
-                                    "numApplications": 2, 
-                                    "numContainers": 0, 
-                                    "numPendingApplications": 1, 
-                                    "queueName": "b1", 
-                                    "resourcesUsed": {
-                                        "memory": 0, 
-                                        "vCores": 0
-                                    }, 
-                                    "state": "RUNNING", 
-                                    "type": "capacitySchedulerLeafQueueInfo", 
-                                    "usedCapacity": 0.0, 
-                                    "usedResources": "<memory:0, vCores:0>", 
-                                    "userLimit": 100, 
-                                    "userLimitFactor": 100.0, 
-                                    "users": {
-                                        "user": [
-                                            {
-                                                "numActiveApplications": 0, 
-                                                "numPendingApplications": 1, 
-                                                "resourcesUsed": {
-                                                    "memory": 0, 
-                                                    "vCores": 0
-                                                }, 
-                                                "username": "user2"
-                                            }, 
-                                            {
-                                                "numActiveApplications": 1, 
-                                                "numPendingApplications": 0, 
-                                                "resourcesUsed": {
-                                                    "memory": 0, 
-                                                    "vCores": 0
-                                                }, 
-                                                "username": "user1"
-                                            }
-                                        ]
-                                    }
-                                }, 
-                                {
-                                    "absoluteCapacity": 35.3525, 
-                                    "absoluteMaxCapacity": 100.0, 
-                                    "absoluteUsedCapacity": 0.0, 
-                                    "capacity": 39.5, 
-                                    "maxActiveApplications": 1, 
-                                    "maxActiveApplicationsPerUser": 100, 
-                                    "maxApplications": 3535, 
-                                    "maxApplicationsPerUser": 353500, 
-                                    "maxCapacity": 100.0, 
-                                    "numActiveApplications": 0, 
-                                    "numApplications": 0, 
-                                    "numContainers": 0, 
-                                    "numPendingApplications": 0, 
-                                    "queueName": "b2", 
-                                    "resourcesUsed": {
-                                        "memory": 0, 
-                                        "vCores": 0
-                                    }, 
-                                    "state": "RUNNING", 
-                                    "type": "capacitySchedulerLeafQueueInfo", 
-                                    "usedCapacity": 0.0, 
-                                    "usedResources": "<memory:0, vCores:0>", 
-                                    "userLimit": 100, 
-                                    "userLimitFactor": 100.0, 
-                                    "users": null
-                                }, 
-                                {
-                                    "absoluteCapacity": 0.4475, 
-                                    "absoluteMaxCapacity": 100.0, 
-                                    "absoluteUsedCapacity": 0.0, 
-                                    "capacity": 0.5, 
-                                    "maxActiveApplications": 1, 
-                                    "maxActiveApplicationsPerUser": 100, 
-                                    "maxApplications": 44, 
-                                    "maxApplicationsPerUser": 4400, 
-                                    "maxCapacity": 100.0, 
-                                    "numActiveApplications": 0, 
-                                    "numApplications": 0, 
-                                    "numContainers": 0, 
-                                    "numPendingApplications": 0, 
-                                    "queueName": "b3", 
-                                    "resourcesUsed": {
-                                        "memory": 0, 
-                                        "vCores": 0
-                                    }, 
-                                    "state": "RUNNING", 
-                                    "type": "capacitySchedulerLeafQueueInfo", 
-                                    "usedCapacity": 0.0, 
-                                    "usedResources": "<memory:0, vCores:0>", 
-                                    "userLimit": 100, 
-                                    "userLimitFactor": 100.0, 
-                                    "users": null
-                                }
-                            ]
-                        }, 
-                        "resourcesUsed": {
-                            "memory": 0, 
-                            "vCores": 0
-                        }, 
-                        "state": "RUNNING", 
-                        "usedCapacity": 0.0, 
-                        "usedResources": "<memory:0, vCores:0>"
-                    }
-                ]
-            }, 
-            "type": "capacityScheduler", 
-            "usedCapacity": 0.0
-        }
-    }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
------
-  Accept: application/xml
-  GET http://<rm http address:port>/ws/v1/cluster/scheduler
------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 5778
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<scheduler>
-  <schedulerInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="capacityScheduler">
-    <capacity>100.0</capacity>
-    <usedCapacity>0.0</usedCapacity>
-    <maxCapacity>100.0</maxCapacity>
-    <queueName>root</queueName>
-    <queues>
-      <queue>
-        <capacity>10.5</capacity>
-        <usedCapacity>0.0</usedCapacity>
-        <maxCapacity>50.0</maxCapacity>
-        <absoluteCapacity>10.5</absoluteCapacity>
-        <absoluteMaxCapacity>50.0</absoluteMaxCapacity>
-        <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
-        <numApplications>0</numApplications>
-        <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
-        <queueName>a</queueName>
-        <state>RUNNING</state>
-        <queues>
-          <queue>
-            <capacity>30.000002</capacity>
-            <usedCapacity>0.0</usedCapacity>
-            <maxCapacity>50.0</maxCapacity>
-            <absoluteCapacity>3.15</absoluteCapacity>
-            <absoluteMaxCapacity>25.0</absoluteMaxCapacity>
-            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
-            <numApplications>0</numApplications>
-            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
-            <queueName>a1</queueName>
-            <state>RUNNING</state>
-            <queues>
-              <queue xsi:type="capacitySchedulerLeafQueueInfo">
-                <capacity>85.0</capacity>
-                <usedCapacity>0.0</usedCapacity>
-                <maxCapacity>100.0</maxCapacity>
-                <absoluteCapacity>2.6775</absoluteCapacity>
-                <absoluteMaxCapacity>25.0</absoluteMaxCapacity>
-                <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
-                <numApplications>0</numApplications>
-                <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
-                <queueName>a1a</queueName>
-                <state>RUNNING</state>
-                <resourcesUsed>
-                  <memory>0</memory>
-                  <vCores>0</vCores>
-                </resourcesUsed>
-                <numActiveApplications>0</numActiveApplications>
-                <numPendingApplications>0</numPendingApplications>
-                <numContainers>0</numContainers>
-                <maxApplications>267</maxApplications>
-                <maxApplicationsPerUser>267</maxApplicationsPerUser>
-                <maxActiveApplications>1</maxActiveApplications>
-                <maxActiveApplicationsPerUser>1</maxActiveApplicationsPerUser>
-                <userLimit>100</userLimit>
-                <users/>
-                <userLimitFactor>1.0</userLimitFactor>
-              </queue>
-              <queue xsi:type="capacitySchedulerLeafQueueInfo">
-                <capacity>15.000001</capacity>
-                <usedCapacity>0.0</usedCapacity>
-                <maxCapacity>100.0</maxCapacity>
-                <absoluteCapacity>0.47250003</absoluteCapacity>
-                <absoluteMaxCapacity>25.0</absoluteMaxCapacity>
-                <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
-                <numApplications>0</numApplications>
-                <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
-                <queueName>a1b</queueName>
-                <state>RUNNING</state>
-                <resourcesUsed>
-                  <memory>0</memory>
-                  <vCores>0</vCores>
-                </resourcesUsed>
-                <numActiveApplications>0</numActiveApplications>
-                <numPendingApplications>0</numPendingApplications>
-                <numContainers>0</numContainers>
-                <maxApplications>47</maxApplications>
-                <maxApplicationsPerUser>47</maxApplicationsPerUser>
-                <maxActiveApplications>1</maxActiveApplications>
-                <maxActiveApplicationsPerUser>1</maxActiveApplicationsPerUser>
-                <userLimit>100</userLimit>
-                <users/>
-                <userLimitFactor>1.0</userLimitFactor>
-              </queue>
-            </queues>
-            <resourcesUsed>
-              <memory>0</memory>
-              <vCores>0</vCores>
-            </resourcesUsed>
-          </queue>
-          <queue xsi:type="capacitySchedulerLeafQueueInfo">
-            <capacity>70.0</capacity>
-            <usedCapacity>0.0</usedCapacity>
-            <maxCapacity>100.0</maxCapacity>
-            <absoluteCapacity>7.35</absoluteCapacity>
-            <absoluteMaxCapacity>50.0</absoluteMaxCapacity>
-            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
-            <numApplications>0</numApplications>
-            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
-            <queueName>a2</queueName>
-            <state>RUNNING</state>
-            <resourcesUsed>
-              <memory>0</memory>
-              <vCores>0</vCores>
-            </resourcesUsed>
-            <numActiveApplications>0</numActiveApplications>
-            <numPendingApplications>0</numPendingApplications>
-            <numContainers>0</numContainers>
-            <maxApplications>735</maxApplications>
-            <maxApplicationsPerUser>73500</maxApplicationsPerUser>
-            <maxActiveApplications>1</maxActiveApplications>
-            <maxActiveApplicationsPerUser>100</maxActiveApplicationsPerUser>
-            <userLimit>100</userLimit>
-            <users/>
-            <userLimitFactor>100.0</userLimitFactor>
-          </queue>
-        </queues>
-        <resourcesUsed>
-          <memory>0</memory>
-          <vCores>0</vCores>
-        </resourcesUsed>
-      </queue>
-      <queue>
-        <capacity>89.5</capacity>
-        <usedCapacity>0.0</usedCapacity>
-        <maxCapacity>100.0</maxCapacity>
-        <absoluteCapacity>89.5</absoluteCapacity>
-        <absoluteMaxCapacity>100.0</absoluteMaxCapacity>
-        <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
-        <numApplications>2</numApplications>
-        <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
-        <queueName>b</queueName>
-        <state>RUNNING</state>
-        <queues>
-          <queue xsi:type="capacitySchedulerLeafQueueInfo">
-            <capacity>60.000004</capacity>
-            <usedCapacity>0.0</usedCapacity>
-            <maxCapacity>100.0</maxCapacity>
-            <absoluteCapacity>53.7</absoluteCapacity>
-            <absoluteMaxCapacity>100.0</absoluteMaxCapacity>
-            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
-            <numApplications>2</numApplications>
-            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
-            <queueName>b1</queueName>
-            <state>RUNNING</state>
-            <resourcesUsed>
-              <memory>0</memory>
-              <vCores>0</vCores>
-            </resourcesUsed>
-            <numActiveApplications>1</numActiveApplications>
-            <numPendingApplications>1</numPendingApplications>
-            <numContainers>0</numContainers>
-            <maxApplications>5370</maxApplications>
-            <maxApplicationsPerUser>537000</maxApplicationsPerUser>
-            <maxActiveApplications>1</maxActiveApplications>
-            <maxActiveApplicationsPerUser>100</maxActiveApplicationsPerUser>
-            <userLimit>100</userLimit>
-            <users>
-              <user>
-                <username>user2</username>
-                <resourcesUsed>
-                  <memory>0</memory>
-                  <vCores>0</vCores>
-                </resourcesUsed>
-                <numPendingApplications>1</numPendingApplications>
-                <numActiveApplications>0</numActiveApplications>
-              </user>
-              <user>
-                <username>user1</username>
-                <resourcesUsed>
-                  <memory>0</memory>
-                  <vCores>0</vCores>
-                </resourcesUsed>
-                <numPendingApplications>0</numPendingApplications>
-                <numActiveApplications>1</numActiveApplications>
-              </user>
-            </users>
-            <userLimitFactor>100.0</userLimitFactor>
-          </queue>
-          <queue xsi:type="capacitySchedulerLeafQueueInfo">
-            <capacity>39.5</capacity>
-            <usedCapacity>0.0</usedCapacity>
-            <maxCapacity>100.0</maxCapacity>
-            <absoluteCapacity>35.3525</absoluteCapacity>
-            <absoluteMaxCapacity>100.0</absoluteMaxCapacity>
-            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
-            <numApplications>0</numApplications>
-            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
-            <queueName>b2</queueName>
-            <state>RUNNING</state>
-            <resourcesUsed>
-              <memory>0</memory>
-              <vCores>0</vCores>
-            </resourcesUsed>
-            <numActiveApplications>0</numActiveApplications>
-            <numPendingApplications>0</numPendingApplications>
-            <numContainers>0</numContainers>
-            <maxApplications>3535</maxApplications>
-            <maxApplicationsPerUser>353500</maxApplicationsPerUser>
-            <maxActiveApplications>1</maxActiveApplications>
-            <maxActiveApplicationsPerUser>100</maxActiveApplicationsPerUser>
-            <userLimit>100</userLimit>
-            <users/>
-            <userLimitFactor>100.0</userLimitFactor>
-          </queue>
-          <queue xsi:type="capacitySchedulerLeafQueueInfo">
-            <capacity>0.5</capacity>
-            <usedCapacity>0.0</usedCapacity>
-            <maxCapacity>100.0</maxCapacity>
-            <absoluteCapacity>0.4475</absoluteCapacity>
-            <absoluteMaxCapacity>100.0</absoluteMaxCapacity>
-            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
-            <numApplications>0</numApplications>
-            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
-            <queueName>b3</queueName>
-            <state>RUNNING</state>
-            <resourcesUsed>
-              <memory>0</memory>
-              <vCores>0</vCores>
-            </resourcesUsed>
-            <numActiveApplications>0</numActiveApplications>
-            <numPendingApplications>0</numPendingApplications>
-            <numContainers>0</numContainers>
-            <maxApplications>44</maxApplications>
-            <maxApplicationsPerUser>4400</maxApplicationsPerUser>
-            <maxActiveApplications>1</maxActiveApplications>
-            <maxActiveApplicationsPerUser>100</maxActiveApplicationsPerUser>
-            <userLimit>100</userLimit>
-            <users/>
-            <userLimitFactor>100.0</userLimitFactor>
-          </queue>
-        </queues>
-        <resourcesUsed>
-          <memory>0</memory>
-          <vCores>0</vCores>
-        </resourcesUsed>
-      </queue>
-    </queues>
-  </schedulerInfo>
-</scheduler>
-+---+
-
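-  The nested queue objects above can be walked recursively - a parent queue carries a <queues> collection, while a leaf queue carries the capacitySchedulerLeafQueueInfo type instead. A minimal client-side sketch of such a traversal (illustrative only, not part of Hadoop; the ResourceManager address is a placeholder):
-
-+---+
-import json
-import urllib.request
-
-RM_URL = "http://rm.example.com:8088"  # hypothetical address:port of the RM
-
-def get_json(path):
-    # Ask the ResourceManager for the JSON representation explicitly.
-    req = urllib.request.Request(RM_URL + path,
-                                 headers={"Accept": "application/json"})
-    with urllib.request.urlopen(req) as resp:
-        return json.loads(resp.read().decode("utf-8"))
-
-def walk(queue, depth=0):
-    # The root schedulerInfo object carries no absoluteCapacity, hence the default.
-    print("%s%s: absoluteCapacity=%s" % ("  " * depth, queue["queueName"],
-                                         queue.get("absoluteCapacity", 100.0)))
-    # Leaf queues (type capacitySchedulerLeafQueueInfo) have no "queues" child.
-    for child in (queue.get("queues") or {}).get("queue", []):
-        walk(child, depth + 1)
-
-walk(get_json("/ws/v1/cluster/scheduler")["scheduler"]["schedulerInfo"])
-+---+
-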
-** Fifo Scheduler API
-
-** Elements of the <schedulerInfo> object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| type | string | Scheduler type - fifoScheduler |
-*---------------+--------------+-------------------------------+
-| capacity | float | Queue capacity in percentage |
-*---------------+--------------+-------------------------------+
-| usedCapacity | float | Used queue capacity in percentage |
-*---------------+--------------+-------------------------------+
-| qstate | string | State of the queue - valid values are: STOPPED, RUNNING|
-*---------------+--------------+-------------------------------+
-| minQueueMemoryCapacity | int | Minimum queue memory capacity |
-*---------------+--------------+-------------------------------+
-| maxQueueMemoryCapacity | int | Maximum queue memory capacity |
-*---------------+--------------+-------------------------------+
-| numNodes | int | The total number of nodes |
-*---------------+--------------+-------------------------------+
-| usedNodeCapacity | int | The used node capacity |
-*---------------+--------------+-------------------------------+
-| availNodeCapacity | int | The available node capacity |
-*---------------+--------------+-------------------------------+
-| totalNodeCapacity | int | The total node capacity |
-*---------------+--------------+-------------------------------+
-| numContainers | int | The number of containers |
-*---------------+--------------+-------------------------------+
-
-*** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/scheduler
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-  "scheduler":
-  {
-    "schedulerInfo":
-    {
-      "type":"fifoScheduler",
-      "capacity":1,
-      "usedCapacity":"NaN",
-      "qstate":"RUNNING",
-      "minQueueMemoryCapacity":1024,
-      "maxQueueMemoryCapacity":10240,
-      "numNodes":0,
-      "usedNodeCapacity":0,
-      "availNodeCapacity":0,
-      "totalNodeCapacity":0,
-      "numContainers":0
-    }
-  }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/scheduler
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 432
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<scheduler>
-  <schedulerInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="fifoScheduler">
-    <capacity>1.0</capacity>
-    <usedCapacity>NaN</usedCapacity>
-    <qstate>RUNNING</qstate>
-    <minQueueMemoryCapacity>1024</minQueueMemoryCapacity>
-    <maxQueueMemoryCapacity>10240</maxQueueMemoryCapacity>
-    <numNodes>0</numNodes>
-    <usedNodeCapacity>0</usedNodeCapacity>
-    <availNodeCapacity>0</availNodeCapacity>
-    <totalNodeCapacity>0</totalNodeCapacity>
-    <numContainers>0</numContainers>
-  </schedulerInfo>
-</scheduler>
-+---+
-
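-  Either representation can be selected with the Accept header, as the examples above show. A minimal sketch of this content negotiation (illustrative only; the address and the XML namespace handling are assumptions):
-
-+---+
-import urllib.request
-import xml.etree.ElementTree as ET
-
-RM_URL = "http://rm.example.com:8088"  # hypothetical address:port of the RM
-
-# Ask for the XML representation instead of the default JSON.
-req = urllib.request.Request(RM_URL + "/ws/v1/cluster/scheduler",
-                             headers={"Accept": "application/xml"})
-with urllib.request.urlopen(req) as resp:
-    scheduler = ET.fromstring(resp.read())
-
-info = scheduler.find("schedulerInfo")
-# The concrete scheduler type is carried in the xsi:type attribute.
-XSI = "{http://www.w3.org/2001/XMLSchema-instance}"
-print(info.get(XSI + "type"))    # e.g. fifoScheduler
-print(info.findtext("qstate"))   # RUNNING (fifoScheduler responses only)
-+---+
-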
-* {Cluster Applications API}
-
-  With the Applications API, you can obtain a collection of resources, each of which represents an application. When you run a GET operation on this resource, you obtain a collection of Application Objects.
-
-** URI
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/apps
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-  Multiple parameters can be specified for GET operations.  The started and finished times have a begin and end parameter to allow you to specify ranges.  For example, one could request all applications that started between 1:00am and 2:00pm on 12/19/2011 with startedTimeBegin=1324256400000&startedTimeEnd=1324303200000 (both specified in ms since epoch). If the Begin parameter is not specified, it defaults to 0, and if the End parameter is not specified, it defaults to infinity. A sketch of composing these filters follows the parameter list.
-
-------
-  * state [deprecated] - state of the application
-  * states - applications matching the given application states, specified as a comma-separated list.
-  * finalStatus - the final status of the application - reported by the application itself
-  * user - user name
-  * queue - queue name
-  * limit - total number of app objects to be returned
-  * startedTimeBegin - applications with start time beginning with this time, specified in ms since epoch
-  * startedTimeEnd - applications with start time ending with this time, specified in ms since epoch
-  * finishedTimeBegin - applications with finish time beginning with this time, specified in ms since epoch
-  * finishedTimeEnd - applications with finish time ending with this time, specified in ms since epoch
-  * applicationTypes - applications matching the given application types, specified as a comma-separated list.
-  * applicationTags - applications matching any of the given application tags, specified as a comma-separated list.
-------
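-
-  A minimal sketch of composing these filters (illustrative only; the address and the filter values are placeholders):
-
-+---+
-import json
-import urllib.parse
-import urllib.request
-
-RM_URL = "http://rm.example.com:8088"  # hypothetical address:port of the RM
-
-# Example filter values; all of these are optional and combinable.
-params = urllib.parse.urlencode({
-    "states": "FINISHED",
-    "applicationTypes": "MAPREDUCE",
-    "user": "user1",
-    "startedTimeBegin": "1324256400000",  # ms since epoch
-    "startedTimeEnd": "1324303200000",
-    "limit": "10",
-})
-req = urllib.request.Request(RM_URL + "/ws/v1/cluster/apps?" + params,
-                             headers={"Accept": "application/json"})
-with urllib.request.urlopen(req) as resp:
-    body = json.loads(resp.read().decode("utf-8"))
-
-# "apps" may be null in the JSON response when nothing matches, hence the fallbacks.
-for app in ((body.get("apps") or {}).get("app") or []):
-    print(app["id"], app["state"], app["finalStatus"])
-+---+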
-
-** Elements of the <apps> (Applications) object
-
-  When you make a request for the list of applications, the information will be returned as a collection of app objects. 
-  See also {{Application API}} for syntax of the app object.
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| app | array of app objects(JSON)/zero or more application objects(XML) | The collection of application objects |
-*---------------+--------------+--------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/apps
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-  "apps":
-  {
-    "app":
-    [
-       {
-          "finishedTime" : 1326815598530,
-          "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326815542473_0001_01_000001",
-          "trackingUI" : "History",
-          "state" : "FINISHED",
-          "user" : "user1",
-          "id" : "application_1326815542473_0001",
-          "clusterId" : 1326815542473,
-          "finalStatus" : "SUCCEEDED",
-          "amHostHttpAddress" : "host.domain.com:8042",
-          "progress" : 100,
-          "name" : "word count",
-          "startedTime" : 1326815573334,
-          "elapsedTime" : 25196,
-          "diagnostics" : "",
-          "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326815542473_0001/jobhistory/job/job_1326815542473_1_1",
-          "queue" : "default",
-          "allocatedMB" : 0,
-          "allocatedVCores" : 0,
-          "runningContainers" : 0,
-          "memorySeconds" : 151730,
-          "vcoreSeconds" : 103
-       },
-       {
-          "finishedTime" : 1326815789546,
-          "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326815542473_0002_01_000001",
-          "trackingUI" : "History",
-          "state" : "FINISHED",
-          "user" : "user1",
-          "id" : "application_1326815542473_0002",
-          "clusterId" : 1326815542473,
-          "finalStatus" : "SUCCEEDED",
-          "amHostHttpAddress" : "host.domain.com:8042",
-          "progress" : 100,
-          "name" : "Sleep job",
-          "startedTime" : 1326815641380,
-          "elapsedTime" : 148166,
-          "diagnostics" : "",
-          "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326815542473_0002/jobhistory/job/job_1326815542473_2_2",
-          "queue" : "default",
-          "allocatedMB" : 0,
-          "allocatedVCores" : 0,
-          "runningContainers" : 1,
-          "memorySeconds" : 640064,
-          "vcoreSeconds" : 442
-       } 
-    ]
-  }
-}
-+---+
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/apps
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 2459
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<apps>
-  <app>
-    <id>application_1326815542473_0001</id>
-    <user>user1</user>
-    <name>word count</name>
-    <applicationType>MAPREDUCE</applicationType>
-    <queue>default</queue>
-    <state>FINISHED</state>
-    <finalStatus>SUCCEEDED</finalStatus>
-    <progress>100.0</progress>
-    <trackingUI>History</trackingUI>
-    <trackingUrl>http://host.domain.com:8088/proxy/application_1326815542473_0001/jobhistory/job
-/job_1326815542473_1_1</trackingUrl>
-    <diagnostics/>
-    <clusterId>1326815542473</clusterId>
-    <startedTime>1326815573334</startedTime>
-    <finishedTime>1326815598530</finishedTime>
-    <elapsedTime>25196</elapsedTime>
-    <amContainerLogs>http://host.domain.com:8042/node/containerlogs/container_1326815542473_0001
-_01_000001</amContainerLogs>
-    <amHostHttpAddress>host.domain.com:8042</amHostHttpAddress>
-    <allocatedMB>0</allocatedMB>
-    <allocatedVCores>0</allocatedVCores>
-    <runningContainers>0</runningContainers>
-    <memorySeconds>151730</memorySeconds>
-    <vcoreSeconds>103</vcoreSeconds>
-  </app>
-  <app>
-    <id>application_1326815542473_0002</id>
-    <user>user1</user>
-    <name>Sleep job</name>
-    <applicationType>YARN</applicationType>
-    <queue>default</queue>
-    <state>FINISHED</state>
-    <finalStatus>SUCCEEDED</finalStatus>
-    <progress>100.0</progress>
-    <trackingUI>History</trackingUI>
-    <trackingUrl>http://host.domain.com:8088/proxy/application_1326815542473_0002/jobhistory/job/job_1326815542473_2_2</trackingUrl>
-    <diagnostics/>
-    <clusterId>1326815542473</clusterId>
-    <startedTime>1326815641380</startedTime>
-    <finishedTime>1326815789546</finishedTime>
-    <elapsedTime>148166</elapsedTime>
-    <amContainerLogs>http://host.domain.com:8042/node/containerlogs/container_1326815542473_0002_01_000001</amContainerLogs>
-    <amHostHttpAddress>host.domain.com:8042</amHostHttpAddress>
-    <allocatedMB>0</allocatedMB>
-    <allocatedVCores>0</allocatedVCores>
-    <runningContainers>0</runningContainers>
-    <memorySeconds>640064</memorySeconds>
-    <vcoreSeconds>442</vcoreSeconds>
-  </app>
-</apps>
-+---+
-
-* Cluster Application Statistics API
-
-  With the Application Statistics API, you can obtain a collection of triples, each of which contains the application type, the application state, and the number of applications of that type in that state known to the ResourceManager. Note that, due to performance concerns, we currently support at most one applicationType per query; multiple applicationTypes per query, as well as more statistics, may be supported in the future. When you run a GET operation on this resource, you obtain a collection of statItem objects.
-
-** URI
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/appstatistics
-------
-
-** HTTP Operations Supported
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-  Two parameters can be specified; both are case insensitive. A sketch of a typical query follows the parameter list.
-
-------
-  * states - states of the applications, specified as a comma-separated list. If states is not provided, the API will enumerate all application states and return a count for each of them.
-  * applicationTypes - types of the applications, specified as a comma-separated list. If applicationTypes is not provided, the API will count the applications of any application type. In this case, the response shows * to indicate any application type. Note that at most one applicationType is currently supported; supplying more than one results in a BadRequestException.
-------
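-
-  A minimal sketch of such a query (illustrative only; the address is a placeholder):
-
-+---+
-import json
-import urllib.parse
-import urllib.request
-
-RM_URL = "http://rm.example.com:8088"  # hypothetical address:port of the RM
-
-query = urllib.parse.urlencode({
-    "states": "accepted,running,finished",
-    "applicationTypes": "mapreduce",  # at most one type per query
-})
-req = urllib.request.Request(RM_URL + "/ws/v1/cluster/appstatistics?" + query,
-                             headers={"Accept": "application/json"})
-with urllib.request.urlopen(req) as resp:
-    stats = json.loads(resp.read().decode("utf-8"))
-
-for item in stats["appStatInfo"]["statItem"]:
-    print("%(type)s / %(state)s: %(count)d" % item)
-+---+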
-
-** Elements of the <appStatInfo> (statItems) object
-
-  When you make a request for the list of statistics items, the information will be returned as a collection of statItem objects.
-
-*-----------+----------------------------------------------------------------------+-------------------------------------+
-|| Item     || Data Type                                                           || Description                        |
-*-----------+----------------------------------------------------------------------+-------------------------------------+
-| statItem  | array of statItem objects(JSON)/zero or more statItem objects(XML)   | The collection of statItem objects  |
-*-----------+----------------------------------------------------------------------+-------------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/appstatistics?states=accepted,running,finished&applicationTypes=mapreduce
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-  "appStatInfo":
-  {
-    "statItem":
-    [
-       {
-          "state" : "accepted",
-          "type" : "mapreduce",
-          "count" : 4
-       },
-       {
-          "state" : "running",
-          "type" : "mapreduce",
-          "count" : 1
-       },
-       {
-          "state" : "finished",
-          "type" : "mapreduce",
-          "count" : 7
-       }
-    ]
-  }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/appstatistics?states=accepted,running,finished&applicationTypes=mapreduce
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 2459
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<appStatInfo>
-  <statItem>
-    <state>accepted</state>
-    <type>mapreduce</type>
-    <count>4</count>
-  </statItem>
-  <statItem>
-    <state>running</state>
-    <type>mapreduce</type>
-    <count>1</count>
-  </statItem>
-  <statItem>
-    <state>finished</state>
-    <type>mapreduce</type>
-    <count>7</count>
-  </statItem>
-</appStatInfo>
-+---+
-
-* Cluster {Application API}
-
-  An application resource contains information about a particular application that was submitted to a cluster.
-
-** URI
-
-  Use the following URI to obtain an app object for an application identified by the {appid} value.
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/apps/{appid}
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Elements of the <app> (Application) object
-
-  Note that depending on security settings a user might not be able to see all the fields. 
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| id | string  | The application id | 
-*---------------+--------------+--------------------------------+
-| user | string  | The user who started the application |
-*---------------+--------------+--------------------------------+
-| name | string  | The application name |
-*---------------+--------------+--------------------------------+
-| applicationType | string  | The application type |
-*---------------+--------------+--------------------------------+
-| queue | string  | The queue the application was submitted to|
-*---------------+--------------+--------------------------------+
-| state         | string | The application state according to the ResourceManager - valid values are members of the YarnApplicationState enum: NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED|
-*---------------+--------------+--------------------------------+
-| finalStatus | string | The final status of the application if finished - reported by the application itself - valid values are: UNDEFINED, SUCCEEDED, FAILED, KILLED|
-*---------------+--------------+--------------------------------+
-| progress | float | The progress of the application as a percent | 
-*---------------+--------------+--------------------------------+
-| trackingUI | string | Where the tracking url is currently pointing  - History (for history server) or ApplicationMaster |
-*---------------+--------------+--------------------------------+
-| trackingUrl | string | The web URL that can be used to track the application |
-*---------------+--------------+--------------------------------+
-| diagnostics | string | Detailed diagnostics information |
-*---------------+--------------+--------------------------------+
-| clusterId | long | The cluster id |
-*---------------+--------------+--------------------------------+
-| startedTime | long | The time the application started (in ms since epoch)|
-*---------------+--------------+--------------------------------+
-| finishedTime | long | The time in which the application finished (in ms since epoch) |
-*---------------+--------------+--------------------------------+
-| elapsedTime | long | The elapsed time since the application started (in ms)|
-*---------------+--------------+--------------------------------+
-| amContainerLogs | string | The URL of the application master container logs|
-*---------------+--------------+--------------------------------+
-| amHostHttpAddress | string | The HTTP address of the node on which the application master runs |
-*---------------+--------------+--------------------------------+
-| allocatedMB | int | The sum of memory in MB allocated to the application's running containers |
-*---------------+--------------+--------------------------------+
-| allocatedVCores | int | The sum of virtual cores allocated to the application's running containers |
-*---------------+--------------+--------------------------------+
-| runningContainers | int | The number of containers currently running for the application |
-*---------------+--------------+--------------------------------+
-| memorySeconds | long | The amount of memory the application has allocated (megabyte-seconds) |
-*---------------+--------------+--------------------------------+
-| vcoreSeconds | long | The amount of CPU resources the application has allocated (virtual core-seconds) |
-*---------------+--------------+--------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/apps/application_1326821518301_0005
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-   "app" : {
-      "finishedTime" : 1326824991300,
-      "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326821518301_0005_01_000001",
-      "trackingUI" : "History",
-      "state" : "FINISHED",
-      "user" : "user1",
-      "id" : "application_1326821518301_0005",
-      "clusterId" : 1326821518301,
-      "finalStatus" : "SUCCEEDED",
-      "amHostHttpAddress" : "host.domain.com:8042",
-      "progress" : 100,
-      "name" : "Sleep job",
-      "applicationType" : "Yarn",
-      "startedTime" : 1326824544552,
-      "elapsedTime" : 446748,
-      "diagnostics" : "",
-      "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5",
-      "queue" : "a1",
-      "memorySeconds" : 151730,
-      "vcoreSeconds" : 103
-   }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/apps/application_1326821518301_0005
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 847
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<app>
-  <id>application_1326821518301_0005</id>
-  <user>user1</user>
-  <name>Sleep job</name>
-  <queue>a1</queue>
-  <state>FINISHED</state>
-  <finalStatus>SUCCEEDED</finalStatus>
-  <progress>100.0</progress>
-  <trackingUI>History</trackingUI>
-  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
-  <diagnostics/>
-  <clusterId>1326821518301</clusterId>
-  <startedTime>1326824544552</startedTime>
-  <finishedTime>1326824991300</finishedTime>
-  <elapsedTime>446748</elapsedTime>
-  <amContainerLogs>http://host.domain.com:8042/node/containerlogs/container_1326821518301_0005_01_000001</amContainerLogs>
-  <amHostHttpAddress>host.domain.com:8042</amHostHttpAddress>
-  <memorySeconds>151730</memorySeconds>
-  <vcoreSeconds>103</vcoreSeconds>
-</app>
-+---+
-
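-  Since the state and progress fields are updated as the application runs, a client can poll this resource. A minimal sketch (illustrative only; the address and application id are placeholders):
-
-+---+
-import json
-import time
-import urllib.request
-
-RM_URL = "http://rm.example.com:8088"        # hypothetical address:port of the RM
-APP_ID = "application_1326821518301_0005"    # example application id from above
-
-def fetch_app():
-    req = urllib.request.Request(RM_URL + "/ws/v1/cluster/apps/" + APP_ID,
-                                 headers={"Accept": "application/json"})
-    with urllib.request.urlopen(req) as resp:
-        return json.loads(resp.read().decode("utf-8"))["app"]
-
-while True:
-    app = fetch_app()
-    print(app["state"], "%.1f%%" % app["progress"])
-    if app["state"] in ("FINISHED", "FAILED", "KILLED"):
-        print("final status:", app["finalStatus"])
-        break
-    time.sleep(5)  # be gentle on the ResourceManager
-+---+
-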
-* Cluster Application Attempts API
-
-  With the Application Attempts API, you can obtain a collection of resources, each of which represents an application attempt. When you run a GET operation on this resource, you obtain a collection of App Attempt Objects.
-
-** URI
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/apps/{appid}/appattempts
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Elements of the <appAttempts> object
-
-  When you make a request for the list of app attempts, the information will be returned as an array of app attempt objects. 
-
-  appAttempts:
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| appAttempt | array of app attempt objects(JSON)/zero or more app attempt objects(XML) | The collection of app attempt objects |
-*---------------+--------------+--------------------------------+
-
-** Elements of the <appAttempt> object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| id | string | The app attempt id |
-*---------------+--------------+--------------------------------+
-| nodeId | string | The node id of the node the attempt ran on|
-*---------------+--------------+--------------------------------+
-| nodeHttpAddress | string | The node http address of the node the attempt ran on|
-*---------------+--------------+--------------------------------+
-| logsLink | string | The http link to the app attempt logs |
-*---------------+--------------+--------------------------------+
-| containerId | string | The id of the container for the app attempt |
-*---------------+--------------+--------------------------------+
-| startTime | long | The start time of the attempt (in ms since epoch)|
-*---------------+--------------+--------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/apps/application_1326821518301_0005/appattempts
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-   "appAttempts" : {
-      "appAttempt" : [
-         {
-            "nodeId" : "host.domain.com:8041",
-            "nodeHttpAddress" : "host.domain.com:8042",
-            "startTime" : 1326381444693,
-            "id" : 1,
-            "logsLink" : "http://host.domain.com:8042/node/containerlogs/container_1326821518301_0005_01_000001/user1",
-            "containerId" : "container_1326821518301_0005_01_000001"
-         }
-      ]
-   }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/apps/application_1326821518301_0005/appattempts
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 575
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<appAttempts>
-  <appAttempt>
-    <nodeHttpAddress>host.domain.com:8042</nodeHttpAddress>
-    <nodeId>host.domain.com:8041</nodeId>
-    <id>1</id>
-    <startTime>1326381444693</startTime>
-    <containerId>container_1326821518301_0005_01_000001</containerId>
-    <logsLink>http://host.domain.com:8042/node/containerlogs/container_1326821518301_0005_01_000001/user1</logsLink>
-  </appAttempt>
-</appAttempts>
-+---+
-
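-  A minimal sketch (illustrative only; the address and application id are placeholders) that lists each attempt with its container and log link:
-
-+---+
-import json
-import urllib.request
-
-RM_URL = "http://rm.example.com:8088"        # hypothetical address:port of the RM
-APP_ID = "application_1326821518301_0005"    # example application id from above
-
-req = urllib.request.Request(
-    RM_URL + "/ws/v1/cluster/apps/" + APP_ID + "/appattempts",
-    headers={"Accept": "application/json"})
-with urllib.request.urlopen(req) as resp:
-    body = json.loads(resp.read().decode("utf-8"))
-
-for attempt in body["appAttempts"]["appAttempt"]:
-    print("attempt %d on node %s" % (attempt["id"], attempt["nodeId"]))
-    print("  container:", attempt["containerId"])
-    print("  logs:     ", attempt["logsLink"])
-+---+
-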
-* Cluster Nodes API
-
-  With the Nodes API, you can obtain a collection of resources, each of which represents a node. When you run a GET operation on this resource, you obtain a collection of Node Objects. 
-
-** URI
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/nodes
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  * state - the state of the node
-  * healthy - true or false 
-------
-
-** Elements of the <nodes> object
-
-  When you make a request for the list of nodes, the information will be returned as a collection of node objects. 
-  See also {{Node API}} for syntax of the node object.
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| node | array of node objects(JSON)/zero or more node objects(XML) | A collection of node objects |
-*---------------+--------------+-------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/nodes
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-  "nodes":
-  {
-    "node":
-    [
-      {
-        "rack":"\/default-rack",
-        "state":"NEW",
-        "id":"h2:1235",
-        "nodeHostName":"h2",
-        "nodeHTTPAddress":"h2:2",
-        "healthStatus":"Healthy",
-        "lastHealthUpdate":1324056895432,
-        "healthReport":"Healthy",
-        "numContainers":0,
-        "usedMemoryMB":0,
-        "availMemoryMB":8192,
-        "usedVirtualCores":0,
-        "availableVirtualCores":8
-      },
-      {
-        "rack":"\/default-rack",
-        "state":"NEW",
-        "id":"h1:1234",
-        "nodeHostName":"h1",
-        "nodeHTTPAddress":"h1:2",
-        "healthStatus":"Healthy",
-        "lastHealthUpdate":1324056895092,
-        "healthReport":"Healthy",
-        "numContainers":0,
-        "usedMemoryMB":0,
-        "availMemoryMB":8192,
-        "usedVirtualCores":0,
-        "availableVirtualCores":8
-      }
-    ]
-  }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/nodes
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 1104
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<nodes>
-  <node>
-    <rack>/default-rack</rack>
-    <state>RUNNING</state>
-    <id>h2:1234</id>
-    <nodeHostName>h2</nodeHostName>
-    <nodeHTTPAddress>h2:2</nodeHTTPAddress>
-    <healthStatus>Healthy</healthStatus>
-    <lastHealthUpdate>1324333268447</lastHealthUpdate>
-    <healthReport>Healthy</healthReport>
-    <numContainers>0</numContainers>
-    <usedMemoryMB>0</usedMemoryMB>
-    <availMemoryMB>5120</availMemoryMB>
-    <usedVirtualCores>0</usedVirtualCores>
-    <availableVirtualCores>8</availableVirtualCores>
-  </node>
-  <node>
-    <rack>/default-rack</rack>
-    <state>RUNNING</state>
-    <id>h1:1234</id>
-    <nodeHostName>h1</nodeHostName>
-    <nodeHTTPAddress>h1:2</nodeHTTPAddress>
-    <healthStatus>Healthy</healthStatus>
-    <lastHealthUpdate>1324333268447</lastHealthUpdate>
-    <healthReport>Healthy</healthReport>
-    <numContainers>0</numContainers>
-    <usedMemoryMB>0</usedMemoryMB>
-    <availMemoryMB>5120</availMemoryMB>
-    <usedVirtualCores>0</usedVirtualCores>
-    <availableVirtualCores>8</availableVirtualCores>
-  </node>
-</nodes>
-+---+
-
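-  A minimal sketch (illustrative only; the address is a placeholder) that applies the state filter described above and totals the memory still available across the reported nodes:
-
-+---+
-import json
-import urllib.request
-
-RM_URL = "http://rm.example.com:8088"  # hypothetical address:port of the RM
-
-# The state filter documented above; drop the query string to list all nodes.
-req = urllib.request.Request(RM_URL + "/ws/v1/cluster/nodes?state=RUNNING",
-                             headers={"Accept": "application/json"})
-with urllib.request.urlopen(req) as resp:
-    body = json.loads(resp.read().decode("utf-8"))
-
-total_avail_mb = 0
-for node in ((body.get("nodes") or {}).get("node") or []):
-    total_avail_mb += node["availMemoryMB"]
-    print(node["id"], node["healthStatus"], node["availMemoryMB"], "MB free")
-print("cluster free memory: %d MB" % total_avail_mb)
-+---+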
-
-* Cluster {Node API}
-
-  A node resource contains information about a node in the cluster.  
-
-** URI
-
-  Use the following URI to obtain a Node Object for a node identified by the {nodeid} value.
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/nodes/{nodeid}
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Elements of the <node> object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| rack | string | The rack location of this node |
-*---------------+--------------+-------------------------------+
-| state | string | State of the node - valid values are: NEW, RUNNING, UNHEALTHY, DECOMMISSIONED, LOST, REBOOTED |
-*---------------+--------------+-------------------------------+
-| id | string  | The node id |
-*---------------+--------------+-------------------------------+
-| nodeHostName | string  | The host name of the node|
-*---------------+--------------+-------------------------------+
-| nodeHTTPAddress | string  | The node's HTTP address|
-*---------------+--------------+-------------------------------+
-| healthStatus | string  | The health status of the node - Healthy or Unhealthy |
-*---------------+--------------+-------------------------------+
-| healthReport | string  | A detailed health report |
-*---------------+--------------+-------------------------------+
-| lastHealthUpdate | long | The last time the node reported its health (in ms since epoch)|
-*---------------+--------------+-------------------------------+
-| usedMemoryMB | long | The total amount of memory currently used on the node (in MB)|
-*---------------+--------------+-------------------------------+
-| availMemoryMB | long | The total amount of memory currently available on the node (in MB)|
-*---------------+--------------+-------------------------------+
-| usedVirtualCores | long | The total number of vCores currently used on the node |
-*---------------+--------------+-------------------------------+
-| availableVirtualCores | long | The total number of vCores available on the node |
-*---------------+--------------+-------------------------------+
-| numContainers | int | The total number of containers currently running on the node|
-*---------------+--------------+-------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/nodes/h2:1235
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-  "node":
-  {
-    "rack":"\/default-rack",
-    "state":"NEW",
-    "id":"h2:1235",
-    "nodeHostName":"h2",
-    "nodeHTTPAddress":"h2:2",
-    "healthStatus":"Healthy",
-    "lastHealthUpdate":1324056895432,
-    "healthReport":"Healthy",
-    "numContainers":0,
-    "usedMemoryMB":0,
-    "availMemoryMB":5120,
-    "usedVirtualCores":0,
-    "availableVirtualCores":8
-  }
-}
-+---+
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<rm http address:port>/ws/v1/cluster/nodes/h2:1235
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 552
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<node>
-  <rack>/default-rack</rack>
-  <state>NEW</state>
-  <id>h2:1235</id>
-  <nodeHostName>h2</nodeHostName>
-  <nodeHTTPAddress>h2:2</nodeHTTPAddress>
-  <healthStatus>Healthy</healthStatus>
-  <lastHealthUpdate>1324333268447</lastHealthUpdate>
-  <healthReport>Healthy</healthReport>
-  <numContainers>0</numContainers>
-  <usedMemoryMB>0</usedMemoryMB>
-  <availMemoryMB>5120</availMemoryMB>
-  <usedVirtualCores>0</usedVirtualCores>
-  <availableVirtualCores>8</availableVirtualCores>
-</node>
-+---+
-
-* {Cluster Writeable APIs}
-
-  The sections below refer to APIs which allow you to create and modify applications. These APIs are currently in alpha and may change in the future.
-
-* {Cluster New Application API}
-
-  With the New Application API, you can obtain an application-id which can then be used as part of the {{{Cluster_Applications_API(Submit_Application)}Cluster Submit Applications API}} to submit applications. The response also includes the maximum resource capabilities available on the cluster.
-
-   This feature is currently in the alpha stage and may change in the future.
-
-** URI
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/apps/new-application
-------
-
-** HTTP Operations Supported
-
-------
-  * POST
-------
-
-** Query Parameters Supported
-
-------
-  * None
-------
-
-** Elements of the NewApplication object
-
-  The NewApplication response contains the following elements:
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| application-id | string      | The newly created application id |
-*---------------+--------------+--------------------------------+
-| maximum-resource-capability | object  | The maximum resource capabilities available on this cluster |
-*---------------+--------------+--------------------------------+
-
-  The <maximum-resource-capability> object contains the following elements:
-
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| memory        | int          | The maximum memory available for a container |
-*---------------+--------------+--------------------------------+
-| vCores        | int          | The maximum number of cores available for a container |
-*---------------+--------------+--------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  POST http://<rm http address:port>/ws/v1/cluster/apps/new-application
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-  "application-id":"application_1404198295326_0003",
-  "maximum-resource-capability":
-    {
-      "memory":8192,
-      "vCores":32
-    }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  POST http://<rm http address:port>/ws/v1/cluster/apps/new-application
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 248
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<NewApplication>
-  <application-id>application_1404198295326_0003</application-id>
-  <maximum-resource-capability>
-    <memory>8192</memory>
-    <vCores>32</vCores>
-  </maximum-resource-capability>
-</NewApplication>
-+---+
-
-* {Cluster Applications API(Submit Application)}
-
-  The Submit Applications API can be used to submit applications. Before submitting an application, you must first obtain an application-id using the {{{Cluster_New_Application_API}Cluster New Application API}}. The application-id must be part of the request body. The response contains a URL to the application page which can be used to track the state and progress of your application.
-
-** URI
-
-------
-  * http://<rm http address:port>/ws/v1/cluster/apps
-------
-
-** HTTP Operations Supported 
-
-------
-  * POST
-------
-
-** POST Response Examples
-
-  POST requests can be used to submit apps to the ResourceManager. As mentioned above, an application-id must be obtained first. Successful submissions result in a 202 response code and a Location header specifying where to get information about the app. Please note that in order to submit an app, you must have an authentication filter set up for the HTTP interface. The functionality requires that a username is set in the HttpServletRequest. If no filter is set up, the response will be an "UNAUTHORIZED" response.
-
-  Please note that this feature is currently in the alpha stage and may change in the future.
-
-*** Elements of the POST request object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| application-id | string      | The application id            |
-*---------------+--------------+-------------------------------+
-| application-name | string    | The application name          |
-*---------------+--------------+-------------------------------+
-| queue         | string       | The name of the queue to which the application should be submitted |
-*---------------+--------------+-------------------------------+
-| priority      | int          | The priority of the application |
-*---------------+--------------+-------------------------------+
-| am-container-spec | object   | The application master container launch context, described below |
-*---------------+--------------+-------------------------------+
-| unmanaged-AM  | boolean      | Is the application using an unmanaged application master |
-*---------------+--------------+-------------------------------+
-| max-app-attempts | int       | The max number of attempts for this application |
-*---------------+--------------+-------------------------------+
-| resource      | object       | The resources the application master requires, described below |
-*---------------+--------------+-------------------------------+
-| application-type | string    | The application type (MapReduce, Pig, Hive, etc.) |
-*---------------+--------------+-------------------------------+
-| keep-containers-across-application-attempts | boolean | Should YARN keep the containers used by this application instead of destroying them |
-*---------------+--------------+-------------------------------+
-| application-tags | object    | List of application tags; please see the request examples on how to specify the tags |
-*---------------+--------------+-------------------------------+
-
-  Elements of the <am-container-spec> object
-
-  The am-container-spec object should be used to provide the container launch context for the application master.
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| local-resources | object     | Object describing the resources that need to be localized, described below |
-*---------------+--------------+-------------------------------+
-| environment   | object       | Environment variables for your containers, specified as key value pairs |
-*---------------+--------------+-------------------------------+
-| commands      | object       | The commands for launching your container, in the order in which they should be executed |
-*---------------+--------------+-------------------------------+
-| service-data  | object       | Application-specific service data; key is the name of the auxiliary service, value is base-64 encoding of the data you wish to pass |
-*---------------+--------------+-------------------------------+
-| credentials   | object       | The credentials required for your application to run, described below |
-*---------------+--------------+-------------------------------+
-| application-acls | object    | ACLs for your application; the key can be "VIEW_APP" or "MODIFY_APP", the value is the list of users with the permissions |
-*---------------+--------------+-------------------------------+
-
-  Elements of the <local-resources> object
-
-  The object is a collection of key-value pairs. The key is an identifier for the resource to be localized and the value is the details of the resource. The elements of the value are described below:
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| resource      | string       | Location of the resource to be localized |
-*---------------+--------------+-------------------------------+
-| type          | string       | Type of the resource; options are "ARCHIVE", "FILE", and "PATTERN" |
-*---------------+--------------+-------------------------------+
-| visibility    | string       | Visibility of the resource to be localized; options are "PUBLIC", "PRIVATE", and "APPLICATION" |
-*---------------+--------------+-------------------------------+
-| size          | long         | Size of the resource to be localized |
-*---------------+--------------+-------------------------------+
-| timestamp     | long         | Timestamp of the resource to be localized |
-*---------------+--------------+-------------------------------+
-
-  Elements of the <credentials> object
-
-  The credentials object should be used to pass data required for the application to authenticate itself such as delegation-tokens and secrets.
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| tokens        | object       | Tokens that you wish to pass to your application, specified as key-value pairs. The key is an identifier for the token and the value is the token (which should be obtained using the respective web services) |
-*---------------+--------------+-------------------------------+
-| secrets       | object       | Secrets that you wish to use in your application, specified as key-value pairs. The key is an identifier and the value is the base-64 encoding of the secret |
-*---------------+--------------+-------------------------------+
-
-
-  Elements of the POST request body <resource> object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| memory        | int          | Memory required for each container |
-*---------------+--------------+-------------------------------+
-| vCores        | int          | Virtual cores required for each container |
-*---------------+--------------+-------------------------------+
-
-  <<JSON response>>
-
-  HTTP Request:
-
-+---+
-  POST http://<rm http address:port>/ws/v1/cluster/apps
-  Accept: application/json
-  Content-Type: application/json
-  {
-    "application-id":"application_1404203615263_0001",
-    "application-name":"test",
-    "am-container-spec":
-    {
-      "local-resources":
-      {
-        "entry":
-        [
-          {
-            "key":"AppMaster.jar",
-            "value":
-            {
-              "resource":"hdfs://hdfs-namenode:9000/user/testuser/DistributedShell/demo-app/AppMaster.jar",
-              "type":"FILE",
-              "visibility":"APPLICATION",
-              "size": 43004,
-              "ti

<TRUNCATED>

[06/43] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md
new file mode 100644
index 0000000..b1591bb
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRest.md
@@ -0,0 +1,2640 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+ResourceManager REST APIs
+=========================
+
+* [Overview](#Overview)
+* [Cluster Information API](#Cluster_Information_API)
+* [Cluster Metrics API](#Cluster_Metrics_API)
+* [Cluster Scheduler API](#Cluster_Scheduler_API)
+* [Cluster Applications API](#Cluster_Applications_API)
+* [Cluster Application Statistics API](#Cluster_Application_Statistics_API)
+* [Cluster Application API](#Cluster_Application_API)
+* [Cluster Application Attempts API](#Cluster_Application_Attempts_API)
+* [Cluster Nodes API](#Cluster_Nodes_API)
+* [Cluster Node API](#Cluster_Node_API)
+* [Cluster Writeable APIs](#Cluster_Writeable_APIs)
+* [Cluster New Application API](#Cluster_New_Application_API)
+* [Cluster Applications API(Submit Application)](#Cluster_Applications_APISubmit_Application)
+* [Cluster Application State API](#Cluster_Application_State_API)
+* [Cluster Application Queue API](#Cluster_Application_Queue_API)
+* [Cluster Delegation Tokens API](#Cluster_Delegation_Tokens_API)
+
+Overview
+--------
+
+The ResourceManager REST APIs allow the user to get information about the cluster - status on the cluster, metrics on the cluster, scheduler information, information about nodes in the cluster, and information about applications on the cluster.
+
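+As a quick orientation, the short sketch below shows one way a client can issue these requests. It is an illustrative example only: the host `rm.example.com` and port `8088` are placeholder values for your ResourceManager's HTTP address, and Python's standard-library `urllib` is just one possible client.
+
+```python
+import json
+import urllib.request
+
+# Placeholder ResourceManager HTTP address - substitute your own.
+BASE = "http://rm.example.com:8088/ws/v1/cluster"
+
+# A plain GET returns JSON by default; sending "Accept: application/xml"
+# instead yields the XML form shown throughout this document.
+req = urllib.request.Request(BASE + "/info",
+                             headers={"Accept": "application/json"})
+with urllib.request.urlopen(req) as resp:
+    info = json.load(resp)
+
+print(info["clusterInfo"]["state"])   # e.g. STARTED
+```
+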
+Cluster Information API
+-----------------------
+
+The cluster information resource provides overall information about the cluster.
+
+### URI
+
+Both of the following URIs give you the cluster information.
+
+      * http://<rm http address:port>/ws/v1/cluster
+      * http://<rm http address:port>/ws/v1/cluster/info
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the *clusterInfo* object
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| id | long | The cluster id |
+| startedOn | long | The time the cluster started (in ms since epoch) |
+| state | string | The ResourceManager state - valid values are: NOTINITED, INITED, STARTED, STOPPED |
+| haState | string | The ResourceManager HA state - valid values are: INITIALIZING, ACTIVE, STANDBY, STOPPED |
+| resourceManagerVersion | string | Version of the ResourceManager |
+| resourceManagerBuildVersion | string | ResourceManager build string with build version, user, and checksum |
+| resourceManagerVersionBuiltOn | string | Timestamp when ResourceManager was built (in ms since epoch) |
+| hadoopVersion | string | Version of hadoop common |
+| hadoopBuildVersion | string | Hadoop common build string with build version, user, and checksum |
+| hadoopVersionBuiltOn | string | Timestamp when hadoop common was built (in ms since epoch) |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/info
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  "clusterInfo":
+  {
+    "id":1324053971963,
+    "startedOn":1324053971963,
+    "state":"STARTED",
+    "resourceManagerVersion":"0.23.1-SNAPSHOT",
+    "resourceManagerBuildVersion":"0.23.1-SNAPSHOT from 1214049 by user1 source checksum 050cd664439d931c8743a6428fd6a693",
+    "resourceManagerVersionBuiltOn":"Tue Dec 13 22:12:48 CST 2011",
+    "hadoopVersion":"0.23.1-SNAPSHOT",
+    "hadoopBuildVersion":"0.23.1-SNAPSHOT from 1214049 by user1 source checksum 11458df3bb77342dca5f917198fad328",
+    "hadoopVersionBuiltOn":"Tue Dec 13 22:12:26 CST 2011"
+  }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      Accept: application/xml
+      GET http://<rm http address:port>/ws/v1/cluster/info
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 712
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<clusterInfo>
+  <id>1324053971963</id>
+  <startedOn>1324053971963</startedOn>
+  <state>STARTED</state>
+  <resourceManagerVersion>0.23.1-SNAPSHOT</resourceManagerVersion>
+  <resourceManagerBuildVersion>0.23.1-SNAPSHOT from 1214049 by user1 source checksum 050cd664439d931c8743a6428fd6a693</resourceManagerBuildVersion>
+  <resourceManagerVersionBuiltOn>Tue Dec 13 22:12:48 CST 2011</resourceManagerVersionBuiltOn>
+  <hadoopVersion>0.23.1-SNAPSHOT</hadoopVersion>
+  <hadoopBuildVersion>0.23.1-SNAPSHOT from 1214049 by user1 source checksum 11458df3bb77342dca5f917198fad328</hadoopBuildVersion>
+  <hadoopVersionBuiltOn>Tue Dec 13 22:12:48 CST 2011</hadoopVersionBuiltOn>
+</clusterInfo>
+```
+
+Cluster Metrics API
+-------------------
+
+The cluster metrics resource provides some overall metrics about the cluster. More detailed metrics should be retrieved from the JMX interface.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/metrics
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the *clusterMetrics* object
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| appsSubmitted | int | The number of applications submitted |
+| appsCompleted | int | The number of applications completed |
+| appsPending | int | The number of applications pending |
+| appsRunning | int | The number of applications running |
+| appsFailed | int | The number of applications failed |
+| appsKilled | int | The number of applications killed |
+| reservedMB | long | The amount of memory reserved in MB |
+| availableMB | long | The amount of memory available in MB |
+| allocatedMB | long | The amount of memory allocated in MB |
+| totalMB | long | The amount of total memory in MB |
+| reservedVirtualCores | long | The number of reserved virtual cores |
+| availableVirtualCores | long | The number of available virtual cores |
+| allocatedVirtualCores | long | The number of allocated virtual cores |
+| totalVirtualCores | long | The total number of virtual cores |
+| containersAllocated | int | The number of containers allocated |
+| containersReserved | int | The number of containers reserved |
+| containersPending | int | The number of containers pending |
+| totalNodes | int | The total number of nodes |
+| activeNodes | int | The number of active nodes |
+| lostNodes | int | The number of lost nodes |
+| unhealthyNodes | int | The number of unhealthy nodes |
+| decommissionedNodes | int | The number of nodes decommissioned |
+| rebootedNodes | int | The number of nodes rebooted |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/metrics
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  "clusterMetrics":
+  {
+    "appsSubmitted":0,
+    "appsCompleted":0,
+    "appsPending":0,
+    "appsRunning":0,
+    "appsFailed":0,
+    "appsKilled":0,
+    "reservedMB":0,
+    "availableMB":17408,
+    "allocatedMB":0,
+    "reservedVirtualCores":0,
+    "availableVirtualCores":7,
+    "allocatedVirtualCores":1,
+    "containersAllocated":0,
+    "containersReserved":0,
+    "containersPending":0,
+    "totalMB":17408,
+    "totalVirtualCores":8,
+    "totalNodes":1,
+    "lostNodes":0,
+    "unhealthyNodes":0,
+    "decommissionedNodes":0,
+    "rebootedNodes":0,
+    "activeNodes":1
+  }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/metrics
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 432
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<clusterMetrics>
+  <appsSubmitted>0</appsSubmitted>
+  <appsCompleted>0</appsCompleted>
+  <appsPending>0</appsPending>
+  <appsRunning>0</appsRunning>
+  <appsFailed>0</appsFailed>
+  <appsKilled>0</appsKilled>
+  <reservedMB>0</reservedMB>
+  <availableMB>17408</availableMB>
+  <allocatedMB>0</allocatedMB>
+  <reservedVirtualCores>0</reservedVirtualCores>
+  <availableVirtualCores>7</availableVirtualCores>
+  <allocatedVirtualCores>1</allocatedVirtualCores>
+  <containersAllocated>0</containersAllocated>
+  <containersReserved>0</containersReserved>
+  <containersPending>0</containersPending>
+  <totalMB>17408</totalMB>
+  <totalVirtualCores>8</totalVirtualCores>
+  <totalNodes>1</totalNodes>
+  <lostNodes>0</lostNodes>
+  <unhealthyNodes>0</unhealthyNodes>
+  <decommissionedNodes>0</decommissionedNodes>
+  <rebootedNodes>0</rebootedNodes>
+  <activeNodes>1</activeNodes>
+</clusterMetrics>
+```
+
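+The metrics above are raw counters and gauges, so derived values have to be computed by the caller. The sketch below is one illustrative way to turn the documented memory fields into a utilization percentage; the address is a placeholder, and the percentage itself is a client-side derivation, not a field the API returns.
+
+```python
+import json
+import urllib.request
+
+# Placeholder ResourceManager address - substitute your own.
+url = "http://rm.example.com:8088/ws/v1/cluster/metrics"
+with urllib.request.urlopen(url) as resp:
+    metrics = json.load(resp)["clusterMetrics"]
+
+# allocatedMB and totalMB are documented fields; the ratio is computed here.
+used_pct = 100.0 * metrics["allocatedMB"] / metrics["totalMB"]
+print("memory utilization: %.1f%% of %d MB total" % (used_pct, metrics["totalMB"]))
+print("active nodes: %d of %d" % (metrics["activeNodes"], metrics["totalNodes"]))
+```
+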
+Cluster Scheduler API
+---------------------
+
+A scheduler resource contains information about the current scheduler configured in a cluster. It currently supports both the Fifo Scheduler and the Capacity Scheduler. You will get different information depending on which scheduler is configured, so be sure to look at the type information.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/scheduler
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Capacity Scheduler API
+
+The capacity scheduler supports hierarchical queues. This one request will return information about all the queues and any subqueues they have. Queues that can actually have jobs submitted to them are referred to as leaf queues. These queues have additional data associated with them.
+
+### Elements of the *schedulerInfo* object
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| type | string | Scheduler type - capacityScheduler |
+| capacity | float | Configured queue capacity in percentage relative to its parent queue |
+| usedCapacity | float | Used queue capacity in percentage |
+| maxCapacity | float | Configured maximum queue capacity in percentage relative to its parent queue |
+| queueName | string | Name of the queue |
+| queues | array of queues(JSON)/zero or more queue objects(XML) | A collection of queue resources |
+
+### Elements of the queues object for a Parent queue
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| capacity | float | Configured queue capacity in percentage relative to its parent queue |
+| usedCapacity | float | Used queue capacity in percentage |
+| maxCapacity | float | Configured maximum queue capacity in percentage relative to its parent queue |
+| absoluteCapacity | float | Absolute capacity percentage this queue can use of the entire cluster |
+| absoluteMaxCapacity | float | Absolute maximum capacity percentage this queue can use of the entire cluster |
+| absoluteUsedCapacity | float | Absolute used capacity percentage this queue is using of the entire cluster |
+| numApplications | int | The number of applications currently in the queue |
+| usedResources | string | A string describing the current resources used by the queue |
+| queueName | string | The name of the queue |
+| state | string of QueueState | The state of the queue |
+| queues | array of queues(JSON)/zero or more queue objects(XML) | A collection of sub-queue information |
+| resourcesUsed | A single resource object | The total amount of resources used by this queue |
+
+### Elements of the queues object for a Leaf queue - contains all elements in parent plus the following:
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| type | String | type of the queue - capacitySchedulerLeafQueueInfo |
+| numActiveApplications | int | The number of active applications in this queue |
+| numPendingApplications | int | The number of pending applications in this queue |
+| numContainers | int | The number of containers being used |
+| maxApplications | int | The maximum number of applications this queue can have |
+| maxApplicationsPerUser | int | The maximum number of applications per user this queue can have |
+| maxActiveApplications | int | The maximum number of active applications this queue can have |
+| maxActiveApplicationsPerUser | int | The maximum number of active applications per user this queue can have |
+| userLimit | int | The minimum user limit percent set in the configuration |
+| userLimitFactor | float | The user limit factor set in the configuration |
+| users | array of users(JSON)/zero or more user objects(XML) | A collection of user objects containing resources used |
+
+### Elements of the user object for users:
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| username | String | The username of the user using the resources |
+| resourcesUsed | A single resource object | The amount of resources used by the user in this queue |
+| numActiveApplications | int | The number of active applications for this user in this queue |
+| numPendingApplications | int | The number of pending applications for this user in this queue |
+
+### Elements of the resource object for resourcesUsed in user and queues:
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| memory | int | The amount of memory used (in MB) |
+| vCores | int | The number of virtual cores |
+
+#### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/scheduler
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+    "scheduler": {
+        "schedulerInfo": {
+            "capacity": 100.0, 
+            "maxCapacity": 100.0, 
+            "queueName": "root", 
+            "queues": {
+                "queue": [
+                    {
+                        "absoluteCapacity": 10.5, 
+                        "absoluteMaxCapacity": 50.0, 
+                        "absoluteUsedCapacity": 0.0, 
+                        "capacity": 10.5, 
+                        "maxCapacity": 50.0, 
+                        "numApplications": 0, 
+                        "queueName": "a", 
+                        "queues": {
+                            "queue": [
+                                {
+                                    "absoluteCapacity": 3.15, 
+                                    "absoluteMaxCapacity": 25.0, 
+                                    "absoluteUsedCapacity": 0.0, 
+                                    "capacity": 30.000002, 
+                                    "maxCapacity": 50.0, 
+                                    "numApplications": 0, 
+                                    "queueName": "a1", 
+                                    "queues": {
+                                        "queue": [
+                                            {
+                                                "absoluteCapacity": 2.6775, 
+                                                "absoluteMaxCapacity": 25.0, 
+                                                "absoluteUsedCapacity": 0.0, 
+                                                "capacity": 85.0, 
+                                                "maxActiveApplications": 1, 
+                                                "maxActiveApplicationsPerUser": 1, 
+                                                "maxApplications": 267, 
+                                                "maxApplicationsPerUser": 267, 
+                                                "maxCapacity": 100.0, 
+                                                "numActiveApplications": 0, 
+                                                "numApplications": 0, 
+                                                "numContainers": 0, 
+                                                "numPendingApplications": 0, 
+                                                "queueName": "a1a", 
+                                                "resourcesUsed": {
+                                                    "memory": 0, 
+                                                    "vCores": 0
+                                                }, 
+                                                "state": "RUNNING", 
+                                                "type": "capacitySchedulerLeafQueueInfo", 
+                                                "usedCapacity": 0.0, 
+                                                "usedResources": "<memory:0, vCores:0>", 
+                                                "userLimit": 100, 
+                                                "userLimitFactor": 1.0, 
+                                                "users": null
+                                            }, 
+                                            {
+                                                "absoluteCapacity": 0.47250003, 
+                                                "absoluteMaxCapacity": 25.0, 
+                                                "absoluteUsedCapacity": 0.0, 
+                                                "capacity": 15.000001, 
+                                                "maxActiveApplications": 1, 
+                                                "maxActiveApplicationsPerUser": 1, 
+                                                "maxApplications": 47, 
+                                                "maxApplicationsPerUser": 47, 
+                                                "maxCapacity": 100.0, 
+                                                "numActiveApplications": 0, 
+                                                "numApplications": 0, 
+                                                "numContainers": 0, 
+                                                "numPendingApplications": 0, 
+                                                "queueName": "a1b", 
+                                                "resourcesUsed": {
+                                                    "memory": 0, 
+                                                    "vCores": 0
+                                                }, 
+                                                "state": "RUNNING", 
+                                                "type": "capacitySchedulerLeafQueueInfo", 
+                                                "usedCapacity": 0.0, 
+                                                "usedResources": "<memory:0, vCores:0>", 
+                                                "userLimit": 100, 
+                                                "userLimitFactor": 1.0, 
+                                                "users": null
+                                            }
+                                        ]
+                                    }, 
+                                    "resourcesUsed": {
+                                        "memory": 0, 
+                                        "vCores": 0
+                                    }, 
+                                    "state": "RUNNING", 
+                                    "usedCapacity": 0.0, 
+                                    "usedResources": "<memory:0, vCores:0>"
+                                }, 
+                                {
+                                    "absoluteCapacity": 7.35, 
+                                    "absoluteMaxCapacity": 50.0, 
+                                    "absoluteUsedCapacity": 0.0, 
+                                    "capacity": 70.0, 
+                                    "maxActiveApplications": 1, 
+                                    "maxActiveApplicationsPerUser": 100, 
+                                    "maxApplications": 735, 
+                                    "maxApplicationsPerUser": 73500, 
+                                    "maxCapacity": 100.0, 
+                                    "numActiveApplications": 0, 
+                                    "numApplications": 0, 
+                                    "numContainers": 0, 
+                                    "numPendingApplications": 0, 
+                                    "queueName": "a2", 
+                                    "resourcesUsed": {
+                                        "memory": 0, 
+                                        "vCores": 0
+                                    }, 
+                                    "state": "RUNNING", 
+                                    "type": "capacitySchedulerLeafQueueInfo", 
+                                    "usedCapacity": 0.0, 
+                                    "usedResources": "<memory:0, vCores:0>", 
+                                    "userLimit": 100, 
+                                    "userLimitFactor": 100.0, 
+                                    "users": null
+                                }
+                            ]
+                        }, 
+                        "resourcesUsed": {
+                            "memory": 0, 
+                            "vCores": 0
+                        }, 
+                        "state": "RUNNING", 
+                        "usedCapacity": 0.0, 
+                        "usedResources": "<memory:0, vCores:0>"
+                    }, 
+                    {
+                        "absoluteCapacity": 89.5, 
+                        "absoluteMaxCapacity": 100.0, 
+                        "absoluteUsedCapacity": 0.0, 
+                        "capacity": 89.5, 
+                        "maxCapacity": 100.0, 
+                        "numApplications": 2, 
+                        "queueName": "b", 
+                        "queues": {
+                            "queue": [
+                                {
+                                    "absoluteCapacity": 53.7, 
+                                    "absoluteMaxCapacity": 100.0, 
+                                    "absoluteUsedCapacity": 0.0, 
+                                    "capacity": 60.000004, 
+                                    "maxActiveApplications": 1, 
+                                    "maxActiveApplicationsPerUser": 100, 
+                                    "maxApplications": 5370, 
+                                    "maxApplicationsPerUser": 537000, 
+                                    "maxCapacity": 100.0, 
+                                    "numActiveApplications": 1, 
+                                    "numApplications": 2, 
+                                    "numContainers": 0, 
+                                    "numPendingApplications": 1, 
+                                    "queueName": "b1", 
+                                    "resourcesUsed": {
+                                        "memory": 0, 
+                                        "vCores": 0
+                                    }, 
+                                    "state": "RUNNING", 
+                                    "type": "capacitySchedulerLeafQueueInfo", 
+                                    "usedCapacity": 0.0, 
+                                    "usedResources": "<memory:0, vCores:0>", 
+                                    "userLimit": 100, 
+                                    "userLimitFactor": 100.0, 
+                                    "users": {
+                                        "user": [
+                                            {
+                                                "numActiveApplications": 0, 
+                                                "numPendingApplications": 1, 
+                                                "resourcesUsed": {
+                                                    "memory": 0, 
+                                                    "vCores": 0
+                                                }, 
+                                                "username": "user2"
+                                            }, 
+                                            {
+                                                "numActiveApplications": 1, 
+                                                "numPendingApplications": 0, 
+                                                "resourcesUsed": {
+                                                    "memory": 0, 
+                                                    "vCores": 0
+                                                }, 
+                                                "username": "user1"
+                                            }
+                                        ]
+                                    }
+                                }, 
+                                {
+                                    "absoluteCapacity": 35.3525, 
+                                    "absoluteMaxCapacity": 100.0, 
+                                    "absoluteUsedCapacity": 0.0, 
+                                    "capacity": 39.5, 
+                                    "maxActiveApplications": 1, 
+                                    "maxActiveApplicationsPerUser": 100, 
+                                    "maxApplications": 3535, 
+                                    "maxApplicationsPerUser": 353500, 
+                                    "maxCapacity": 100.0, 
+                                    "numActiveApplications": 0, 
+                                    "numApplications": 0, 
+                                    "numContainers": 0, 
+                                    "numPendingApplications": 0, 
+                                    "queueName": "b2", 
+                                    "resourcesUsed": {
+                                        "memory": 0, 
+                                        "vCores": 0
+                                    }, 
+                                    "state": "RUNNING", 
+                                    "type": "capacitySchedulerLeafQueueInfo", 
+                                    "usedCapacity": 0.0, 
+                                    "usedResources": "<memory:0, vCores:0>", 
+                                    "userLimit": 100, 
+                                    "userLimitFactor": 100.0, 
+                                    "users": null
+                                }, 
+                                {
+                                    "absoluteCapacity": 0.4475, 
+                                    "absoluteMaxCapacity": 100.0, 
+                                    "absoluteUsedCapacity": 0.0, 
+                                    "capacity": 0.5, 
+                                    "maxActiveApplications": 1, 
+                                    "maxActiveApplicationsPerUser": 100, 
+                                    "maxApplications": 44, 
+                                    "maxApplicationsPerUser": 4400, 
+                                    "maxCapacity": 100.0, 
+                                    "numActiveApplications": 0, 
+                                    "numApplications": 0, 
+                                    "numContainers": 0, 
+                                    "numPendingApplications": 0, 
+                                    "queueName": "b3", 
+                                    "resourcesUsed": {
+                                        "memory": 0, 
+                                        "vCores": 0
+                                    }, 
+                                    "state": "RUNNING", 
+                                    "type": "capacitySchedulerLeafQueueInfo", 
+                                    "usedCapacity": 0.0, 
+                                    "usedResources": "<memory:0, vCores:0>", 
+                                    "userLimit": 100, 
+                                    "userLimitFactor": 100.0, 
+                                    "users": null
+                                }
+                            ]
+                        }, 
+                        "resourcesUsed": {
+                            "memory": 0, 
+                            "vCores": 0
+                        }, 
+                        "state": "RUNNING", 
+                        "usedCapacity": 0.0, 
+                        "usedResources": "<memory:0, vCores:0>"
+                    }
+                ]
+            }, 
+            "type": "capacityScheduler", 
+            "usedCapacity": 0.0
+        }
+    }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      Accept: application/xml
+      GET http://<rm http address:port>/ws/v1/cluster/scheduler
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 5778
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<scheduler>
+  <schedulerInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="capacityScheduler">
+    <capacity>100.0</capacity>
+    <usedCapacity>0.0</usedCapacity>
+    <maxCapacity>100.0</maxCapacity>
+    <queueName>root</queueName>
+    <queues>
+      <queue>
+        <capacity>10.5</capacity>
+        <usedCapacity>0.0</usedCapacity>
+        <maxCapacity>50.0</maxCapacity>
+        <absoluteCapacity>10.5</absoluteCapacity>
+        <absoluteMaxCapacity>50.0</absoluteMaxCapacity>
+        <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
+        <numApplications>0</numApplications>
+        <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
+        <queueName>a</queueName>
+        <state>RUNNING</state>
+        <queues>
+          <queue>
+            <capacity>30.000002</capacity>
+            <usedCapacity>0.0</usedCapacity>
+            <maxCapacity>50.0</maxCapacity>
+            <absoluteCapacity>3.15</absoluteCapacity>
+            <absoluteMaxCapacity>25.0</absoluteMaxCapacity>
+            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
+            <numApplications>0</numApplications>
+            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
+            <queueName>a1</queueName>
+            <state>RUNNING</state>
+            <queues>
+              <queue xsi:type="capacitySchedulerLeafQueueInfo">
+                <capacity>85.0</capacity>
+                <usedCapacity>0.0</usedCapacity>
+                <maxCapacity>100.0</maxCapacity>
+                <absoluteCapacity>2.6775</absoluteCapacity>
+                <absoluteMaxCapacity>25.0</absoluteMaxCapacity>
+                <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
+                <numApplications>0</numApplications>
+                <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
+                <queueName>a1a</queueName>
+                <state>RUNNING</state>
+                <resourcesUsed>
+                  <memory>0</memory>
+                  <vCores>0</vCores>
+                </resourcesUsed>
+                <numActiveApplications>0</numActiveApplications>
+                <numPendingApplications>0</numPendingApplications>
+                <numContainers>0</numContainers>
+                <maxApplications>267</maxApplications>
+                <maxApplicationsPerUser>267</maxApplicationsPerUser>
+                <maxActiveApplications>1</maxActiveApplications>
+                <maxActiveApplicationsPerUser>1</maxActiveApplicationsPerUser>
+                <userLimit>100</userLimit>
+                <users/>
+                <userLimitFactor>1.0</userLimitFactor>
+              </queue>
+              <queue xsi:type="capacitySchedulerLeafQueueInfo">
+                <capacity>15.000001</capacity>
+                <usedCapacity>0.0</usedCapacity>
+                <maxCapacity>100.0</maxCapacity>
+                <absoluteCapacity>0.47250003</absoluteCapacity>
+                <absoluteMaxCapacity>25.0</absoluteMaxCapacity>
+                <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
+                <numApplications>0</numApplications>
+                <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
+                <queueName>a1b</queueName>
+                <state>RUNNING</state>
+                <resourcesUsed>
+                  <memory>0</memory>
+                  <vCores>0</vCores>
+                </resourcesUsed>
+                <numActiveApplications>0</numActiveApplications>
+                <numPendingApplications>0</numPendingApplications>
+                <numContainers>0</numContainers>
+                <maxApplications>47</maxApplications>
+                <maxApplicationsPerUser>47</maxApplicationsPerUser>
+                <maxActiveApplications>1</maxActiveApplications>
+                <maxActiveApplicationsPerUser>1</maxActiveApplicationsPerUser>
+                <userLimit>100</userLimit>
+                <users/>
+                <userLimitFactor>1.0</userLimitFactor>
+              </queue>
+            </queues>
+            <resourcesUsed>
+              <memory>0</memory>
+              <vCores>0</vCores>
+            </resourcesUsed>
+          </queue>
+          <queue xsi:type="capacitySchedulerLeafQueueInfo">
+            <capacity>70.0</capacity>
+            <usedCapacity>0.0</usedCapacity>
+            <maxCapacity>100.0</maxCapacity>
+            <absoluteCapacity>7.35</absoluteCapacity>
+            <absoluteMaxCapacity>50.0</absoluteMaxCapacity>
+            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
+            <numApplications>0</numApplications>
+            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
+            <queueName>a2</queueName>
+            <state>RUNNING</state>
+            <resourcesUsed>
+              <memory>0</memory>
+              <vCores>0</vCores>
+            </resourcesUsed>
+            <numActiveApplications>0</numActiveApplications>
+            <numPendingApplications>0</numPendingApplications>
+            <numContainers>0</numContainers>
+            <maxApplications>735</maxApplications>
+            <maxApplicationsPerUser>73500</maxApplicationsPerUser>
+            <maxActiveApplications>1</maxActiveApplications>
+            <maxActiveApplicationsPerUser>100</maxActiveApplicationsPerUser>
+            <userLimit>100</userLimit>
+            <users/>
+            <userLimitFactor>100.0</userLimitFactor>
+          </queue>
+        </queues>
+        <resourcesUsed>
+          <memory>0</memory>
+          <vCores>0</vCores>
+        </resourcesUsed>
+      </queue>
+      <queue>
+        <capacity>89.5</capacity>
+        <usedCapacity>0.0</usedCapacity>
+        <maxCapacity>100.0</maxCapacity>
+        <absoluteCapacity>89.5</absoluteCapacity>
+        <absoluteMaxCapacity>100.0</absoluteMaxCapacity>
+        <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
+        <numApplications>2</numApplications>
+        <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
+        <queueName>b</queueName>
+        <state>RUNNING</state>
+        <queues>
+          <queue xsi:type="capacitySchedulerLeafQueueInfo">
+            <capacity>60.000004</capacity>
+            <usedCapacity>0.0</usedCapacity>
+            <maxCapacity>100.0</maxCapacity>
+            <absoluteCapacity>53.7</absoluteCapacity>
+            <absoluteMaxCapacity>100.0</absoluteMaxCapacity>
+            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
+            <numApplications>2</numApplications>
+            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
+            <queueName>b1</queueName>
+            <state>RUNNING</state>
+            <resourcesUsed>
+              <memory>0</memory>
+              <vCores>0</vCores>
+            </resourcesUsed>
+            <numActiveApplications>1</numActiveApplications>
+            <numPendingApplications>1</numPendingApplications>
+            <numContainers>0</numContainers>
+            <maxApplications>5370</maxApplications>
+            <maxApplicationsPerUser>537000</maxApplicationsPerUser>
+            <maxActiveApplications>1</maxActiveApplications>
+            <maxActiveApplicationsPerUser>100</maxActiveApplicationsPerUser>
+            <userLimit>100</userLimit>
+            <users>
+              <user>
+                <username>user2</username>
+                <resourcesUsed>
+                  <memory>0</memory>
+                  <vCores>0</vCores>
+                </resourcesUsed>
+                <numPendingApplications>1</numPendingApplications>
+                <numActiveApplications>0</numActiveApplications>
+              </user>
+              <user>
+                <username>user1</username>
+                <resourcesUsed>
+                  <memory>0</memory>
+                  <vCores>0</vCores>
+                </resourcesUsed>
+                <numPendingApplications>0</numPendingApplications>
+                <numActiveApplications>1</numActiveApplications>
+              </user>
+            </users>
+            <userLimitFactor>100.0</userLimitFactor>
+          </queue>
+          <queue xsi:type="capacitySchedulerLeafQueueInfo">
+            <capacity>39.5</capacity>
+            <usedCapacity>0.0</usedCapacity>
+            <maxCapacity>100.0</maxCapacity>
+            <absoluteCapacity>35.3525</absoluteCapacity>
+            <absoluteMaxCapacity>100.0</absoluteMaxCapacity>
+            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
+            <numApplications>0</numApplications>
+            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
+            <queueName>b2</queueName>
+            <state>RUNNING</state>
+            <resourcesUsed>
+              <memory>0</memory>
+              <vCores>0</vCores>
+            </resourcesUsed>
+            <numActiveApplications>0</numActiveApplications>
+            <numPendingApplications>0</numPendingApplications>
+            <numContainers>0</numContainers>
+            <maxApplications>3535</maxApplications>
+            <maxApplicationsPerUser>353500</maxApplicationsPerUser>
+            <maxActiveApplications>1</maxActiveApplications>
+            <maxActiveApplicationsPerUser>100</maxActiveApplicationsPerUser>
+            <userLimit>100</userLimit>
+            <users/>
+            <userLimitFactor>100.0</userLimitFactor>
+          </queue>
+          <queue xsi:type="capacitySchedulerLeafQueueInfo">
+            <capacity>0.5</capacity>
+            <usedCapacity>0.0</usedCapacity>
+            <maxCapacity>100.0</maxCapacity>
+            <absoluteCapacity>0.4475</absoluteCapacity>
+            <absoluteMaxCapacity>100.0</absoluteMaxCapacity>
+            <absoluteUsedCapacity>0.0</absoluteUsedCapacity>
+            <numApplications>0</numApplications>
+            <usedResources>&lt;memory:0, vCores:0&gt;</usedResources>
+            <queueName>b3</queueName>
+            <state>RUNNING</state>
+            <resourcesUsed>
+              <memory>0</memory>
+              <vCores>0</vCores>
+            </resourcesUsed>
+            <numActiveApplications>0</numActiveApplications>
+            <numPendingApplications>0</numPendingApplications>
+            <numContainers>0</numContainers>
+            <maxApplications>44</maxApplications>
+            <maxApplicationsPerUser>4400</maxApplicationsPerUser>
+            <maxActiveApplications>1</maxActiveApplications>
+            <maxActiveApplicationsPerUser>100</maxActiveApplicationsPerUser>
+            <userLimit>100</userLimit>
+            <users/>
+            <userLimitFactor>100.0</userLimitFactor>
+          </queue>
+        </queues>
+        <resourcesUsed>
+          <memory>0</memory>
+          <vCores>0</vCores>
+        </resourcesUsed>
+      </queue>
+    </queues>
+  </schedulerInfo>
+</scheduler>
+```
+
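+Because capacity-scheduler queue information is hierarchical, consumers generally have to walk the nested `queues` collections themselves. The sketch below is one illustrative way to do that against the JSON form of the response, recognizing leaf queues by the documented `capacitySchedulerLeafQueueInfo` type marker; the address is a placeholder.
+
+```python
+import json
+import urllib.request
+
+def walk_queues(queue, depth=0):
+    """Recursively print one queue and its children from the JSON response."""
+    leaf = queue.get("type") == "capacitySchedulerLeafQueueInfo"
+    print("%s%s capacity=%s%s" % ("  " * depth, queue["queueName"],
+                                  queue["capacity"], " [leaf]" if leaf else ""))
+    # Parent queues carry a nested "queues" object; leaf queues do not.
+    for child in (queue.get("queues") or {}).get("queue", []):
+        walk_queues(child, depth + 1)
+
+# Placeholder ResourceManager address - substitute your own.
+url = "http://rm.example.com:8088/ws/v1/cluster/scheduler"
+with urllib.request.urlopen(url) as resp:
+    walk_queues(json.load(resp)["scheduler"]["schedulerInfo"])
+```
+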
+### Fifo Scheduler API
+
+### Elements of the *schedulerInfo* object
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| type | string | Scheduler type - fifoScheduler |
+| capacity | float | Queue capacity in percentage |
+| usedCapacity | float | Used queue capacity in percentage |
+| qstate | string | State of the queue - valid values are: STOPPED, RUNNING |
+| minQueueMemoryCapacity | int | Minimum queue memory capacity |
+| maxQueueMemoryCapacity | int | Maximum queue memory capacity |
+| numNodes | int | The total number of nodes |
+| usedNodeCapacity | int | The used node capacity |
+| availNodeCapacity | int | The available node capacity |
+| totalNodeCapacity | int | The total node capacity |
+| numContainers | int | The number of containers |
+
+#### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/scheduler
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  "scheduler":
+  {
+    "schedulerInfo":
+    {
+      "type":"fifoScheduler",
+      "capacity":1,
+      "usedCapacity":"NaN",
+      "qstate":"RUNNING",
+      "minQueueMemoryCapacity":1024,
+      "maxQueueMemoryCapacity":10240,
+      "numNodes":0,
+      "usedNodeCapacity":0,
+      "availNodeCapacity":0,
+      "totalNodeCapacity":0,
+      "numContainers":0
+    }
+  }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/scheduler
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 432
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<scheduler>
+  <schedulerInfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="fifoScheduler">
+    <capacity>1.0</capacity>
+    <usedCapacity>NaN</usedCapacity>
+    <qstate>RUNNING</qstate>
+    <minQueueMemoryCapacity>1024</minQueueMemoryCapacity>
+    <maxQueueMemoryCapacity>10240</maxQueueMemoryCapacity>
+    <numNodes>0</numNodes>
+    <usedNodeCapacity>0</usedNodeCapacity>
+    <availNodeCapacity>0</availNodeCapacity>
+    <totalNodeCapacity>0</totalNodeCapacity>
+    <numContainers>0</numContainers>
+  </schedulerInfo>
+</scheduler>
+```
+
+Cluster Applications API
+------------------------
+
+With the Applications API, you can obtain a collection of resources, each of which represents an application. When you run a GET operation on this resource, you obtain a collection of Application Objects.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/apps
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+Multiple parameters can be specified for GET operations. The started and finished times have begin and end parameters to allow you to specify ranges. For example, one could request all applications that started between 1:00am and 2:00pm on 12/19/2011 with startedTimeBegin=1324256400000&startedTimeEnd=1324303200000 (both in ms since epoch). If the begin parameter is not specified, it defaults to 0, and if the end parameter is not specified, it defaults to infinity.
+
+      * state [deprecated] - state of the application
+      * states - applications matching the given application states, specified as a comma-separated list.
+      * finalStatus - the final status of the application - reported by the application itself
+      * user - user name
+      * queue - queue name
+      * limit - total number of app objects to be returned
+      * startedTimeBegin - applications with start time beginning with this time, specified in ms since epoch
+      * startedTimeEnd - applications with start time ending with this time, specified in ms since epoch
+      * finishedTimeBegin - applications with finish time beginning with this time, specified in ms since epoch
+      * finishedTimeEnd - applications with finish time ending with this time, specified in ms since epoch
+      * applicationTypes - applications matching the given application types, specified as a comma-separated list.
+      * applicationTags - applications matching any of the given application tags, specified as a comma-separated list.
+
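+As a hedged illustration of combining these parameters (the ResourceManager address is a placeholder, and the third-party Python `requests` library is assumed), a filtered query might look like this:
+
+```python
+import requests
+
+RM = "http://rm.example.com:8088"  # placeholder RM address
+
+# Ask for FINISHED MapReduce apps started in a given window; unspecified
+# Begin/End parameters default to 0 and infinity respectively.
+params = {
+    "states": "FINISHED",
+    "applicationTypes": "MAPREDUCE",
+    "startedTimeBegin": 1324256400000,
+    "startedTimeEnd": 1324303200000,
+    "limit": 10,
+}
+resp = requests.get(f"{RM}/ws/v1/cluster/apps", params=params)
+resp.raise_for_status()
+
+apps = resp.json()["apps"]
+# "apps" may be null when nothing matches, so guard before iterating.
+for app in (apps or {}).get("app", []):
+    print(app["id"], app["state"], app["finalStatus"])
+```
+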
+### Elements of the *apps* (Applications) object
+
+When you make a request for the list of applications, the information will be returned as a collection of app objects. See also [Application API](#Application_API) for syntax of the app object.
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| app | array of app objects(JSON)/zero or more application objects(XML) | The collection of application objects |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  "apps":
+  {
+    "app":
+    [
+       {
+          "finishedTime" : 1326815598530,
+          "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326815542473_0001_01_000001",
+          "trackingUI" : "History",
+          "state" : "FINISHED",
+          "user" : "user1",
+          "id" : "application_1326815542473_0001",
+          "clusterId" : 1326815542473,
+          "finalStatus" : "SUCCEEDED",
+          "amHostHttpAddress" : "host.domain.com:8042",
+          "progress" : 100,
+          "name" : "word count",
+          "startedTime" : 1326815573334,
+          "elapsedTime" : 25196,
+          "diagnostics" : "",
+          "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326815542473_0001/jobhistory/job/job_1326815542473_1_1",
+          "queue" : "default",
+          "allocatedMB" : 0,
+          "allocatedVCores" : 0,
+          "runningContainers" : 0,
+          "memorySeconds" : 151730,
+          "vcoreSeconds" : 103
+       },
+       {
+          "finishedTime" : 1326815789546,
+          "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326815542473_0002_01_000001",
+          "trackingUI" : "History",
+          "state" : "FINISHED",
+          "user" : "user1",
+          "id" : "application_1326815542473_0002",
+          "clusterId" : 1326815542473,
+          "finalStatus" : "SUCCEEDED",
+          "amHostHttpAddress" : "host.domain.com:8042",
+          "progress" : 100,
+          "name" : "Sleep job",
+          "startedTime" : 1326815641380,
+          "elapsedTime" : 148166,
+          "diagnostics" : "",
+          "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326815542473_0002/jobhistory/job/job_1326815542473_2_2",
+          "queue" : "default",
+          "allocatedMB" : 0,
+          "allocatedVCores" : 0,
+          "runningContainers" : 1,
+          "memorySeconds" : 640064,
+          "vcoreSeconds" : 442
+       } 
+    ]
+  }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 2459
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<apps>
+  <app>
+    <id>application_1326815542473_0001</id>
+    <user>user1</user>
+    <name>word count</name>
+    <applicationType>MAPREDUCE</applicationType>
+    <queue>default</queue>
+    <state>FINISHED</state>
+    <finalStatus>SUCCEEDED</finalStatus>
+    <progress>100.0</progress>
+    <trackingUI>History</trackingUI>
+    <trackingUrl>http://host.domain.com:8088/proxy/application_1326815542473_0001/jobhistory/job/job_1326815542473_1_1</trackingUrl>
+    <diagnostics/>
+    <clusterId>1326815542473</clusterId>
+    <startedTime>1326815573334</startedTime>
+    <finishedTime>1326815598530</finishedTime>
+    <elapsedTime>25196</elapsedTime>
+    <amContainerLogs>http://host.domain.com:8042/node/containerlogs/container_1326815542473_0001_01_000001</amContainerLogs>
+    <amHostHttpAddress>host.domain.com:8042</amHostHttpAddress>
+    <allocatedMB>0</allocatedMB>
+    <allocatedVCores>0</allocatedVCores>
+    <runningContainers>0</runningContainers>
+    <memorySeconds>151730</memorySeconds>
+    <vcoreSeconds>103</vcoreSeconds>
+  </app>
+  <app>
+    <id>application_1326815542473_0002</id>
+    <user>user1</user>
+    <name>Sleep job</name>
+    <applicationType>YARN</applicationType>
+    <queue>default</queue>
+    <state>FINISHED</state>
+    <finalStatus>SUCCEEDED</finalStatus>
+    <progress>100.0</progress>
+    <trackingUI>History</trackingUI>
+    <trackingUrl>http://host.domain.com:8088/proxy/application_1326815542473_0002/jobhistory/job/job_1326815542473_2_2</trackingUrl>
+    <diagnostics/>
+    <clusterId>1326815542473</clusterId>
+    <startedTime>1326815641380</startedTime>
+    <finishedTime>1326815789546</finishedTime>
+    <elapsedTime>148166</elapsedTime>
+    <amContainerLogs>http://host.domain.com:8042/node/containerlogs/container_1326815542473_0002_01_000001</amContainerLogs>
+    <amHostHttpAddress>host.domain.com:8042</amHostHttpAddress>
+    <allocatedMB>0</allocatedMB>
+    <allocatedVCores>0</allocatedVCores>
+    <runningContainers>0</runningContainers>
+    <memorySeconds>640064</memorySeconds>
+    <vcoreSeconds>442</vcoreSeconds>
+  </app>
+</apps>
+```
+
+Cluster Application Statistics API
+----------------------------------
+
+With the Application Statistics API, you can obtain a collection of triples, each of which contains the application type, the application state, and the number of applications of that type and state in the ResourceManager context. Note that, for performance reasons, at most one applicationType is currently supported per query; multiple applicationTypes per query, as well as more statistics, may be supported in the future. When you run a GET operation on this resource, you obtain a collection of statItem objects.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/appstatistics
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Required
+
+Two parameters can be specified; both are case insensitive.
+
+      * states - states of the applications, specified as a comma-separated list. If states is not provided, the API will enumerate all application states and return the counts of them.
+      * applicationTypes - types of the applications, specified as a comma-separated list. If applicationTypes is not provided, the API will count the applications of any application type. In this case, the response shows * to indicate any application type. Note that currently at most one applicationType is supported; supplying more results in a BadRequestException.
+
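+As a minimal sketch of such a query (placeholder ResourceManager address; third-party Python `requests` library assumed), matching the request shown in the examples below:
+
+```python
+import requests
+
+RM = "http://rm.example.com:8088"  # placeholder RM address
+
+# Count accepted/running/finished MapReduce apps; at most one
+# applicationType may be supplied per query.
+params = {"states": "accepted,running,finished",
+          "applicationTypes": "mapreduce"}
+resp = requests.get(f"{RM}/ws/v1/cluster/appstatistics", params=params)
+resp.raise_for_status()
+
+for item in resp.json()["appStatInfo"]["statItem"]:
+    print(item["type"], item["state"], item["count"])
+```
+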
+### Elements of the *appStatInfo* (statItems) object
+
+When you make a request for the list of statistics items, the information will be returned as a collection of statItem objects
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| statItem | array of statItem objects(JSON)/zero or more statItem objects(XML) | The collection of statItem objects |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/appstatistics?states=accepted,running,finished&applicationTypes=mapreduce
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  "appStatInfo":
+  {
+    "statItem":
+    [
+       {
+          "state" : "accepted",
+          "type" : "mapreduce",
+          "count" : 4
+       },
+       {
+          "state" : "running",
+          "type" : "mapreduce",
+          "count" : 1
+       },
+       {
+          "state" : "finished",
+          "type" : "mapreduce",
+          "count" : 7
+       }
+    ]
+  }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/appstatistics?states=accepted,running,finished&applicationTypes=mapreduce
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 2459
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<appStatInfo>
+  <statItem>
+    <state>accepted</state>
+    <type>mapreduce</type>
+    <count>4</count>
+  </statItem>
+  <statItem>
+    <state>running</state>
+    <type>mapreduce</type>
+    <count>1</count>
+  </statItem>
+  <statItem>
+    <state>finished</state>
+    <type>mapreduce</type>
+    <count>7</count>
+  </statItem>
+</appStatInfo>
+```
+
+Cluster Application API
+-----------------------
+
+An application resource contains information about a particular application that was submitted to a cluster.
+
+### URI
+
+Use the following URI to obtain an app object for an application identified by the appid value.
+
+      * http://<rm http address:port>/ws/v1/cluster/apps/{appid}
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the *app* (Application) object
+
+Note that depending on security settings a user might not be able to see all the fields.
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| id | string | The application id |
+| user | string | The user who started the application |
+| name | string | The application name |
+| applicationType | string | The application type |
+| queue | string | The queue the application was submitted to |
+| state | string | The application state according to the ResourceManager - valid values are members of the YarnApplicationState enum: NEW, NEW\_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED |
+| finalStatus | string | The final status of the application if finished - reported by the application itself - valid values are: UNDEFINED, SUCCEEDED, FAILED, KILLED |
+| progress | float | The progress of the application as a percent |
+| trackingUI | string | Where the tracking url is currently pointing - History (for history server) or ApplicationMaster |
+| trackingUrl | string | The web URL that can be used to track the application |
+| diagnostics | string | Detailed diagnostics information |
+| clusterId | long | The cluster id |
+| startedTime | long | The time in which application started (in ms since epoch) |
+| finishedTime | long | The time in which the application finished (in ms since epoch) |
+| elapsedTime | long | The elapsed time since the application started (in ms) |
+| amContainerLogs | string | The URL of the application master container logs |
+| amHostHttpAddress | string | The HTTP address of the node running the application master |
+| allocatedMB | int | The sum of memory in MB allocated to the application's running containers |
+| allocatedVCores | int | The sum of virtual cores allocated to the application's running containers |
+| runningContainers | int | The number of containers currently running for the application |
+| memorySeconds | long | The amount of memory the application has allocated (megabyte-seconds) |
+| vcoreSeconds | long | The amount of CPU resources the application has allocated (virtual core-seconds) |
+
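+As an illustrative sketch (the ResourceManager address and application id are placeholders; the third-party Python `requests` library is assumed), a single app object can be fetched and a few of these fields printed:
+
+```python
+import requests
+
+RM = "http://rm.example.com:8088"          # placeholder RM address
+APP_ID = "application_1326821518301_0005"  # placeholder application id
+
+resp = requests.get(f"{RM}/ws/v1/cluster/apps/{APP_ID}")
+resp.raise_for_status()
+
+app = resp.json()["app"]
+# elapsedTime is reported in milliseconds.
+print(f"{app['name']} ({app['state']}/{app['finalStatus']}), "
+      f"ran {app['elapsedTime'] / 1000.0:.1f}s in queue {app['queue']}")
+```
+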
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps/application_1326821518301_0005
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+   "app" : {
+      "finishedTime" : 1326824991300,
+      "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326821518301_0005_01_000001",
+      "trackingUI" : "History",
+      "state" : "FINISHED",
+      "user" : "user1",
+      "id" : "application_1326821518301_0005",
+      "clusterId" : 1326821518301,
+      "finalStatus" : "SUCCEEDED",
+      "amHostHttpAddress" : "host.domain.com:8042",
+      "progress" : 100,
+      "name" : "Sleep job",
+      "applicationType" : "Yarn",
+      "startedTime" : 1326824544552,
+      "elapsedTime" : 446748,
+      "diagnostics" : "",
+      "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5",
+      "queue" : "a1",
+      "memorySeconds" : 151730,
+      "vcoreSeconds" : 103
+   }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps/application_1326821518301_0005
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 847
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<app>
+  <id>application_1326821518301_0005</id>
+  <user>user1</user>
+  <name>Sleep job</name>
+  <queue>a1</queue>
+  <state>FINISHED</state>
+  <finalStatus>SUCCEEDED</finalStatus>
+  <progress>100.0</progress>
+  <trackingUI>History</trackingUI>
+  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
+  <diagnostics/>
+  <clusterId>1326821518301</clusterId>
+  <startedTime>1326824544552</startedTime>
+  <finishedTime>1326824991300</finishedTime>
+  <elapsedTime>446748</elapsedTime>
+  <amContainerLogs>http://host.domain.com:8042/node/containerlogs/container_1326821518301_0005_01_000001</amContainerLogs>
+  <amHostHttpAddress>host.domain.com:8042</amHostHttpAddress>
+  <memorySeconds>151730</memorySeconds>
+  <vcoreSeconds>103</vcoreSeconds>
+</app>
+```
+
+Cluster Application Attempts API
+--------------------------------
+
+With the application attempts API, you can obtain a collection of resources that represent an application attempt. When you run a GET operation on this resource, you obtain a collection of App Attempt Objects.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/apps/{appid}/appattempts
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the *appAttempts* object
+
+When you make a request for the list of app attempts, the information will be returned as an array of app attempt objects.
+
+appAttempts:
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| appAttempt | array of app attempt objects(JSON)/zero or more app attempt objects(XML) | The collection of app attempt objects |
+
+### Elements of the *appAttempt* object
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| id | string | The app attempt id |
+| nodeId | string | The node id of the node the attempt ran on |
+| nodeHttpAddress | string | The node http address of the node the attempt ran on |
+| logsLink | string | The http link to the app attempt logs |
+| containerId | string | The id of the container for the app attempt |
+| startTime | long | The start time of the attempt (in ms since epoch) |
+
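+A minimal sketch for listing attempts (placeholder ResourceManager address and application id; third-party Python `requests` library assumed):
+
+```python
+import requests
+
+RM = "http://rm.example.com:8088"          # placeholder RM address
+APP_ID = "application_1326821518301_0005"  # placeholder application id
+
+resp = requests.get(f"{RM}/ws/v1/cluster/apps/{APP_ID}/appattempts")
+resp.raise_for_status()
+
+# Each attempt records the container and node it ran on, plus a logs link.
+for attempt in resp.json()["appAttempts"]["appAttempt"]:
+    print(attempt["id"], attempt["containerId"], attempt["logsLink"])
+```
+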
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps/application_1326821518301_0005/appattempts
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+   "appAttempts" : {
+      "appAttempt" : [
+         {
+            "nodeId" : "host.domain.com:8041",
+            "nodeHttpAddress" : "host.domain.com:8042",
+            "startTime" : 1326381444693,
+            "id" : 1,
+            "logsLink" : "http://host.domain.com:8042/node/containerlogs/container_1326821518301_0005_01_000001/user1",
+            "containerId" : "container_1326821518301_0005_01_000001"
+         }
+      ]
+   }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps/application_1326821518301_0005/appattempts
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 575
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<appAttempts>
+  <appAttempt>
+    <nodeHttpAddress>host.domain.com:8042</nodeHttpAddress>
+    <nodeId>host.domain.com:8041</nodeId>
+    <id>1</id>
+    <startTime>1326381444693</startTime>
+    <containerId>container_1326821518301_0005_01_000001</containerId>
+    <logsLink>http://host.domain.com:8042/node/containerlogs/container_1326821518301_0005_01_000001/user1</logsLink>
+  </appAttempt>
+</appAttempts>
+```
+
+Cluster Nodes API
+-----------------
+
+With the Nodes API, you can obtain a collection of resources, each of which represents a node. When you run a GET operation on this resource, you obtain a collection of Node Objects.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/nodes
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      * state - the state of the node
+      * healthy - true or false; filters nodes by whether they are currently healthy
+
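+As a hedged sketch of using the state filter (placeholder ResourceManager address; third-party Python `requests` library assumed), one could list running nodes and sum their available memory:
+
+```python
+import requests
+
+RM = "http://rm.example.com:8088"  # placeholder RM address
+
+# List RUNNING nodes and total the memory still available on them.
+resp = requests.get(f"{RM}/ws/v1/cluster/nodes", params={"state": "RUNNING"})
+resp.raise_for_status()
+
+nodes = resp.json()["nodes"]
+total_avail_mb = 0
+# "nodes" may be null when nothing matches, so guard before iterating.
+for node in (nodes or {}).get("node", []):
+    total_avail_mb += node["availMemoryMB"]
+    print(node["id"], node["healthStatus"], node["availMemoryMB"], "MB free")
+print("cluster-wide available memory:", total_avail_mb, "MB")
+```
+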
+### Elements of the *nodes* object
+
+When you make a request for the list of nodes, the information will be returned as a collection of node objects. See also [Node API](#Node_API) for syntax of the node object.
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| node | array of node objects(JSON)/zero or more node objects(XML) | A collection of node objects |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/nodes
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  "nodes":
+  {
+    "node":
+    [
+      {
+        "rack":"\/default-rack",
+        "state":"NEW",
+        "id":"h2:1235",
+        "nodeHostName":"h2",
+        "nodeHTTPAddress":"h2:2",
+        "healthStatus":"Healthy",
+        "lastHealthUpdate":1324056895432,
+        "healthReport":"Healthy",
+        "numContainers":0,
+        "usedMemoryMB":0,
+        "availMemoryMB":8192,
+        "usedVirtualCores":0,
+        "availableVirtualCores":8
+      },
+      {
+        "rack":"\/default-rack",
+        "state":"NEW",
+        "id":"h1:1234",
+        "nodeHostName":"h1",
+        "nodeHTTPAddress":"h1:2",
+        "healthStatus":"Healthy",
+        "lastHealthUpdate":1324056895092,
+        "healthReport":"Healthy",
+        "numContainers":0,
+        "usedMemoryMB":0,
+        "availMemoryMB":8192,
+        "usedVirtualCores":0,
+        "availableVirtualCores":8
+      }
+    ]
+  }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/nodes
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 1104
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<nodes>
+  <node>
+    <rack>/default-rack</rack>
+    <state>RUNNING</state>
+    <id>h2:1234</id>
+    <nodeHostName>h2</nodeHostName>
+    <nodeHTTPAddress>h2:2</nodeHTTPAddress>
+    <healthStatus>Healthy</healthStatus>
+    <lastHealthUpdate>1324333268447</lastHealthUpdate>
+    <healthReport>Healthy</healthReport>
+    <numContainers>0</numContainers>
+    <usedMemoryMB>0</usedMemoryMB>
+    <availMemoryMB>5120</availMemoryMB>
+    <usedVirtualCores>0</usedVirtualCores>
+    <availableVirtualCores>8</availableVirtualCores>
+  </node>
+  <node>
+    <rack>/default-rack</rack>
+    <state>RUNNING</state>
+    <id>h1:1234</id>
+    <nodeHostName>h1</nodeHostName>
+    <nodeHTTPAddress>h1:2</nodeHTTPAddress>
+    <healthStatus>Healthy</healthStatus>
+    <lastHealthUpdate>1324333268447</lastHealthUpdate>
+    <healthReport>Healthy</healthReport>
+    <numContainers>0</numContainers>
+    <usedMemoryMB>0</usedMemoryMB>
+    <availMemoryMB>5120</availMemoryMB>
+    <usedVirtualCores>0</usedVirtualCores>
+    <availableVirtualCores>8</availableVirtualCores>
+  </node>
+</nodes>
+```
+
+Cluster Node API
+----------------
+
+A node resource contains information about a node in the cluster.
+
+### URI
+
+Use the following URI to obtain a Node Object for a node identified by the nodeid value.
+
+      * http://<rm http address:port>/ws/v1/cluster/nodes/{nodeid}
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the *node* object
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| rack | string | The rack location of this node |
+| state | string | State of the node - valid values are: NEW, RUNNING, UNHEALTHY, DECOMMISSIONED, LOST, REBOOTED |
+| id | string | The node id |
+| nodeHostName | string | The host name of the node |
+| nodeHTTPAddress | string | The node's HTTP address |
+| healthStatus | string | The health status of the node - Healthy or Unhealthy |
+| healthReport | string | A detailed health report |
+| lastHealthUpdate | long | The last time the node reported its health (in ms since epoch) |
+| usedMemoryMB | long | The total amount of memory currently used on the node (in MB) |
+| availMemoryMB | long | The total amount of memory currently available on the node (in MB) |
+| usedVirtualCores | long | The total number of vCores currently used on the node |
+| availableVirtualCores | long | The total number of vCores available on the node |
+| numContainers | int | The total number of containers currently running on the node |
+
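+A minimal sketch for fetching a single node object (placeholder ResourceManager address and node id; third-party Python `requests` library assumed):
+
+```python
+import requests
+
+RM = "http://rm.example.com:8088"  # placeholder RM address
+NODE_ID = "h2:1235"                # placeholder node id
+
+resp = requests.get(f"{RM}/ws/v1/cluster/nodes/{NODE_ID}")
+resp.raise_for_status()
+
+node = resp.json()["node"]
+print(node["state"], node["healthReport"],
+      node["usedMemoryMB"], "/", node["availMemoryMB"], "MB")
+```
+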
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/nodes/h2:1235
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  "node":
+  {
+    "rack":"\/default-rack",
+    "state":"NEW",
+    "id":"h2:1235",
+    "nodeHostName":"h2",
+    "nodeHTTPAddress":"h2:2",
+    "healthStatus":"Healthy",
+    "lastHealthUpdate":1324056895432,
+    "healthReport":"Healthy",
+    "numContainers":0,
+    "usedMemoryMB":0,
+    "availMemoryMB":5120,
+    "usedVirtualCores":0,
+    "availableVirtualCores":8
+  }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<rm http address:port>/ws/v1/cluster/nodes/h2:1235
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 552
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<node>
+  <rack>/default-rack</rack>
+  <state>NEW</state>
+  <id>h2:1235</id>
+  <nodeHostName>h2</nodeHostName>
+  <nodeHTTPAddress>h2:2</nodeHTTPAddress>
+  <healthStatus>Healthy</healthStatus>
+  <lastHealthUpdate>1324333268447</lastHealthUpdate>
+  <healthReport>Healthy</healthReport>
+  <numContainers>0</numContainers>
+  <usedMemoryMB>0</usedMemoryMB>
+  <availMemoryMB>5120</availMemoryMB>
+  <usedVirtualCores>0</usedVirtualCores>
+  <availableVirtualCores>8</availableVirtualCores>
+</node>
+```
+
+Cluster Writeable APIs
+----------------------
+
+The sections below refer to APIs which allow you to create and modify applications. These APIs are currently in alpha and may change in the future.
+
+Cluster New Application API
+---------------------------
+
+With the New Application API, you can obtain an application-id which can then be used as part of the [Cluster Submit Applications API](#Cluster_Applications_APISubmit_Application) to submit applications. The response also includes the maximum resource capabilities available on the cluster.
+
+This feature is currently in the alpha stage and may change in the future.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/apps/new-application
+
+### HTTP Operations Supported
+
+      * POST
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the NewApplication object
+
+The NewApplication response contains the following elements:
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| application-id | string | The newly created application id |
+| maximum-resource-capability | object | The maximum resource capabilities available on this cluster |
+
+The *maximum-resource-capability* object contains the following elements:
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| memory | int | The maximum amount of memory available for a container |
+| vCores | int | The maximum number of cores available for a container |
+
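+As a minimal sketch (placeholder ResourceManager address; third-party Python `requests` library assumed), the id can be obtained with an empty-bodied POST:
+
+```python
+import requests
+
+RM = "http://rm.example.com:8088"  # placeholder RM address
+
+# POST with an empty body; the RM mints and returns a fresh application id.
+resp = requests.post(f"{RM}/ws/v1/cluster/apps/new-application")
+resp.raise_for_status()
+
+body = resp.json()
+app_id = body["application-id"]
+caps = body["maximum-resource-capability"]
+print(app_id, "max container:", caps["memory"], "MB /", caps["vCores"], "vCores")
+```
+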
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      POST http://<rm http address:port>/ws/v1/cluster/apps/new-application
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  "application-id":"application_1404198295326_0003",
+  "maximum-resource-capability":
+    {
+      "memory":8192,
+      "vCores":32
+    }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      POST http://<rm http address:port>/ws/v1/cluster/apps/new-application
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 248
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<NewApplication>
+  <application-id>application_1404198295326_0003</application-id>
+  <maximum-resource-capability>
+    <memory>8192</memory>
+    <vCores>32</vCores>
+  </maximum-resource-capability>
+</NewApplication>
+```
+
+Cluster Applications API(Submit Application)
+--------------------------------------------
+
+The Submit Applications API can be used to submit applications. Before submitting an application, you must first obtain an application-id using the [Cluster New Application API](#Cluster_New_Application_API). The application-id must be part of the request body. The response contains a URL to the application page which can be used to track the state and progress of your application.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/apps
+
+### HTTP Operations Supported
+
+      * POST
+
+### POST Response Examples
+
+POST requests can be used to submit apps to the ResourceManager. As mentioned above, an application-id must be obtained first. Successful submissions result in a 202 response code and a Location header specifying where to get information about the app. Please note that in order to submit an app, you must have an authentication filter set up for the HTTP interface. The functionality requires that a username be set in the HttpServletRequest. If no filter is set up, the response will be an "UNAUTHORIZED" response.
+
+Please note that this feature is currently in the alpha stage and may change in the future.
+
+#### Elements of the POST request object
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| application-id | string | The application id |
+| application-name | string | The application name |
+| queue | string | The name of the queue to which the application should be submitted |
+| priority | int | The priority of the application |
+| am-container-spec | object | The application master container launch context, described below |
+| unmanaged-AM | boolean | Is the application using an unmanaged application master |
+| max-app-attempts | int | The max number of attempts for this application |
+| resource | object | The resources the application master requires, described below |
+| application-type | string | The application type (MapReduce, Pig, Hive, etc.) |
+| keep-containers-across-application-attempts | boolean | Should YARN keep the containers used by this application instead of destroying them |
+| application-tags | object | List of application tags; please see the request examples on how to specify the tags |
+
+Elements of the *am-container-spec* object
+
+The am-container-spec object should be used to provide the container launch context for the application master.
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| local-resources | object | Object describing the resources that need to be localized, described below |
+| environment | object | Environment variables for your containers, specified as key value pairs |
+| commands | object | The commands for launching your container, in the order in which they should be executed |
+| service-data | object | Application specific service data; the key is the name of the auxiliary service, the value is the base-64 encoding of the data you wish to pass |
+| credentials | object | The credentials required for your application to run, described below |
+| application-acls | object | ACLs for your application; the key can be "VIEW\_APP" or "MODIFY\_APP", the value is the list of users with the permissions |
+
+Elements of the *local-resources* object
+
+The object is a collection of key-value pairs. The key is an identifier for the resource to be localized and the value is the details of the resource. The elements of the value are described below:
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| resource | string | Location of the resource to be localized |
+| type | string | Type of the resource; options are "ARCHIVE", "FILE", and "PATTERN" |
+| visibility | string | Visibility of the resource to be localized; options are "PUBLIC", "PRIVATE", and "APPLICATION" |
+| size | long | Size of the resource to be localized |
+| timestamp | long | Timestamp of the resource to be localized |
+
+Elements of the *credentials* object
+
+The credentials object should be used to pass data required for the application to authenticate itself such as delegation-tokens and secrets.
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| tokens | object | Tokens that you wish to pass to your application, specified as key-value pairs. The key is an identifier for the token and the value is the token (which should be obtained using the respective web services) |
+| secrets | object | Secrets that you wish to use in your application, specified as key-value pairs. The key is an identifier and the value is the base-64 encoding of the secret |
+
+Elements of the POST request body *resource* object
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| memory | int | Memory required for each container |
+| vCores | int | Virtual cores required for each container |
+
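+As a hedged, deliberately minimal sketch of a submission (placeholder ResourceManager address and application id; the third-party Python `requests` library is assumed, and real applications normally also localize resources and set environment variables as in the examples below):
+
+```python
+import requests
+
+RM = "http://rm.example.com:8088"  # placeholder RM address
+
+# Minimal submission body; the application-id must come from the
+# new-application API first.
+payload = {
+    "application-id": "application_1404203615263_0001",  # placeholder id
+    "application-name": "test",
+    "am-container-spec": {
+        "commands": {"command": "sleep 60"},  # placeholder AM command
+    },
+    "resource": {"memory": 1024, "vCores": 1},
+    "application-type": "YARN",
+}
+resp = requests.post(f"{RM}/ws/v1/cluster/apps", json=payload)
+# A successful submission returns 202 plus a Location header for the app.
+print(resp.status_code, resp.headers.get("Location"))
+```
+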
+**JSON response**
+
+HTTP Request:
+
+```json
+  POST http://<rm http address:port>/ws/v1/cluster/apps
+  Accept: application/json
+  Content-Type: application/json
+  {
+    "application-id":"application_1404203615263_0001",
+    "application-name":"test",
+    "am-container-spec":
+    {
+      "local-resources":
+      {
+        "entry":
+        [
+          {
+            "key":"AppMaster.jar",
+            "value":
+            {
+              "resource":"hdfs://hdfs-namenode:9000/user/testuser/DistributedShell/demo-app/AppMaster.jar",
+              "type":"FILE",
+              "visibility":"APPLICATION",
+              "size": 43004,
+              "timestamp": 1405452071209
+            }
+          }
+        ]
+      },
+      "commands":
+      {
+        "command":"{{JAVA_HOME}}/bin/java -Xmx10m org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 1><LOG_DIR>/AppMaster.stdout 2><LOG_DIR>/AppMaster.stderr"
+      },
+      "environment":
+      {
+        "entry":
+        [
+          {
+            "key": "DISTRIBUTEDSHELLSCRIPTTIMESTAMP",
+            "value": "1405459400754"
+          },
+          {
+            "key": "CLASSPATH",
+            "value": "{{CLASSPATH}}<CPS>./*<CPS>{{HADOOP_CONF_DIR}}<CPS>{{HADOOP_COMMON_HOME}}/share/hadoop/common/*<CPS>{{HADOOP_COMMON_HOME}}/share/hadoop/common/lib/*<CPS>{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/*<CPS>{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/lib/*<CPS>{{HADOOP_YARN_HOME}}/share/hadoop/yarn/*<CPS>{{HADOOP_YARN_HOME}}/share/hadoop/yarn/lib/*<CPS>./log4j.properties"
+          },
+          {
+            "key": "DISTRIBUTEDSHELLSCRIPTLEN",
+            "value": "6"
+          },
+          {
+            "key": "DISTRIBUTEDSHELLSCRIPTLOCATION",
+            "value": "hdfs://hdfs-namenode:9000/user/testuser/demo-app/shellCommands"
+          }
+        ]
+      }
+    },
+    "unmanaged-AM":false,
+    "max-app-attempts":2,
+    "resource":
+    {
+      "memory":1024,
+      "vCores":1
+    },
+    "application-type":"YARN",
+    "keep-containers-across-application-attempts":false
+  }
+```
+
+Response Header:
+
+      HTTP/1.1 202
+      Transfer-Encoding: chunked
+      Location: http://<rm http address:port>/ws/v1/cluster/apps/application_1404203615263_0001
+      Content-Type: application/json
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+      No response body
+
+**XML response**
+
+HTTP Request:
+
+```xml
+POST http://<rm http address:port>/ws/v1/cluster/apps
+Accept: application/xml
+Content-Type: application/xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<application-submission-context>
+  <application-id>application_1404204891930_0002</application-id>
+  <application-name>test</application-name>
+  <queue>testqueue</queue>
+  <priority>3</priority>
+  <am-container-spec>
+    <local-resources>
+      <entry>
+        <key>example</key>
+        <value>
+          <resource>hdfs://hdfs-namenode:9000/user/testuser/DistributedShell/demo-app/AppMaster.jar</resource>
+          <type>FILE</type>
+          <visibility>APPLICATION</visibility>
+          <size>43004</size>
+          <timestamp>1405452071209</timestamp>
+        </value>
+      </entry>
+    </local-resources>
+    <environment>
+      <entry>
+        <key>DISTRIBUTEDSHELLSCRIPTTIMESTAMP</key>
+        <value>1405459400754</value>
+      </entry>
+      <entry>
+        <key>CLASSPATH</key>
+        <value>{{CLASSPATH}}&lt;CPS&gt;./*&lt;CPS&gt;{{HADOOP_CONF_DIR}}&lt;CPS&gt;{{HADOOP_COMMON_HOME}}/share/hadoop/common/*&lt;CPS&gt;{{HADOOP_COMMON_HOME}}/share/hadoop/common/lib/*&lt;CPS&gt;{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/*&lt;CPS&gt;{{HADOOP_HDFS_HOME}}/share/hadoop/hdfs/lib/*&lt;CPS&gt;{{HADOOP_YARN_HOME}}/share/hadoop/yarn/*&lt;CPS&gt;{{HADOOP_YARN_HOME}}/share/hadoop/yarn/lib/*&lt;CPS&gt;./log4j.properties</value>
+      </entry>
+      <entry>
+        <key>DISTRIBUTEDSHELLSCRIPTLEN</key>
+        <value>6</value>
+      </entry>
+      <entry>
+        <key>DISTRIBUTEDSHELLSCRIPTLOCATION</key>
+        <value>hdfs://hdfs-namenode:9000/user/testuser/demo-app/shellCommands</value>
+      </entry>
+    </environment>
+    <commands>
+      <command>{{JAVA_HOME}}/bin/java -Xmx10m org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster --container_memory 10 --container_vcores 1 --num_containers 1 --priority 0 1&gt;&lt;LOG_DIR&gt;/AppMaster.stdout 2&gt;&lt;LOG_DIR&gt;/AppMaster.stderr</command>
+    </commands>
+    <service-data>
+      <entry>
+        <key>test</key>
+        <value>dmFsdWUxMg</value>
+      </entry>
+    </service-data>
+    <credentials>
+      <tokens/>
+      <secrets>
+        <entry>
+          <key>secret1</key>
+          <value>c2VjcmV0MQ</value>
+        </entry>
+      </secrets>
+    </credentials>
+    <application-acls>
+      <entry>
+        <key>VIEW_APP</key>
+        <value>testuser3, testuser4</value>
+      </entry>
+      <entry>
+        <key>MODIFY_APP</key>
+        <value>testuser1, testuser2</value>
+      </entry>
+    </application-acls>
+  </am-container-spec>
+  <unmanaged-AM>false</unmanaged-AM>
+  <max-app-attempts>2</max-app-attempts>
+  <resource>
+    <memory>1024</memory>
+    <vCores>1</vCores>
+  </resource>
+  <application-type>YARN</application-type>
+  <keep-containers-across-application-attempts>false</keep-containers-across-application-attempts>
+  <application-tags>
+    <tag>tag 2</tag>
+    <tag>tag1</tag>
+  </application-tags>
+</application-submission-context>
+```
+
+Response Header:
+
+      HTTP/1.1 202
+      Transfer-Encoding: chunked
+      Location: http://<rm http address:port>/ws/v1/cluster/apps/application_1404204891930_0002
+      Content-Type: application/xml
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+      No response body
+
+Cluster Application State API
+-----------------------------
+
+With the application state API, you can query the state of a submitted app, and you can kill a running app by modifying its state to "KILLED" via a PUT request. To perform the PUT operation, authentication has to be set up for the RM web services. In addition, you must be authorized to kill the app. Currently you can only change the state to "KILLED"; an attempt to change the state to any other value results in a 400 error response. Examples of the unauthorized and bad request errors are below. When you carry out a successful PUT, the initial response may be a 202. You can confirm that the app is killed by repeating the PUT request until you get a 200, querying the state using the GET method, or querying for app information and checking the state. In the examples below, we repeat the PUT request and get a 200 response.
+
+Please note that in order to kill an app, you must have an authentication filter set up for the HTTP interface. The functionality requires that a username be set in the HttpServletRequest. If no filter is set up, the response will be an "UNAUTHORIZED" response.
+
+This feature is currently in the alpha stage and may change in the future.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/apps/{appid}/state
+
+### HTTP Operations Supported
+
+      * GET
+      * PUT
+
+### Query Parameters Supported
+
+      None
+
+### Elements of *appstate* object
+
+When you make a request for the state of an app, the information returned has the following fields
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| state | string | The application state - can be one of "NEW", "NEW\_SAVING", "SUBMITTED", "ACCEPTED", "RUNNING", "FINISHED", "FAILED", "KILLED" |
+
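+A minimal sketch of the repeat-until-200 kill pattern described above (placeholder ResourceManager address and application id; third-party Python `requests` library assumed; a real deployment additionally requires the authentication filter mentioned earlier):
+
+```python
+import time
+
+import requests
+
+RM = "http://rm.example.com:8088"          # placeholder RM address
+APP_ID = "application_1399397633663_0003"  # placeholder application id
+URL = f"{RM}/ws/v1/cluster/apps/{APP_ID}/state"
+
+# Repeat the PUT until the kill has taken effect: 202 means accepted but
+# still in progress, 200 means the app has reached the requested state.
+while True:
+    resp = requests.put(URL, json={"state": "KILLED"})
+    resp.raise_for_status()  # surfaces 401/403/400 errors immediately
+    if resp.status_code == 200:
+        print("final state:", resp.json()["state"])
+        break
+    time.sleep(1)
+```
+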
+### Response Examples
+
+**JSON responses**
+
+HTTP Request
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/state
+
+Response Header:
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+    Transfer-Encoding: chunked
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    {
+      "state":"ACCEPTED"
+    }
+
+HTTP Request
+
+      PUT http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/state
+
+Request Body:
+
+    {
+      "state":"KILLED"
+    }
+
+Response Header:
+
+    HTTP/1.1 202 Accepted
+    Content-Type: application/json
+    Transfer-Encoding: chunked
+    Location: http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    {
+      "state":"ACCEPTED"
+    }
+
+      PUT http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/state
+
+Request Body:
+
+    {
+      "state":"KILLED"
+    }
+
+Response Header:
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+    Transfer-Encoding: chunked
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    {
+      "state":"KILLED"
+    }
+
+**XML responses**
+
+HTTP Request
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/state
+
+Response Header:
+
+    HTTP/1.1 200 OK
+    Content-Type: application/xml
+    Content-Length: 99
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <appstate>
+      <state>ACCEPTED</state>
+    </appstate>
+
+HTTP Request
+
+      PUT http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/state
+
+Request Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <appstate>
+      <state>KILLED</state>
+    </appstate>
+
+Response Header:
+
+    HTTP/1.1 202 Accepted
+    Content-Type: application/xml
+    Content-Length: 794
+    Location: http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <appstate>
+      <state>ACCEPTED</state>
+    </appstate>
+
+HTTP Request
+
+      PUT http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/state
+
+Request Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <appstate>
+      <state>KILLED</state>
+    </appstate>
+
+Response Header:
+
+    HTTP/1.1 200 OK
+    Content-Type: application/xml
+    Content-Length: 917
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <appstate>
+      <state>KILLED</state>
+    </appstate>
+
+**Unauthorized Error Response**
+
+HTTP Request
+
+      PUT http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/state
+
+Request Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <appstate>
+      <state>KILLED</state>
+    </appstate>
+
+Response Header:
+
+    HTTP/1.1 403 Unauthorized
+    Server: Jetty(6.1.26)
+
+**Bad Request Error Response**
+
+HTTP Request
+
+      PUT http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/state
+
+Request Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <appstate>
+      <state>RUNNING</state>
+    </appstate>
+
+Response Header:
+
+    HTTP/1.1 400
+    Content-Length: 295
+    Content-Type: application/xml
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <RemoteException>
+      <exception>BadRequestException</exception>
+      <message>java.lang.Exception: Only 'KILLED' is allowed as a target state.</message>
+      <javaClassName>org.apache.hadoop.yarn.webapp.BadRequestException</javaClassName>
+    </RemoteException>
+
+Cluster Application Queue API
+-----------------------------
+
+With the application queue API, you can query the queue of a submitted app, and you can move a running app to another queue using a PUT request specifying the target queue. To perform the PUT operation, authentication has to be set up for the RM web services. In addition, you must be authorized to move the app. Currently you can only move the app if you're using the Capacity scheduler or the Fair scheduler.
+
+Please note that in order to move an app, you must have an authentication filter set up for the HTTP interface. The functionality requires that a username be set in the HttpServletRequest. If no filter is set up, the response will be an "UNAUTHORIZED" response.
+
+This feature is currently in the alpha stage and may change in the future.
+
+### URI
+
+      * http://<rm http address:port>/ws/v1/cluster/apps/{appid}/queue
+
+### HTTP Operations Supported
+
+      * GET
+      * PUT
+
+### Query Parameters Supported
+
+      None
+
+### Elements of *appqueue* object
+
+When you make a request for the queue of an app, the information returned has the following fields
+
+| Item | Data Type | Description |
+|:---- |:---- |:---- |
+| queue | string | The application queue |
+
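+A minimal sketch of reading the queue and then moving the app (placeholder ResourceManager address, application id, and target queue name; third-party Python `requests` library assumed, and the authentication filter mentioned above is required in practice):
+
+```python
+import requests
+
+RM = "http://rm.example.com:8088"          # placeholder RM address
+APP_ID = "application_1399397633663_0003"  # placeholder application id
+URL = f"{RM}/ws/v1/cluster/apps/{APP_ID}/queue"
+
+# Read the current queue, then ask the scheduler to move the app.
+print("current queue:", requests.get(URL).json()["queue"])
+resp = requests.put(URL, json={"queue": "test"})  # "test" is a placeholder
+resp.raise_for_status()
+print("now in queue:", resp.json()["queue"])
+```
+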
+### Response Examples
+
+**JSON responses**
+
+HTTP Request
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/queue
+
+Response Header:
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+    Transfer-Encoding: chunked
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    {
+      "queue":"default"
+    }
+
+HTTP Request
+
+      PUT http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/queue
+
+Request Body:
+
+    {
+      "queue":"test"
+    }
+
+Response Header:
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+    Transfer-Encoding: chunked
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    {
+      "queue":"test"
+    }
+
+**XML responses**
+
+HTTP Request
+
+      GET http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/queue
+
+Response Header:
+
+    HTTP/1.1 200 OK
+    Content-Type: application/xml
+    Content-Length: 98
+    Server: Jetty(6.1.26)
+
+Response Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <appqueue>
+      <queue>default</queue>
+    </appqueue>
+
+HTTP Request
+
+      PUT http://<rm http address:port>/ws/v1/cluster/apps/application_1399397633663_0003/queue
+
+Request Body:
+
+    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+    <appqueue>
+      <queue>test</queue>
+ 

<TRUNCATED>

[12/43] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

Posted by zj...@apache.org.
YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/2e44b75f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/2e44b75f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/2e44b75f

Branch: refs/heads/YARN-2928
Commit: 2e44b75f729009d33e309d1366bf86746443db81
Parents: edceced
Author: Allen Wittenauer <aw...@apache.org>
Authored: Fri Feb 27 20:39:44 2015 -0800
Committer: Allen Wittenauer <aw...@apache.org>
Committed: Fri Feb 27 20:39:44 2015 -0800

----------------------------------------------------------------------
 hadoop-yarn-project/CHANGES.txt                 |    3 +
 .../src/site/apt/CapacityScheduler.apt.vm       |  368 ---
 .../src/site/apt/DockerContainerExecutor.apt.vm |  204 --
 .../src/site/apt/FairScheduler.apt.vm           |  483 ---
 .../src/site/apt/NodeManager.apt.vm             |   64 -
 .../src/site/apt/NodeManagerCgroups.apt.vm      |   77 -
 .../src/site/apt/NodeManagerRest.apt.vm         |  645 ----
 .../src/site/apt/NodeManagerRestart.apt.vm      |   86 -
 .../src/site/apt/ResourceManagerHA.apt.vm       |  233 --
 .../src/site/apt/ResourceManagerRest.apt.vm     | 3104 ------------------
 .../src/site/apt/ResourceManagerRestart.apt.vm  |  298 --
 .../src/site/apt/SecureContainer.apt.vm         |  176 -
 .../src/site/apt/TimelineServer.apt.vm          |  260 --
 .../src/site/apt/WebApplicationProxy.apt.vm     |   49 -
 .../src/site/apt/WebServicesIntro.apt.vm        |  593 ----
 .../src/site/apt/WritingYarnApplications.apt.vm |  757 -----
 .../hadoop-yarn-site/src/site/apt/YARN.apt.vm   |   77 -
 .../src/site/apt/YarnCommands.apt.vm            |  369 ---
 .../hadoop-yarn-site/src/site/apt/index.apt.vm  |   82 -
 .../src/site/markdown/CapacityScheduler.md      |  186 ++
 .../site/markdown/DockerContainerExecutor.md.vm |  154 +
 .../src/site/markdown/FairScheduler.md          |  233 ++
 .../src/site/markdown/NodeManager.md            |   57 +
 .../src/site/markdown/NodeManagerCgroups.md     |   57 +
 .../src/site/markdown/NodeManagerRest.md        |  543 +++
 .../src/site/markdown/NodeManagerRestart.md     |   53 +
 .../src/site/markdown/ResourceManagerHA.md      |  140 +
 .../src/site/markdown/ResourceManagerRest.md    | 2640 +++++++++++++++
 .../src/site/markdown/ResourceManagerRestart.md |  181 +
 .../src/site/markdown/SecureContainer.md        |  135 +
 .../src/site/markdown/TimelineServer.md         |  231 ++
 .../src/site/markdown/WebApplicationProxy.md    |   24 +
 .../src/site/markdown/WebServicesIntro.md       |  569 ++++
 .../site/markdown/WritingYarnApplications.md    |  591 ++++
 .../hadoop-yarn-site/src/site/markdown/YARN.md  |   42 +
 .../src/site/markdown/YarnCommands.md           |  272 ++
 .../hadoop-yarn-site/src/site/markdown/index.md |   75 +
 37 files changed, 6186 insertions(+), 7925 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index e7af84b..02b1831 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -20,6 +20,9 @@ Trunk - Unreleased
     YARN-2980. Move health check script related functionality to hadoop-common
     (Varun Saxena via aw)
 
+    YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty
+    via aw)
+
   OPTIMIZATIONS
 
   BUG FIXES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
deleted file mode 100644
index 8528c1a..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
+++ /dev/null
@@ -1,368 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Map Reduce Next Generation-${project.version} - Capacity Scheduler
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop MapReduce Next Generation - Capacity Scheduler
-
-%{toc|section=1|fromDepth=0}
-
-* {Purpose} 
-
-  This document describes the <<<CapacityScheduler>>>, a pluggable scheduler 
-  for Hadoop which allows for multiple-tenants to securely share a large cluster 
-  such that their applications are allocated resources in a timely manner under 
-  constraints of allocated capacities.
-
-* {Overview}
-
-  The <<<CapacityScheduler>>> is designed to run Hadoop applications as a 
-  shared, multi-tenant cluster in an operator-friendly manner while maximizing 
-  the throughput and the utilization of the cluster.
-   
-  Traditionally each organization has it own private set of compute resources 
-  that have sufficient capacity to meet the organization's SLA under peak or 
-  near peak conditions. This generally leads to poor average utilization and 
-  overhead of managing multiple independent clusters, one per each organization. 
-  Sharing clusters between organizations is a cost-effective manner of running 
-  large Hadoop installations since this allows them to reap benefits of
-  economies of scale without creating private clusters. However, organizations 
-  are concerned about sharing a cluster because they are worried about others 
-  using the resources that are critical for their SLAs. 
-   
-  The <<<CapacityScheduler>>> is designed to allow sharing a large cluster while 
-  giving each organization capacity guarantees. The central idea is 
-  that the available resources in the Hadoop cluster are shared among multiple 
-  organizations who collectively fund the cluster based on their computing 
-  needs. There is an added benefit that an organization can access 
-  any excess capacity not being used by others. This provides elasticity for 
-  the organizations in a cost-effective manner.
-   
-  Sharing clusters across organizations necessitates strong support for
-  multi-tenancy since each organization must be guaranteed capacity and 
-  safe-guards to ensure the shared cluster is impervious to single rouge 
-  application or user or sets thereof. The <<<CapacityScheduler>>> provides a 
-  stringent set of limits to ensure that a single application or user or queue 
-  cannot consume disproportionate amount of resources in the cluster. Also, the 
-  <<<CapacityScheduler>>> provides limits on initialized/pending applications 
-  from a single user and queue to ensure fairness and stability of the cluster.
-   
-  The primary abstraction provided by the <<<CapacityScheduler>>> is the concept 
-  of <queues>. These queues are typically setup by administrators to reflect the 
-  economics of the shared cluster. 
-  
-  To provide further control and predictability on sharing of resources, the 
-  <<<CapacityScheduler>>> supports <hierarchical queues> to ensure 
-  resources are shared among the sub-queues of an organization before other 
-  queues are allowed to use free resources, there-by providing <affinity> 
-  for sharing free resources among applications of a given organization.
-   
-* {Features}
-
-  The <<<CapacityScheduler>>> supports the following features:
-  
-  * Hierarchical Queues - Hierarchy of queues is supported to ensure resources 
-    are shared among the sub-queues of an organization before other 
-    queues are allowed to use free resources, there-by providing more control
-    and predictability.
-    
-  * Capacity Guarantees - Queues are allocated a fraction of the capacity of the 
-    grid in the sense that a certain capacity of resources will be at their 
-    disposal. All applications submitted to a queue will have access to the 
-    capacity allocated to the queue. Adminstrators can configure soft limits and 
-    optional hard limits on the capacity allocated to each queue.
-    
-  * Security - Each queue has strict ACLs which controls which users can submit 
-    applications to individual queues. Also, there are safe-guards to ensure 
-    that users cannot view and/or modify applications from other users.
-    Also, per-queue and system administrator roles are supported.
-    
-  * Elasticity - Free resources can be allocated to any queue beyond its
-    capacity. When there is demand for these resources from queues running below
-    capacity at a future point in time, as tasks scheduled on these resources
-    complete, they will be assigned to applications on queues running below the
-    capacity (pre-emption is not supported). This ensures that resources are available
-    in a predictable and elastic manner to queues, thus preventing artificial silos
-    of resources in the cluster, which helps utilization.
-    
-  * Multi-tenancy - A comprehensive set of limits is provided to prevent a
-    single application, user, or queue from monopolizing resources of the queue
-    or the cluster as a whole, ensuring that the cluster isn't overwhelmed.
-    
-  * Operability
-  
-    * Runtime Configuration - The queue definitions and properties such as 
-      capacity, ACLs can be changed, at runtime, by administrators in a secure 
-      manner to minimize disruption to users. Also, a console is provided for 
-      users and administrators to view current allocation of resources to 
-      various queues in the system. Administrators can <add additional queues> 
-      at runtime, but queues cannot be <deleted> at runtime.
-      
-    * Drain applications - Administrators can <stop> queues
-      at runtime to ensure that while existing applications run to completion,
-      no new applications can be submitted. If a queue is in <<<STOPPED>>>
-      state, new applications cannot be submitted to <itself> or
-      <any of its child queues>. Existing applications continue to completion,
-      thus the queue can be <drained> gracefully. Administrators can also
-      <start> the stopped queues.
-    
-  * Resource-based Scheduling - Support for resource-intensive applications,
-    wherein an application can optionally specify higher resource requirements
-    than the default, thereby accommodating applications with differing resource
-    requirements. Currently, <memory> is the resource requirement supported.
-  
-  []
-  
-* {Configuration}
-
-  * Setting up <<<ResourceManager>>> to use <<<CapacityScheduler>>>
-  
-    To configure the <<<ResourceManager>>> to use the <<<CapacityScheduler>>>, set
-    the following property in the <<conf/yarn-site.xml>>:
-  
-*--------------------------------------+--------------------------------------+
-|| Property                            || Value                                |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.scheduler.class>>> | |
-| | <<<org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler>>> |
-*--------------------------------------+--------------------------------------+
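-
-    For example, the corresponding entry in <<conf/yarn-site.xml>> would look
-    like the following sketch:
-
-----
-<property>
-  <name>yarn.resourcemanager.scheduler.class</name>
-  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
-</property>
-----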
-
-  * Setting up <queues>
-   
-    <<conf/capacity-scheduler.xml>> is the configuration file for the
-    <<<CapacityScheduler>>>.  
-  
-    The <<<CapacityScheduler>>> has a pre-defined queue called <root>. All
-    queues in the system are children of the root queue.
-
-    Further queues can be set up by configuring
-    <<<yarn.scheduler.capacity.root.queues>>> with a list of comma-separated
-    child queues.
-    
-    The configuration for <<<CapacityScheduler>>> uses a concept called
-    <queue path> to configure the hierarchy of queues. The <queue path> is the
-    full path of the queue's hierarchy, starting at <root>, with . (dot) as the 
-    delimiter.
-    
-    A given queue's children can be defined with the configuration knob:
-    <<<yarn.scheduler.capacity.<queue-path>.queues>>>. Children do not 
-    inherit properties directly from the parent unless otherwise noted.
-
-    Here is an example with three top-level child-queues <<<a>>>, <<<b>>> and 
-    <<<c>>> and some sub-queues for <<<a>>> and <<<b>>>:
-     
-----    
-<property>
-  <name>yarn.scheduler.capacity.root.queues</name>
-  <value>a,b,c</value>
-  <description>The queues at this level (root is the root queue).
-  </description>
-</property>
-
-<property>
-  <name>yarn.scheduler.capacity.root.a.queues</name>
-  <value>a1,a2</value>
-  <description>The queues at this level (root is the root queue).
-  </description>
-</property>
-
-<property>
-  <name>yarn.scheduler.capacity.root.b.queues</name>
-  <value>b1,b2,b3</value>
-  <description>The queues at this level (root is the root queue).
-  </description>
-</property>
-----    
-
-  * Queue Properties
-  
-    * Resource Allocation
-  
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                         |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.<queue-path>.capacity>>> | |
-| | Queue <capacity> in percentage (%) as a float (e.g. 12.5).| 
-| | The sum of capacities for all queues, at each level, must be equal |
-| | to 100. | 
-| | Applications in the queue may consume more resources than the queue's | 
-| | capacity if there are free resources, providing elasticity. |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.<queue-path>.maximum-capacity>>> |   | 
-| | Maximum queue capacity in percentage (%) as a float. |
-| | This limits the <elasticity> for applications in the queue. |
-| | Defaults to -1 which disables it. |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.<queue-path>.minimum-user-limit-percent>>> |   | 
-| | Each queue enforces a limit on the percentage of resources allocated to a | 
-| | user at any given time, if there is demand for resources. The user limit | 
-| | can vary between a minimum and maximum value. The former |
-| | (the minimum value) is set to this property value and the latter |
-| | (the maximum value) depends on the number of users who have submitted |
-| | applications. For example, suppose the value of this property is 25. | 
-| | If two users have submitted applications to a queue, no single user can |
-| | use more than 50% of the queue resources. If a third user submits an | 
-| | application, no single user can use more than 33% of the queue resources. |
-| | With 4 or more users, no user can use more than 25% of the queue's |
-| | resources. A value of 100 implies no user limits are imposed. The default |
-| | is 100. Value is specified as an integer.|
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.<queue-path>.user-limit-factor>>> |   | 
-| | The multiple of the queue capacity which can be configured to allow a | 
-| | single user to acquire more resources. By default this is set to 1, which | 
-| | ensures that a single user can never take more than the queue's configured | 
-| | capacity irrespective of how idle the cluster is. Value is specified as |
-| | a float.|
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb>>> |   |
-| | The per queue maximum limit of memory to allocate to each container |
-| | request at the Resource Manager. This setting overrides the cluster |
-| | configuration <<<yarn.scheduler.maximum-allocation-mb>>>. This value |
-| | must be smaller than or equal to the cluster maximum. |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.<queue-path>.maximum-allocation-vcores>>> |   |
-| | The per queue maximum limit of virtual cores to allocate to each container |
-| | request at the Resource Manager. This setting overrides the cluster |
-| | configuration <<<yarn.scheduler.maximum-allocation-vcores>>>. This value |
-| | must be smaller than or equal to the cluster maximum. |
-*--------------------------------------+--------------------------------------+
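-
-    For illustration, the top-level queues from the earlier example could be
-    given capacities as follows (the percentages are arbitrary, but must sum
-    to 100 at each level; sub-queues such as <<<a1>>> and <<<a2>>> would
-    likewise need capacities summing to 100 under their parent):
-
-----
-<property>
-  <name>yarn.scheduler.capacity.root.a.capacity</name>
-  <value>40</value>
-</property>
-
-<property>
-  <name>yarn.scheduler.capacity.root.b.capacity</name>
-  <value>40</value>
-</property>
-
-<property>
-  <name>yarn.scheduler.capacity.root.c.capacity</name>
-  <value>20</value>
-</property>
-----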
-
-    * Running and Pending Application Limits
-    
-    
-    The <<<CapacityScheduler>>> supports the following parameters to control 
-    the running and pending applications:
-    
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                         |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.maximum-applications>>> /  |
-| <<<yarn.scheduler.capacity.<queue-path>.maximum-applications>>>  | |
-| | Maximum number of applications in the system which can be concurrently |
-| | active both running and pending. Limits on each queue are directly |
-| | proportional to their queue capacities and user limits. This is a |
-| | hard limit and any applications submitted when this limit is reached will |
-| | be rejected. Default is 10000. This can be set for all queues with |
-| | <<<yarn.scheduler.capacity.maximum-applications>>> and can also be overridden on a  |
-| | per queue basis by setting <<<yarn.scheduler.capacity.<queue-path>.maximum-applications>>>. |
-| | Integer value expected.|
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.maximum-am-resource-percent>>> / |
-| <<<yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent>>> | |
-| | Maximum percent of resources in the cluster which can be used to run |
-| | application masters - controls number of concurrent active applications. Limits on each |
-| | queue are directly proportional to their queue capacities and user limits. |
-| | Specified as a float - i.e. 0.5 = 50%. Default is 10%. This can be set for all queues with |
-| | <<<yarn.scheduler.capacity.maximum-am-resource-percent>>> and can also be overridden on a  |
-| | per queue basis by setting <<<yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent>>> |
-*--------------------------------------+--------------------------------------+
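-
-    For example, a sketch capping concurrent applications and the AM share
-    for queue <<<a>>> from the earlier example (the values are illustrative):
-
-----
-<property>
-  <name>yarn.scheduler.capacity.root.a.maximum-applications</name>
-  <value>1000</value>
-</property>
-
-<property>
-  <name>yarn.scheduler.capacity.root.a.maximum-am-resource-percent</name>
-  <value>0.2</value>
-</property>
-----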
-
-    * Queue Administration & Permissions
-    
-    The <<<CapacityScheduler>>> supports the following parameters to  
-    administer the queues:
-    
-    
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                         |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.<queue-path>.state>>> | |
-| | The <state> of the queue. Can be one of <<<RUNNING>>> or <<<STOPPED>>>. |
-| | If a queue is in <<<STOPPED>>> state, new applications cannot be |
-| | submitted to <itself> or <any of its child queues>. | 
-| | Thus, if the <root> queue is <<<STOPPED>>> no applications can be | 
-| | submitted to the entire cluster. |
-| | Existing applications continue to completion, thus the queue can be |
-| | <drained> gracefully. Value is specified as Enumeration. |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.root.<queue-path>.acl_submit_applications>>> | |
-| | The <ACL> which controls who can <submit> applications to the given queue. |
-| | If the given user/group has necessary ACLs on the given queue or |
-| | <one of the parent queues in the hierarchy> they can submit applications. |
-| | <ACLs> for this property <are> inherited from the parent queue |
-| | if not specified. |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.root.<queue-path>.acl_administer_queue>>> | |
-| | The <ACL> which controls who can <administer> applications on the given queue. |
-| | If the given user/group has necessary ACLs on the given queue or |
-| | <one of the parent queues in the hierarchy> they can administer applications. |
-| | <ACLs> for this property <are> inherited from the parent queue |
-| | if not specified. |
-*--------------------------------------+--------------------------------------+
-    
-    <Note:> An <ACL> is of the form <user1>, <user2><space><group1>, <group2>.
-    The special value of <<*>> implies <anyone>. The special value of <space>
-    implies <no one>. The default is <<*>> for the root queue if not specified.
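-
-    For example, a sketch restricting submission on queue <<<a>>> to user
-    <<<alice>>> and group <<<dev>>> (both names are placeholders):
-
-----
-<property>
-  <name>yarn.scheduler.capacity.root.a.acl_submit_applications</name>
-  <value>alice dev</value>
-</property>
-----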
-
-  * Other Properties
-
-    * Resource Calculator
-
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                         |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.resource-calculator>>> | |
-| | The ResourceCalculator implementation to be used to compare Resources in the |
-| | scheduler. The default, org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, |
-| | only uses Memory while DominantResourceCalculator uses Dominant-resource |
-| | to compare multi-dimensional resources such as Memory, CPU etc. A Java |
-| | ResourceCalculator class name is expected. |
-*--------------------------------------+--------------------------------------+
-
-
-    * Data Locality
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                         |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.scheduler.capacity.node-locality-delay>>> | |
-| | Number of missed scheduling opportunities after which the CapacityScheduler |
-| | attempts to schedule rack-local containers. Typically, this should be set to |
-| | the number of nodes in the cluster. The default approximates the number of |
-| | nodes in one rack, which is 40. A positive integer value is expected.|
-*--------------------------------------+--------------------------------------+
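-
-    For example, on a cluster of roughly 100 nodes one might set (the value
-    is illustrative):
-
-----
-<property>
-  <name>yarn.scheduler.capacity.node-locality-delay</name>
-  <value>100</value>
-</property>
-----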
-
-
-  * Reviewing the configuration of the CapacityScheduler
-
-      Once the installation and configuration is complete, you can review it
-      from the web UI after starting the YARN cluster.
-
-    * Start the YARN cluster in the normal manner.
-
-    * Open the <<<ResourceManager>>> web UI.
-
-    * The </scheduler> web-page should show the resource usages of individual 
-      queues.
-      
-      []
-      
-* {Changing Queue Configuration}
-
-  Changing queue properties and adding new queues is very simple. You need to
-  edit <<conf/capacity-scheduler.xml>> and run <yarn rmadmin -refreshQueues>.
-  
-----
-$ vi $HADOOP_CONF_DIR/capacity-scheduler.xml
-$ $HADOOP_YARN_HOME/bin/yarn rmadmin -refreshQueues
-----  
-
-  <Note:> Queues cannot be <deleted>; only the addition of new queues is
-  supported. The updated queue configuration must be valid, i.e. the queue
-  capacities at each <level> must sum to 100%.
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/DockerContainerExecutor.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/DockerContainerExecutor.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/DockerContainerExecutor.apt.vm
deleted file mode 100644
index db75de9..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/DockerContainerExecutor.apt.vm
+++ /dev/null
@@ -1,204 +0,0 @@
-
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Map Reduce Next Generation-${project.version} - Docker Container Executor
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Docker Container Executor
-
-%{toc|section=1|fromDepth=0}
-
-* {Overview}
-
-    Docker (https://www.docker.io/) combines an easy-to-use interface to
-Linux containers with easy-to-construct image files for those
-containers.  In short, Docker launches very lightweight virtual
-machines.
-
-    The Docker Container Executor (DCE) allows the YARN NodeManager to
-launch YARN containers into Docker containers.  Users can specify the
-Docker images they want for their YARN containers.  These containers
-provide a custom software environment in which the user's code runs,
-isolated from the software environment of the NodeManager.  These
-containers can include special libraries needed by the application,
-and they can have different versions of Perl, Python, and even Java
-than what is installed on the NodeManager.  Indeed, these containers
-can run a different flavor of Linux than what is running on the
-NodeManager -- although the YARN container must define all the environments
- and libraries needed to run the job, nothing will be shared with the NodeManager.
-
-   Docker for YARN provides both consistency (all YARN containers will
-have the same software environment) and isolation (no interference
-with whatever is installed on the physical machine).
-  
-* {Cluster Configuration}
-
-    Docker Container Executor runs in non-secure mode of HDFS and
-YARN. It will not run in secure mode, and will exit if it detects
-secure mode.
-
-    The DockerContainerExecutor requires the Docker daemon to be running on
-the NodeManagers, and the Docker client installed and able to start Docker
-containers.  To prevent timeouts while starting jobs, the Docker
-images to be used by a job should already be downloaded on the
-NodeManagers. Here's an example of how this can be done:
-
-----
-sudo docker pull sequenceiq/hadoop-docker:2.4.1
-----
-
-   This should be done as part of the NodeManager startup.
-
-   The following properties must be set in yarn-site.xml:
-
-----
-<property>
- <name>yarn.nodemanager.docker-container-executor.exec-name</name>
-  <value>/usr/bin/docker</value>
-  <description>
-     Name or path to the Docker client. This is a required parameter. If this is empty,
-     the user must pass an image name as part of the job invocation (see below).
-  </description>
-</property>
-
-<property>
-  <name>yarn.nodemanager.container-executor.class</name>
-  <value>org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor</value>
-  <description>
-     This is the container executor setting that ensures that all
-jobs are started with the DockerContainerExecutor.
-  </description>
-</property>
-----
-
-   Administrators should be aware that DCE doesn't currently provide
-user name-space isolation.  This means, in particular, that software
-running as root in the YARN container will have root privileges in the
-underlying NodeManager.  Put differently, DCE currently provides no
-better security guarantees than YARN's Default Container Executor. In
-fact, DockerContainerExecutor will exit if it detects secure YARN.
-
-* {Tips for connecting to a secure docker repository}
-
-   By default, Docker images are pulled from the Docker public repository. The
-format of a Docker image URL is: <username>/<image_name>. For example,
-sequenceiq/hadoop-docker:2.4.1 is an image in the Docker public repository that contains Java and
-Hadoop.
-
-   If you want your own private repository, you provide the repository URL instead of
-your username. Therefore, the image URL becomes: <private_repo_url>/<image_name>.
-For example, if your repository is on localhost:8080, your images would look like:
- localhost:8080/hadoop-docker
-
-   To connect to a secure docker repository, you can use the following invocation:
-
-----
-docker login [OPTIONS] [SERVER]
-
-Register or log in to a Docker registry server, if no server is specified
-"https://index.docker.io/v1/" is the default.
-
--e, --email=""       Email
--p, --password=""    Password
--u, --username=""    Username
-----
-
-   If you want to log in to a self-hosted registry, you can specify this by adding
-the server name.
-
-----
-docker login <private_repo_url>
-----
-
-   This needs to be run as part of the NodeManager startup, or as a cron job if
-the login session expires periodically. You can log in to multiple Docker repositories
-from the same NodeManager, but all your users will have access to all your repositories,
-as at present the DockerContainerExecutor does not support per-job Docker login.
-
-* {Job Configuration}
-
-   Currently you cannot configure any of the Docker settings with the job configuration.
-You can provide Mapper, Reducer, and ApplicationMaster environment overrides for the
-Docker images, using the following 3 JVM properties respectively (only for MR jobs);
-a sketch follows the list:
-
-  * mapreduce.map.env: You can override the mapper's image by passing
-    yarn.nodemanager.docker-container-executor.image-name=<your_image_name>
-    to this JVM property.
-
-  * mapreduce.reduce.env: You can override the reducer's image by passing
-    yarn.nodemanager.docker-container-executor.image-name=<your_image_name>
-    to this JVM property.
-
-  * yarn.app.mapreduce.am.env: You can override the ApplicationMaster's image
-    by passing yarn.nodemanager.docker-container-executor.image-name=<your_image_name>
-    to this JVM property.
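-
-   For example, an illustrative sketch of a job with a reduce phase that
-overrides both the mapper's and the reducer's image (the input and output
-paths are placeholders; the image is the same example image used below):
-
-----
-hadoop jar $HADOOP_INSTALLATION_DIR/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
-wordcount \
--Dmapreduce.map.env="yarn.nodemanager.docker-container-executor.image-name=sequenceiq/hadoop-docker:2.4.1" \
--Dmapreduce.reduce.env="yarn.nodemanager.docker-container-executor.image-name=sequenceiq/hadoop-docker:2.4.1" \
-input_dir \
-output_dir
-----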
-
-* {Docker Image requirements}
-
-   The Docker Images used for YARN containers must meet the following
-requirements:
-
-   The distro and version of Linux in your Docker Image can be quite different 
-from that of your NodeManager.  (Docker does have a few limitations in this 
-regard, but you're not likely to hit them.)  However, if you're using the 
-MapReduce framework, then your image will need to be configured for running 
-Hadoop. Java must be installed in the container, and the following environment variables
-must be defined in the image: JAVA_HOME, HADOOP_COMMON_PATH, HADOOP_HDFS_HOME,
-HADOOP_MAPRED_HOME, HADOOP_YARN_HOME, and HADOOP_CONF_DIR.
-
-
-* {Working example of YARN-launched Docker containers}
-
-  The following example shows how to run teragen using DockerContainerExecutor.
-
-  * First ensure that YARN is properly configured with DockerContainerExecutor (see above).
-
-----
-<property>
- <name>yarn.nodemanager.docker-container-executor.exec-name</name>
-  <value>docker -H=tcp://0.0.0.0:4243</value>
-  <description>
-     Name or path to the Docker client. The tcp socket must be
-     where the Docker daemon is listening.
-  </description>
-</property>
-
-<property>
-  <name>yarn.nodemanager.container-executor.class</name>
-  <value>org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor</value>
-  <description>
-     This is the container executor setting that ensures that all
-jobs are started with the DockerContainerExecutor.
-  </description>
-</property>
-----
-
-  * Pick a custom Docker image if you want. In this example, we'll use sequenceiq/hadoop-docker:2.4.1 from the
-Docker Hub repository. It has the JDK, Hadoop, and all the previously mentioned environment variables configured.
-
-  * Run:
-
-----
-hadoop jar $HADOOP_INSTALLATION_DIR/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
-teragen \
--Dmapreduce.map.env="yarn.nodemanager.docker-container-executor.image-name=sequenceiq/hadoop-docker:2.4.1" \
--Dyarn.app.mapreduce.am.env="yarn.nodemanager.docker-container-executor.image-name=sequenceiq/hadoop-docker:2.4.1" \
-1000 \
-teragen_out_dir
-----
-
-  Once it succeeds, you can check the YARN debug logs to verify that Docker has indeed launched containers.
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm
deleted file mode 100644
index 10de3e0..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/FairScheduler.apt.vm
+++ /dev/null
@@ -1,483 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Map Reduce Next Generation-${project.version} - Fair Scheduler
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop MapReduce Next Generation - Fair Scheduler
-
-%{toc|section=1|fromDepth=0}
-
-* {Purpose} 
-
-  This document describes the <<<FairScheduler>>>, a pluggable scheduler for Hadoop 
-  that allows YARN applications to share resources in large clusters fairly.
-
-* {Introduction}
-
-  Fair scheduling is a method of assigning resources to applications such that 
-  all apps get, on average, an equal share of resources over time.
-  Hadoop NextGen is capable of scheduling multiple resource types. By default,
-  the Fair Scheduler bases scheduling fairness decisions only on memory. It
-  can be configured to schedule with both memory and CPU, using the notion
-  of Dominant Resource Fairness developed by Ghodsi et al. When there is a
-  single app running, that app uses the entire cluster. When other apps are
-  submitted, resources that free up are assigned to the new apps, so that each
-  app eventually gets roughly the same amount of resources. Unlike the default
-  Hadoop scheduler, which forms a queue of apps, this lets short apps finish in
-  reasonable time while not starving long-lived apps. It is also a reasonable way
-  to share a cluster between a number of users. Finally, fair sharing can also
-  work with app priorities - the priorities are used as weights to determine the 
-  fraction of total resources that each app should get.
-
-  The scheduler organizes apps further into "queues", and shares resources
-  fairly between these queues. By default, all users share a single queue,
-  named "default". If an app specifically lists a queue in a container resource
-  request, the request is submitted to that queue. It is also possible to assign
-  queues based on the user name included with the request through
-  configuration. Within each queue, a scheduling policy is used to share
-  resources between the running apps. The default is memory-based fair sharing,
-  but FIFO and multi-resource with Dominant Resource Fairness can also be
-  configured. Queues can be arranged in a hierarchy to divide resources and
-  configured with weights to share the cluster in specific proportions.
-
-  In addition to providing fair sharing, the Fair Scheduler allows assigning 
-  guaranteed minimum shares to queues, which is useful for ensuring that 
-  certain users, groups or production applications always get sufficient 
-  resources. When a queue contains apps, it gets at least its minimum share, 
-  but when the queue does not need its full guaranteed share, the excess is 
-  split between other running apps. This lets the scheduler guarantee capacity 
-  for queues while utilizing resources efficiently when these queues don't
-  contain applications.
-
-  The Fair Scheduler lets all apps run by default, but it is also possible to 
-  limit the number of running apps per user and per queue through the config 
-  file. This can be useful when a user must submit hundreds of apps at once, 
-  or in general to improve performance if running too many apps at once would 
-  cause too much intermediate data to be created or too much context-switching.
-  Limiting the apps does not cause any subsequently submitted apps to fail, 
-  only to wait in the scheduler's queue until some of the user's earlier apps 
-  finish. 
-
-* {Hierarchical queues with pluggable policies}
-
-  The fair scheduler supports hierarchical queues. All queues descend from a
-  queue named "root". Available resources are distributed among the children
-  of the root queue in the typical fair scheduling fashion. Then, the children
-  distribute the resources assigned to them to their children in the same
-  fashion.  Applications may only be scheduled on leaf queues. Queues can be
-  specified as children of other queues by placing them as sub-elements of 
-  their parents in the fair scheduler allocation file.
-  
-  A queue's name starts with the names of its parents, with periods as
-  separators. So a queue named "queue1" under the root queue would be referred
-  to as "root.queue1", and a queue named "queue2" under a queue named "parent1"
-  would be referred to as "root.parent1.queue2". When referring to queues, the
-  root part of the name is optional, so queue1 could be referred to as just
-  "queue1", and queue2 could be referred to as just "parent1.queue2".
-
-  Additionally, the fair scheduler allows setting a different custom policy for
-  each queue to allow sharing the queue's resources in any way the user
-  wants. A custom policy can be built by extending
-  <<<org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.SchedulingPolicy>>>.
-  FifoPolicy, FairSharePolicy (default), and DominantResourceFairnessPolicy are
-  built-in and can be readily used.
-
-  Certain add-ons which existed in the original (MR1) Fair Scheduler are not
-  yet supported. Among them is the use of custom policies governing
-  priority "boosting" of certain apps.
-
-* {Automatically placing applications in queues}
-
-  The Fair Scheduler allows administrators to configure policies that
-  automatically place submitted applications into appropriate queues. Placement
-  can depend on the user and groups of the submitter and the requested queue
-  passed by the application. A policy consists of a set of rules that are applied
-  sequentially to classify an incoming application. Each rule either places the
-  app into a queue, rejects it, or continues on to the next rule. Refer to the
-  allocation file format below for how to configure these policies.
-
-* {Installation}
-
-  To use the Fair Scheduler first assign the appropriate scheduler class in 
-  yarn-site.xml:
-
-------
-<property>
-  <name>yarn.resourcemanager.scheduler.class</name>
-  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
-</property>
-------
-
-* {Configuration}
-
-  Customizing the Fair Scheduler typically involves altering two files. First, 
-  scheduler-wide options can be set by adding configuration properties in the 
-  yarn-site.xml file in your existing configuration directory. Second, in 
-  most cases users will want to create an allocation file listing which queues 
-  exist and their respective weights and capacities. The allocation file
-  is reloaded every 10 seconds, allowing changes to be made on the fly.
-
-Properties that can be placed in yarn-site.xml
-
- * <<<yarn.scheduler.fair.allocation.file>>>
-
-   * Path to allocation file. An allocation file is an XML manifest describing
-     queues and their properties, in addition to certain policy defaults. This file
-     must be in the XML format described in the next section. If a relative path is
-     given, the file is searched for on the classpath (which typically includes
-     the Hadoop conf directory).
-     Defaults to fair-scheduler.xml.
-
- * <<<yarn.scheduler.fair.user-as-default-queue>>>
-
-    * Whether to use the username associated with the allocation as the default 
-      queue name, in the event that a queue name is not specified. If this is set 
-      to "false" or unset, all jobs have a shared default queue, named "default".
-      Defaults to true.  If a queue placement policy is given in the allocations
-      file, this property is ignored.
-
- * <<<yarn.scheduler.fair.preemption>>>
-
-    * Whether to use preemption. Defaults to false.
-
- * <<<yarn.scheduler.fair.preemption.cluster-utilization-threshold>>>
-
-    * The utilization threshold after which preemption kicks in. The
-      utilization is computed as the maximum ratio of usage to capacity among
-      all resources. Defaults to 0.8f.
-
- * <<<yarn.scheduler.fair.sizebasedweight>>>
-  
-    * Whether to assign shares to individual apps based on their size, rather than
-      providing an equal share to all apps regardless of size. When set to true,
-      apps are weighted by the natural logarithm of one plus the app's total
-      requested memory, divided by the natural logarithm of 2. Defaults to false.
-
- * <<<yarn.scheduler.fair.assignmultiple>>>
-
-    * Whether to allow multiple container assignments in one heartbeat. Defaults
-      to false.
-
- * <<<yarn.scheduler.fair.max.assign>>>
-
-    * If assignmultiple is true, the maximum number of containers that can be
-      assigned in one heartbeat. Defaults to -1, which sets no limit.
-
- * <<<yarn.scheduler.fair.locality.threshold.node>>>
-
-    * For applications that request containers on particular nodes, the number of
-      scheduling opportunities since the last container assignment to wait before
-      accepting a placement on another node. Expressed as a float between 0 and 1,
-      which, as a fraction of the cluster size, is the number of scheduling
-      opportunities to pass up. The default value of -1.0 means don't pass up any
-      scheduling opportunities.
-
- * <<<yarn.scheduler.fair.locality.threshold.rack>>>
-
-    * For applications that request containers on particular racks, the number of
-      scheduling opportunities since the last container assignment to wait before
-      accepting a placement on another rack. Expressed as a float between 0 and 1,
-      which, as a fraction of the cluster size, is the number of scheduling
-      opportunities to pass up. The default value of -1.0 means don't pass up any
-      scheduling opportunities.
-
- * <<<yarn.scheduler.fair.allow-undeclared-pools>>>
-
-    * If this is true, new queues can be created at application submission time,
-      whether because they are specified as the application's queue by the
-      submitter or because they are placed there by the user-as-default-queue
-      property. If this is false, any time an app would be placed in a queue that
-      is not specified in the allocations file, it is placed in the "default" queue
-      instead. Defaults to true. If a queue placement policy is given in the
-      allocations file, this property is ignored.
-
- * <<<yarn.scheduler.fair.update-interval-ms>>>
- 
-    * The interval at which to lock the scheduler and recalculate fair shares,
-      recalculate demand, and check whether anything is due for preemption.
-      Defaults to 500 ms. 
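-
-  For example, a sketch of a yarn-site.xml fragment combining a few of these
-  properties (the file path and values are illustrative, not recommendations):
-
-------
-<property>
-  <name>yarn.scheduler.fair.allocation.file</name>
-  <value>/etc/hadoop/fair-scheduler.xml</value>
-</property>
-<property>
-  <name>yarn.scheduler.fair.preemption</name>
-  <value>true</value>
-</property>
-<property>
-  <name>yarn.scheduler.fair.assignmultiple</name>
-  <value>true</value>
-</property>
-------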
-
-Allocation file format
-
-  The allocation file must be in XML format. The format contains five types of
-  elements:
-
- * <<Queue elements>>, which represent queues. Queue elements can take an optional
-   attribute 'type', which when set to 'parent' makes it a parent queue. This is useful
-   when we want to create a parent queue without configuring any leaf queues.
-   Each queue element may contain the following properties:
-
-   * minResources: minimum resources the queue is entitled to, in the form
-     "X mb, Y vcores". For the single-resource fairness policy, the vcores
-     value is ignored. If a queue's minimum share is not satisfied, it will be
-     offered available resources before any other queue under the same parent.
-     Under the single-resource fairness policy, a queue
-     is considered unsatisfied if its memory usage is below its minimum memory
-     share. Under dominant resource fairness, a queue is considered unsatisfied
-     if its usage for its dominant resource with respect to the cluster capacity
-     is below its minimum share for that resource. If multiple queues are
-     unsatisfied in this situation, resources go to the queue with the smallest
-     ratio between relevant resource usage and minimum. Note that it is
-     possible that a queue that is below its minimum may not immediately get up
-     to its minimum when it submits an application, because already-running jobs
-     may be using those resources.
-
-   * maxResources: maximum resources a queue is allowed, in the form
-     "X mb, Y vcores". For the single-resource fairness policy, the vcores
-     value is ignored. A queue will never be assigned a container that would
-     put its aggregate usage over this limit.
-
-   * maxRunningApps: limit the number of apps from the queue to run at once
-
-   * maxAMShare: limit the fraction of the queue's fair share that can be used
-     to run application masters. This property can only be used for leaf queues.
-     For example, if set to 1.0f, then AMs in the leaf queue can take up to 100%
-     of both the memory and CPU fair share. The value of -1.0f will disable
-     this feature and the amShare will not be checked. The default value is 0.5f.
-
-   * weight: to share the cluster non-proportionally with other queues. Weights
-     default to 1, and a queue with weight 2 should receive approximately twice
-     as many resources as a queue with the default weight.
-
-   * schedulingPolicy: to set the scheduling policy of any queue. The allowed
-     values are "fifo"/"fair"/"drf" or any class that extends
-     <<<org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.SchedulingPolicy>>>. 
-     Defaults to "fair". If "fifo", apps with earlier submit times are given preference
-     for containers, but apps submitted later may run concurrently if there is
-     leftover space on the cluster after satisfying the earlier app's requests.
-
-   * aclSubmitApps: a list of users and/or groups that can submit apps to the
-     queue. Refer to the ACLs section below for more info on the format of this
-     list and how queue ACLs work.
-
-   * aclAdministerApps: a list of users and/or groups that can administer a
-     queue.  Currently the only administrative action is killing an application.
-     Refer to the ACLs section below for more info on the format of this list
-     and how queue ACLs work.
-
-   * minSharePreemptionTimeout: number of seconds the queue is under its minimum share
-     before it will try to preempt containers to take resources from other queues.
-     If not set, the queue will inherit the value from its parent queue.
-
-   * fairSharePreemptionTimeout: number of seconds the queue is under its fair share
-     threshold before it will try to preempt containers to take resources from other
-     queues. If not set, the queue will inherit the value from its parent queue.
-
-   * fairSharePreemptionThreshold: the fair share preemption threshold for the
-     queue. If the queue waits fairSharePreemptionTimeout without receiving
-     fairSharePreemptionThreshold*fairShare resources, it is allowed to preempt
-     containers to take resources from other queues. If not set, the queue will
-     inherit the value from its parent queue.
-
- * <<User elements>>, which represent settings governing the behavior of individual 
-     users. They can contain a single property: maxRunningApps, a limit on the 
-     number of running apps for a particular user.
-
- * <<A userMaxAppsDefault element>>, which sets the default running app limit 
-   for any users whose limit is not otherwise specified.
-
- * <<A defaultFairSharePreemptionTimeout element>>, which sets the fair share
-   preemption timeout for the root queue; overridden by fairSharePreemptionTimeout
-   element in root queue.
-
- * <<A defaultMinSharePreemptionTimeout element>>, which sets the min share
-   preemption timeout for the root queue; overridden by minSharePreemptionTimeout
-   element in root queue.
-
- * <<A defaultFairSharePreemptionThreshold element>>, which sets the fair share
-   preemption threshold for the root queue; overridden by fairSharePreemptionThreshold
-   element in root queue.
-
- * <<A queueMaxAppsDefault element>>, which sets the default running app limit
-   for queues; overridden by the maxRunningApps element in each queue.
-
- * <<A queueMaxAMShareDefault element>>, which sets the default AM resource
-   limit for queues; overridden by the maxAMShare element in each queue.
-
- * <<A defaultQueueSchedulingPolicy element>>, which sets the default scheduling
-   policy for queues; overridden by the schedulingPolicy element in each queue
-   if specified. Defaults to "fair".
-
- * <<A queuePlacementPolicy element>>, which contains a list of rule elements
-   that tell the scheduler how to place incoming apps into queues. Rules
-   are applied in the order that they are listed. Rules may take arguments. All
-   rules accept the "create" argument, which indicates whether the rule can create
-   a new queue. "Create" defaults to true; if set to false and the rule would
-   place the app in a queue that is not configured in the allocations file, we
-   continue on to the next rule. The last rule must be one that can never issue a
-   continue.  Valid rules are:
-
-     * specified: the app is placed into the queue it requested. If the app
-       requested no queue, i.e. it specified "default", we continue. Queue
-       names starting or ending with a period, e.g. ".q1" or "q1.", will be
-       rejected.
-
-     * user: the app is placed into a queue with the name of the user who
-       submitted it. Periods in the username will be replaced with "_dot_",
-       i.e. the queue name for user "first.last" is "first_dot_last".
-
-     * primaryGroup: the app is placed into a queue with the name of the
-       primary group of the user who submitted it. Periods in the group name
-       will be replaced with "_dot_", i.e. the queue name for group "one.two"
-       is "one_dot_two".
-
-     * secondaryGroupExistingQueue: the app is placed into a queue with a name
-       that matches a secondary group of the user who submitted it. The first
-       secondary group that matches a configured queue will be selected.
-       Periods in group names will be replaced with "_dot_", i.e. a user with
-       "one.two" as one of their secondary groups would be placed into the
-       "one_dot_two" queue, if such a queue exists.
-
-     * nestedUserQueue: the app is placed into a queue with the name of the user
-       under the queue suggested by the nested rule. This is similar to the
-       'user' rule, the difference being that in the 'nestedUserQueue' rule,
-       user queues can be created under any parent queue, while the 'user' rule
-       creates user queues only under the root queue. Note that the
-       nestedUserQueue rule is applied only if the nested rule returns a parent
-       queue. One can configure a parent queue either by setting the 'type'
-       attribute of a queue to 'parent' or by configuring at least one leaf
-       under that queue, which makes it a parent. See the example allocation
-       file for a sample use case.
-
-     * default: the app is placed into the queue specified in the 'queue' attribute of the
-       default rule. If the 'queue' attribute is not specified, the app is placed into the 'root.default' queue.
-
-     * reject: the app is rejected.
-
-  An example allocation file is given here:
-
----
-<?xml version="1.0"?>
-<allocations>
-  <queue name="sample_queue">
-    <minResources>10000 mb,0vcores</minResources>
-    <maxResources>90000 mb,0vcores</maxResources>
-    <maxRunningApps>50</maxRunningApps>
-    <maxAMShare>0.1</maxAMShare>
-    <weight>2.0</weight>
-    <schedulingPolicy>fair</schedulingPolicy>
-    <queue name="sample_sub_queue">
-      <aclSubmitApps>charlie</aclSubmitApps>
-      <minResources>5000 mb,0vcores</minResources>
-    </queue>
-  </queue>
-
-  <queueMaxAMShareDefault>0.5</queueMaxAMShareDefault>
-
-  <!-- Queue 'secondary_group_queue' is a parent queue and may have
-       user queues under it -->
-  <queue name="secondary_group_queue" type="parent">
-    <weight>3.0</weight>
-  </queue>
-  
-  <user name="sample_user">
-    <maxRunningApps>30</maxRunningApps>
-  </user>
-  <userMaxAppsDefault>5</userMaxAppsDefault>
-  
-  <queuePlacementPolicy>
-    <rule name="specified" />
-    <rule name="primaryGroup" create="false" />
-    <rule name="nestedUserQueue">
-        <rule name="secondaryGroupExistingQueue" create="false" />
-    </rule>
-    <rule name="default" queue="sample_queue"/>
-  </queuePlacementPolicy>
-</allocations>
----
-
-  Note that for backwards compatibility with the original FairScheduler, "queue" elements can instead be named as "pool" elements.
-
-
-Queue Access Control Lists (ACLs)
-
-  Queue Access Control Lists (ACLs) allow administrators to control who may
-  take actions on particular queues. They are configured with the aclSubmitApps
-  and aclAdministerApps properties, which can be set per queue. Currently the
-  only supported administrative action is killing an application. Anybody who
-  may administer a queue may also submit applications to it. These properties
-  take values in a format like "user1,user2 group1,group2" or " group1,group2".
-  An action on a queue will be permitted if its user or group is in the ACL of
-  that queue or in the ACL of any of that queue's ancestors. So if queue2
-  is inside queue1, and user1 is in queue1's ACL, and user2 is in queue2's
-  ACL, then both users may submit to queue2.
-
-  <<Note:>> The delimiter is a space character. To specify only ACL groups, begin the 
-  value with a space character. 
-  
-  The root queue's ACLs are "*" by default which, because ACLs are passed down,
-  means that everybody may submit to and kill applications from every queue.
-  To start restricting access, change the root queue's ACLs to something other
-  than "*". 
-
-  
-* {Administration}
-
-  The fair scheduler provides support for administration at runtime through a few mechanisms:
-
-Modifying configuration at runtime
-
-  It is possible to modify minimum shares, limits, weights, preemption timeouts
-  and queue scheduling policies at runtime by editing the allocation file. The
-  scheduler will reload this file 10-15 seconds after it sees that it was
-  modified.
-
-Monitoring through web UI
-
-  Current applications, queues, and fair shares can be examined through the
-  ResourceManager's web interface, at
-  http://<ResourceManager URL>/cluster/scheduler.
-
-  The following fields can be seen for each queue on the web interface:
-  
- * Used Resources - The sum of resources allocated to containers within the queue. 
-
- * Num Active Applications - The number of applications in the queue that have
-   received at least one container.
- 
- * Num Pending Applications - The number of applications in the queue that have
-   not yet received any containers.
-
- * Min Resources - The configured minimum resources that are guaranteed to the queue.
-  	
- * Max Resources - The configured maximum resources that are allowed to the queue.
- 
- * Instantaneous Fair Share - The queue's instantaneous fair share of resources.
-   These shares consider only active queues (those with running applications),
-   and are used for scheduling decisions. Queues may be allocated resources
-   beyond their shares when other queues aren't using them. A queue whose
-   resource consumption lies at or below its instantaneous fair share will never
-   have its containers preempted.
-
- * Steady Fair Share - The queue's steady fair share of resources. These shares
-   consider all the queues irrespective of whether they are active (have
-   running applications) or not. These are computed less frequently and
-   change only when the configuration or capacity changes. They are meant to
-   provide visibility into resources the user can expect, and hence are displayed
-   in the Web UI.
-
-Moving applications between queues
-
-  The Fair Scheduler supports moving a running application to a different queue.
-  This can be useful for moving an important application to a higher priority
-  queue, or for moving an unimportant application to a lower priority queue.
-  Apps can be moved by running "yarn application -movetoqueue appID -queue
-  targetQueueName".
-  
-  When an application is moved to a queue, its existing allocations become
-  counted with the new queue's allocations instead of the old for purposes
-  of determining fairness. An attempt to move an application to a queue will
-  fail if the addition of the app's resources to that queue would violate its
-  maxRunningApps or maxResources constraints.
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManager.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManager.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManager.apt.vm
deleted file mode 100644
index 9ee942f..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManager.apt.vm
+++ /dev/null
@@ -1,64 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  NodeManager Overview.
-  ---
-  ---
-  ${maven.build.timestamp}
-
-NodeManager Overview.
-
-%{toc|section=1|fromDepth=0|toDepth=2}
-
-* Overview
-
-  The NodeManager is responsible for launching and managing containers on a node. Containers execute tasks as specified by the AppMaster.
-  
-* Health checker service
-
-  The NodeManager runs services to determine the health of the node it is executing on. The services perform checks on the disk as well as any user-specified tests. If any health check fails, the NodeManager marks the node as unhealthy and communicates this to the ResourceManager, which then stops assigning containers to the node. Communication of the node status is done as part of the heartbeat between the NodeManager and the ResourceManager. The intervals at which the disk checker and health monitor (described below) run don't affect the heartbeat intervals. When the heartbeat takes place, the status of both checks is used to determine the health of the node.
-
-  ** Disk checker
-
-    The disk checker checks the state of the disks that the NodeManager is configured to use (local-dirs and log-dirs, configured using yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs respectively). The checks include permissions and free disk space. It also checks that the filesystem isn't in a read-only state. The checks are run at 2-minute intervals by default but can be configured to run as often as the user desires. If a disk fails the check, the NodeManager stops using that particular disk but still reports the node status as healthy. However, if a number of disks fail the check (the number can be configured, as explained below), then the node is reported as unhealthy to the ResourceManager and new containers will not be assigned to the node. In addition, once a disk is marked as unhealthy, the NodeManager stops checking it to see if it has recovered (e.g. the disk became full and was then cleaned up). The only way for the NodeManager to use that disk again is to restart the software on the node. The following configuration parameters can be used to modify the disk checks:
-
-*------------------+----------------+------------------+
-|| Configuration name || Allowed Values || Description |
-*------------------+----------------+------------------+
-| yarn.nodemanager.disk-health-checker.enable | true, false | Enable or disable the disk health checker service |
-*------------------+----------------+------------------+
-| yarn.nodemanager.disk-health-checker.interval-ms | Positive integer | The interval, in milliseconds, at which the disk checker should run; the default value is 2 minutes |
-*------------------+----------------+------------------+
-| yarn.nodemanager.disk-health-checker.min-healthy-disks | Float between 0-1 | The minimum fraction of disks that must pass the check for the NodeManager to mark the node as healthy; the default is 0.25 |
-*------------------+----------------+------------------+
-| yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage | Float between 0-100 | The maximum percentage of disk space that may be utilized before a disk is marked as unhealthy by the disk checker service. This check is run for every disk used by the NodeManager. The default value is 100 i.e. the entire disk can be used. |
-*------------------+----------------+------------------+
-| yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb | Integer | The minimum amount of free space that must be available on the disk for the disk checker service to mark the disk as healthy. This check is run for every disk used by the NodeManager. The default value is 0 i.e. the entire disk can be used. |
-*------------------+----------------+------------------+
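-
-    For example, a yarn-site.xml sketch tightening two of these checks (the
-    values are illustrative):
-
-----
-<property>
-  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
-  <value>90</value>
-</property>
-
-<property>
-  <name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name>
-  <value>1024</value>
-</property>
-----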
-
- ** External health script
-
-    Users may specify their own health checker script that will be invoked by the health checker service. Users may specify a timeout as well as options to be passed to the script. If the script exits with a non-zero exit code, times out, or results in an exception being thrown, the node is marked as unhealthy. Please note that if the script cannot be executed due to permissions or an incorrect path, etc., then it counts as a failure and the node will be reported as unhealthy. Note that specifying a health check script is not mandatory. If no script is specified, only the disk checker status will be used to determine the health of the node. The following configuration parameters can be used to set the health script:
-
-*------------------+----------------+------------------+
-|| Configuration name || Allowed Values || Description |
-*------------------+----------------+------------------+
-| yarn.nodemanager.health-checker.interval-ms | Positive integer | The interval, in milliseconds, at which the health checker service runs; the default value is 10 minutes. |
-*------------------+----------------+------------------+
-| yarn.nodemanager.health-checker.script.timeout-ms | Positive integer | The timeout for the health script that's executed; the default value is 20 minutes. |
-*------------------+----------------+------------------+
-| yarn.nodemanager.health-checker.script.path | String | Absolute path to the health check script to be run. |
-*------------------+----------------+------------------+
-| yarn.nodemanager.health-checker.script.opts | String | Arguments to be passed to the script when the script is executed. |
-*------------------+----------------+------------------+
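-
-    For example, a sketch wiring up an external script (the path and argument
-    are placeholders):
-
-----
-<property>
-  <name>yarn.nodemanager.health-checker.script.path</name>
-  <value>/etc/hadoop/health_check.sh</value>
-</property>
-
-<property>
-  <name>yarn.nodemanager.health-checker.script.opts</name>
-  <value>--verbose</value>
-</property>
-----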
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerCgroups.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerCgroups.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerCgroups.apt.vm
deleted file mode 100644
index f228e3b..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerCgroups.apt.vm
+++ /dev/null
@@ -1,77 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Using CGroups with YARN
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Using CGroups with YARN
-
-%{toc|section=1|fromDepth=0|toDepth=2}
-
- CGroups is a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. CGroups is a Linux kernel feature and was merged into kernel version 2.6.24. From a YARN perspective, this allows containers to be limited in their resource usage. A good example of this is CPU usage. Without CGroups, it becomes hard to limit container CPU usage. Currently, CGroups is only used for limiting CPU usage.
-
-* CGroups configuration
-
- The config variables related to using CGroups fall into two groups, both set in yarn-site.xml. A combined example follows the two lists below.
-
- The following settings are related to setting up CGroups:
-
-  [[1]] yarn.nodemanager.container-executor.class
-
-    This should be set to "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor". CGroups is a Linux kernel feature and is exposed via the LinuxContainerExecutor.
-
-  [[2]] yarn.nodemanager.linux-container-executor.resources-handler.class
-
-    This should be set to "org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler". Using the LinuxContainerExecutor doesn't force you to use CGroups. If you wish to use CGroups, the resources-handler class must be set to CgroupsLCEResourcesHandler.
-
-  [[3]] yarn.nodemanager.linux-container-executor.cgroups.hierarchy
-
-    The cgroups hierarchy under which to place YARN processes (cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have been pre-configured), then this cgroups hierarchy must already exist.
-
-  [[4]] yarn.nodemanager.linux-container-executor.cgroups.mount
-
-    Whether the LCE should attempt to mount cgroups if they are not found; can be true or false.
-
-  [[5]] yarn.nodemanager.linux-container-executor.cgroups.mount-path
-
-    Where the LCE should attempt to mount cgroups if not found. Common locations include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux distribution in use. This path must exist before the NodeManager is launched. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and yarn.nodemanager.linux-container-executor.cgroups.mount is true. A point to note here is that the container-executor binary will try to mount the path specified + "/" + the subsystem. In our case, since we are trying to limit CPU the binary tries to mount the path specified + "/cpu" and that's the path it expects to exist.
-
-  [[6]] yarn.nodemanager.linux-container-executor.group
-
-    The Unix group of the NodeManager. It should match the setting in "container-executor.cfg". This configuration is required for validating the secure access of the container-executor binary.
-
- The following settings are related to limiting the resource usage of YARN containers:
-
-  [[1]] yarn.nodemanager.resource.percentage-physical-cpu-limit
-
-    This setting lets you limit the cpu usage of all YARN containers. It sets a hard upper limit on the cumulative CPU usage of the containers. For example, if set to 60, the combined CPU usage of all YARN containers will not exceed 60%.
-
-  [[2]] yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage
-
-    CGroups allows CPU usage limits to be hard or soft. When this setting is true, containers cannot use more CPU than they were allocated, even if spare CPU is available; this ensures that containers can only use the CPU they were allocated. When set to false, containers can use spare CPU if available. Note that irrespective of whether this is set to true or false, at no time can the combined CPU usage of all containers exceed the value specified in "yarn.nodemanager.resource.percentage-physical-cpu-limit".
-
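
As an illustration of the settings above, a hedged yarn-site.xml sketch that enables CGroups-based CPU limiting. The mount path and the 60% cap are example values, not defaults.

  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
    <value>true</value> <!-- let the LCE mount cgroups if not already mounted -->
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
    <value>/sys/fs/cgroup</value> <!-- example path; must exist before the NM starts -->
  </property>
  <property>
    <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
    <value>60</value> <!-- cap the combined CPU usage of all containers at 60% -->
  </property>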
-* CGroups and security
-
- CGroups itself has no requirements related to security. However, the LinuxContainerExecutor does have some requirements. If running in non-secure mode, by default, the LCE runs all jobs as the user "nobody". This user can be changed by setting "yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user" to the desired user. However, it can also be configured to run jobs as the user submitting the job; in that case, "yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users" should be set to false. (A configuration sketch follows the table below.)
-
-*-----------+-----------+---------------------------+
-|| yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user || yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users || User running jobs |
-*-----------+-----------+---------------------------+
-| (default) | (default) | nobody                    |
-*-----------+-----------+---------------------------+
-| yarn      | (default) | yarn                      |
-*-----------+-----------+---------------------------+
-| yarn      | false     | (User submitting the job) |
-*-----------+-----------+---------------------------+


[26/43] hadoop git commit: HDFS-7785. Improve diagnostics information for HttpPutFailedException. Contributed by Chengbing Liu.

Posted by zj...@apache.org.
HDFS-7785. Improve diagnostics information for HttpPutFailedException. Contributed by Chengbing Liu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/c5eac9c6
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/c5eac9c6
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/c5eac9c6

Branch: refs/heads/YARN-2928
Commit: c5eac9c6fe937ff32f4efed89d34c06974fac4d6
Parents: 5d0bae5
Author: Haohui Mai <wh...@apache.org>
Authored: Mon Mar 2 15:35:02 2015 -0800
Committer: Haohui Mai <wh...@apache.org>
Committed: Mon Mar 2 15:35:02 2015 -0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt                      | 3 +++
 .../org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java  | 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5eac9c6/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index d5208da..43505d7 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1064,6 +1064,9 @@ Release 2.7.0 - UNRELEASED
     HDFS-6753. Initialize checkDisk when DirectoryScanner not able to get
     files list for scanning (J.Andreina via vinayakumarb)
 
+    HDFS-7785. Improve diagnostics information for HttpPutFailedException.
+    (Chengbing Liu via wheat9)
+
     BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
       HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/c5eac9c6/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
index c1e9d7f..0d32758 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/TransferFsImage.java
@@ -291,7 +291,9 @@ public class TransferFsImage {
 
       int responseCode = connection.getResponseCode();
       if (responseCode != HttpURLConnection.HTTP_OK) {
-        throw new HttpPutFailedException(connection.getResponseMessage(),
+        throw new HttpPutFailedException(String.format(
+            "Image uploading failed, status: %d, url: %s, message: %s",
+            responseCode, urlWithParams, connection.getResponseMessage()),
             responseCode);
       }
     } catch (AuthenticationException e) {
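
For illustration only, a self-contained sketch of the message the new format string produces; the status code, URL, and response message below are made-up values, not output from this commit:

  // Standalone demo mirroring the format string added in TransferFsImage.
  public class HttpPutMessageDemo {
    public static void main(String[] args) {
      int responseCode = 403;                                  // hypothetical status
      String urlWithParams = "http://nn1:50070/imagetransfer"; // hypothetical URL
      String responseMessage = "Forbidden";                    // hypothetical message
      System.out.println(String.format(
          "Image uploading failed, status: %d, url: %s, message: %s",
          responseCode, urlWithParams, responseMessage));
      // prints: Image uploading failed, status: 403, url: http://nn1:50070/imagetransfer, message: Forbidden
    }
  }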


[35/43] hadoop git commit: HADOOP-11602. Fix toUpperCase/toLowerCase to use Locale.ENGLISH. (ozawa)

Posted by zj...@apache.org.
HADOOP-11602. Fix toUpperCase/toLowerCase to use Locale.ENGLISH. (ozawa)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d1c6accb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d1c6accb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d1c6accb

Branch: refs/heads/YARN-2928
Commit: d1c6accb6f87b08975175580e15f1ff1fe29ab04
Parents: b442aee
Author: Tsuyoshi Ozawa <oz...@apache.org>
Authored: Tue Mar 3 14:12:34 2015 +0900
Committer: Tsuyoshi Ozawa <oz...@apache.org>
Committed: Tue Mar 3 14:17:52 2015 +0900

----------------------------------------------------------------------
 .../classification/tools/StabilityOptions.java  |  5 ++-
 .../AltKerberosAuthenticationHandler.java       |  6 ++-
 .../authentication/util/TestKerberosUtil.java   | 14 ++++---
 hadoop-common-project/hadoop-common/CHANGES.txt |  2 +
 .../org/apache/hadoop/conf/Configuration.java   |  6 +--
 .../org/apache/hadoop/crypto/CipherSuite.java   |  3 +-
 .../hadoop/crypto/key/JavaKeyStoreProvider.java |  3 +-
 .../java/org/apache/hadoop/fs/FileSystem.java   |  7 +++-
 .../java/org/apache/hadoop/fs/StorageType.java  |  3 +-
 .../apache/hadoop/fs/permission/AclEntry.java   |  5 ++-
 .../apache/hadoop/fs/shell/XAttrCommands.java   |  2 +-
 .../org/apache/hadoop/fs/shell/find/Name.java   |  5 ++-
 .../io/compress/CompressionCodecFactory.java    |  7 ++--
 .../hadoop/metrics2/impl/MetricsConfig.java     |  7 ++--
 .../hadoop/metrics2/impl/MetricsSystemImpl.java |  5 ++-
 .../hadoop/security/SaslPropertiesResolver.java |  3 +-
 .../apache/hadoop/security/SecurityUtil.java    | 12 +++---
 .../hadoop/security/WhitelistBasedResolver.java |  3 +-
 .../security/ssl/FileBasedKeyStoresFactory.java |  4 +-
 .../apache/hadoop/security/ssl/SSLFactory.java  |  5 ++-
 .../security/ssl/SSLHostnameVerifier.java       | 10 +++--
 .../DelegationTokenAuthenticationHandler.java   |  3 +-
 .../web/DelegationTokenAuthenticator.java       |  3 +-
 .../apache/hadoop/util/ComparableVersion.java   |  3 +-
 .../org/apache/hadoop/util/StringUtils.java     | 40 +++++++++++++++++++-
 .../hadoop/fs/FileSystemContractBaseTest.java   |  4 +-
 .../java/org/apache/hadoop/ipc/TestIPC.java     |  2 +-
 .../java/org/apache/hadoop/ipc/TestSaslRPC.java |  2 +-
 .../hadoop/security/TestSecurityUtil.java       | 10 +++--
 .../security/TestUserGroupInformation.java      |  5 ++-
 .../hadoop/test/TimedOutTestsListener.java      |  6 ++-
 .../org/apache/hadoop/util/TestStringUtils.java | 21 ++++++++++
 .../org/apache/hadoop/util/TestWinUtils.java    |  6 ++-
 .../java/org/apache/hadoop/nfs/NfsExports.java  |  5 ++-
 .../server/CheckUploadContentTypeFilter.java    |  4 +-
 .../hadoop/fs/http/server/FSOperations.java     |  7 +++-
 .../http/server/HttpFSParametersProvider.java   |  4 +-
 .../org/apache/hadoop/lib/server/Server.java    |  3 +-
 .../service/hadoop/FileSystemAccessService.java |  6 ++-
 .../org/apache/hadoop/lib/wsrs/EnumParam.java   |  2 +-
 .../apache/hadoop/lib/wsrs/EnumSetParam.java    |  3 +-
 .../hadoop/lib/wsrs/ParametersProvider.java     |  3 +-
 .../org/apache/hadoop/hdfs/XAttrHelper.java     | 19 ++++++----
 .../hadoop/hdfs/protocol/HdfsConstants.java     |  3 +-
 .../BlockStoragePolicySuite.java                |  4 +-
 .../hdfs/server/common/HdfsServerConstants.java |  5 ++-
 .../hdfs/server/datanode/StorageLocation.java   |  4 +-
 .../hdfs/server/namenode/FSEditLogOp.java       |  3 +-
 .../namenode/QuotaByStorageTypeEntry.java       |  3 +-
 .../hdfs/server/namenode/SecondaryNameNode.java |  2 +-
 .../org/apache/hadoop/hdfs/tools/GetConf.java   | 17 +++++----
 .../OfflineEditsVisitorFactory.java             |  7 ++--
 .../offlineImageViewer/FSImageHandler.java      |  4 +-
 .../org/apache/hadoop/hdfs/web/AuthFilter.java  |  3 +-
 .../org/apache/hadoop/hdfs/web/ParamFilter.java |  3 +-
 .../hadoop/hdfs/web/WebHdfsFileSystem.java      |  5 ++-
 .../hadoop/hdfs/web/resources/EnumParam.java    |  3 +-
 .../hadoop/hdfs/web/resources/EnumSetParam.java |  3 +-
 .../namenode/snapshot/TestSnapshotManager.java  |  6 +--
 .../jobhistory/JobHistoryEventHandler.java      |  3 +-
 .../mapreduce/v2/app/webapp/AppController.java  |  6 +--
 .../apache/hadoop/mapreduce/TypeConverter.java  |  3 +-
 .../apache/hadoop/mapreduce/v2/util/MRApps.java |  4 +-
 .../hadoop/mapreduce/TestTypeConverter.java     |  6 ++-
 .../java/org/apache/hadoop/mapred/Task.java     |  2 +-
 .../counters/FileSystemCounterGroup.java        |  4 +-
 .../mapreduce/filecache/DistributedCache.java   |  4 +-
 .../hadoop/mapreduce/lib/db/DBInputFormat.java  |  5 ++-
 .../org/apache/hadoop/mapreduce/tools/CLI.java  |  9 +++--
 .../java/org/apache/hadoop/fs/TestDFSIO.java    | 18 ++++-----
 .../org/apache/hadoop/fs/TestFileSystem.java    |  4 +-
 .../org/apache/hadoop/fs/slive/Constants.java   |  6 ++-
 .../apache/hadoop/fs/slive/OperationData.java   |  3 +-
 .../apache/hadoop/fs/slive/OperationOutput.java |  4 +-
 .../org/apache/hadoop/fs/slive/SliveTest.java   |  3 +-
 .../java/org/apache/hadoop/io/FileBench.java    | 17 +++++----
 .../org/apache/hadoop/mapred/TestMapRed.java    |  3 +-
 .../apache/hadoop/examples/DBCountPageView.java |  2 +-
 .../plugin/versioninfo/VersionInfoMojo.java     |  4 +-
 .../fs/azure/AzureNativeFileSystemStore.java    |  4 +-
 .../apache/hadoop/tools/util/DistCpUtils.java   | 12 ++++--
 .../java/org/apache/hadoop/tools/DistCpV1.java  |  4 +-
 .../gridmix/GridmixJobSubmissionPolicy.java     |  3 +-
 .../TestSwiftFileSystemExtendedContract.java    |  4 +-
 .../hadoop/tools/rumen/HadoopLogsAnalyzer.java  | 33 ++++++++--------
 .../apache/hadoop/tools/rumen/JobBuilder.java   |  2 +-
 .../apache/hadoop/tools/rumen/LoggedTask.java   |  3 +-
 .../hadoop/tools/rumen/LoggedTaskAttempt.java   |  3 +-
 .../apache/hadoop/streaming/Environment.java    |  3 +-
 .../hadoop/yarn/client/cli/ApplicationCLI.java  |  7 ++--
 .../apache/hadoop/yarn/client/cli/NodeCLI.java  |  3 +-
 .../impl/pb/GetApplicationsRequestPBImpl.java   |  6 ++-
 .../pb/ApplicationSubmissionContextPBImpl.java  |  3 +-
 .../org/apache/hadoop/yarn/util/FSDownload.java |  6 +--
 .../hadoop/yarn/webapp/hamlet/HamletGen.java    |  6 +--
 .../registry/client/binding/RegistryUtils.java  |  3 +-
 .../webapp/AHSWebServices.java                  |  4 +-
 .../timeline/webapp/TimelineWebServices.java    |  3 +-
 .../hadoop/yarn/server/webapp/WebServices.java  | 18 +++++----
 .../server/resourcemanager/ClientRMService.java |  3 +-
 .../resource/ResourceWeights.java               |  3 +-
 .../CapacitySchedulerConfiguration.java         |  4 +-
 .../fair/FairSchedulerConfiguration.java        |  3 +-
 .../scheduler/fair/SchedulingPolicy.java        |  3 +-
 .../resourcemanager/webapp/NodesPage.java       |  2 +-
 .../resourcemanager/webapp/RMWebServices.java   | 20 ++++++----
 106 files changed, 407 insertions(+), 224 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/StabilityOptions.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/StabilityOptions.java b/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/StabilityOptions.java
index dbce31e..657dbce 100644
--- a/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/StabilityOptions.java
+++ b/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/tools/StabilityOptions.java
@@ -21,6 +21,7 @@ import com.sun.javadoc.DocErrorReporter;
 
 import java.util.ArrayList;
 import java.util.List;
+import java.util.Locale;
 
 class StabilityOptions {
   public static final String STABLE_OPTION = "-stable";
@@ -28,7 +29,7 @@ class StabilityOptions {
   public static final String UNSTABLE_OPTION = "-unstable";
 
   public static Integer optionLength(String option) {
-    String opt = option.toLowerCase();
+    String opt = option.toLowerCase(Locale.ENGLISH);
     if (opt.equals(UNSTABLE_OPTION)) return 1;
     if (opt.equals(EVOLVING_OPTION)) return 1;
     if (opt.equals(STABLE_OPTION)) return 1;
@@ -38,7 +39,7 @@ class StabilityOptions {
   public static void validOptions(String[][] options,
       DocErrorReporter reporter) {
     for (int i = 0; i < options.length; i++) {
-      String opt = options[i][0].toLowerCase();
+      String opt = options[i][0].toLowerCase(Locale.ENGLISH);
       if (opt.equals(UNSTABLE_OPTION)) {
 	RootDocProcessor.stability = UNSTABLE_OPTION;
       } else if (opt.equals(EVOLVING_OPTION)) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AltKerberosAuthenticationHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AltKerberosAuthenticationHandler.java b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AltKerberosAuthenticationHandler.java
index 987330f..dae3b50 100644
--- a/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AltKerberosAuthenticationHandler.java
+++ b/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AltKerberosAuthenticationHandler.java
@@ -14,6 +14,7 @@
 package org.apache.hadoop.security.authentication.server;
 
 import java.io.IOException;
+import java.util.Locale;
 import java.util.Properties;
 import javax.servlet.ServletException;
 import javax.servlet.http.HttpServletRequest;
@@ -68,7 +69,8 @@ public abstract class AltKerberosAuthenticationHandler
             NON_BROWSER_USER_AGENTS, NON_BROWSER_USER_AGENTS_DEFAULT)
             .split("\\W*,\\W*");
     for (int i = 0; i < nonBrowserUserAgents.length; i++) {
-        nonBrowserUserAgents[i] = nonBrowserUserAgents[i].toLowerCase();
+        nonBrowserUserAgents[i] =
+            nonBrowserUserAgents[i].toLowerCase(Locale.ENGLISH);
     }
   }
 
@@ -120,7 +122,7 @@ public abstract class AltKerberosAuthenticationHandler
     if (userAgent == null) {
       return false;
     }
-    userAgent = userAgent.toLowerCase();
+    userAgent = userAgent.toLowerCase(Locale.ENGLISH);
     boolean isBrowser = true;
     for (String nonBrowserUserAgent : nonBrowserUserAgents) {
         if (userAgent.contains(nonBrowserUserAgent)) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
index b0e8f04..89e07d1 100644
--- a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
+++ b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
@@ -21,6 +21,7 @@ import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.List;
+import java.util.Locale;
 import java.util.regex.Pattern;
 
 import org.apache.directory.server.kerberos.shared.keytab.Keytab;
@@ -58,24 +59,25 @@ public class TestKerberosUtil {
 
     // send null hostname
     Assert.assertEquals("When no hostname is sent",
-        service + "/" + localHostname.toLowerCase(),
+        service + "/" + localHostname.toLowerCase(Locale.ENGLISH),
         KerberosUtil.getServicePrincipal(service, null));
     // send empty hostname
     Assert.assertEquals("When empty hostname is sent",
-        service + "/" + localHostname.toLowerCase(),
+        service + "/" + localHostname.toLowerCase(Locale.ENGLISH),
         KerberosUtil.getServicePrincipal(service, ""));
     // send 0.0.0.0 hostname
     Assert.assertEquals("When 0.0.0.0 hostname is sent",
-        service + "/" + localHostname.toLowerCase(),
+        service + "/" + localHostname.toLowerCase(Locale.ENGLISH),
         KerberosUtil.getServicePrincipal(service, "0.0.0.0"));
     // send uppercase hostname
     Assert.assertEquals("When uppercase hostname is sent",
-        service + "/" + testHost.toLowerCase(),
+        service + "/" + testHost.toLowerCase(Locale.ENGLISH),
         KerberosUtil.getServicePrincipal(service, testHost));
     // send lowercase hostname
     Assert.assertEquals("When lowercase hostname is sent",
-        service + "/" + testHost.toLowerCase(),
-        KerberosUtil.getServicePrincipal(service, testHost.toLowerCase()));
+        service + "/" + testHost.toLowerCase(Locale.ENGLISH),
+        KerberosUtil.getServicePrincipal(
+            service, testHost.toLowerCase(Locale.ENGLISH)));
   }
   
   @Test

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index ebe23c7..11785f2 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -409,6 +409,8 @@ Trunk (Unreleased)
     HADOOP-10774. Update KerberosTestUtils for hadoop-auth tests when using
     IBM Java (sangamesh via aw)
 
+    HADOOP-11602. Fix toUpperCase/toLowerCase to use Locale.ENGLISH. (ozawa)
+
   OPTIMIZATIONS
 
     HADOOP-7761. Improve the performance of raw comparisons. (todd)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
index 02654b7..753f515 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
@@ -1451,11 +1451,9 @@ public class Configuration implements Iterable<Map.Entry<String,String>>,
       return defaultValue;
     }
 
-    valueString = valueString.toLowerCase();
-
-    if ("true".equals(valueString))
+    if (StringUtils.equalsIgnoreCase("true", valueString))
       return true;
-    else if ("false".equals(valueString))
+    else if (StringUtils.equalsIgnoreCase("false", valueString))
       return false;
     else return defaultValue;
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
index c9355d7..a811aa7 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CipherSuite.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.crypto;
 
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * Defines properties of a CipherSuite. Modeled after the ciphers in
@@ -97,7 +98,7 @@ public enum CipherSuite {
     String[] parts = name.split("/");
     StringBuilder suffix = new StringBuilder();
     for (String part : parts) {
-      suffix.append(".").append(part.toLowerCase());
+      suffix.append(".").append(StringUtils.toLowerCase(part));
     }
     
     return suffix.toString();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
index bfec1ef..c0d510d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/JavaKeyStoreProvider.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.security.ProviderUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -422,7 +423,7 @@ public class JavaKeyStoreProvider extends KeyProvider {
   @Override
   public KeyVersion createKey(String name, byte[] material,
                                Options options) throws IOException {
-    Preconditions.checkArgument(name.equals(name.toLowerCase()),
+    Preconditions.checkArgument(name.equals(StringUtils.toLowerCase(name)),
         "Uppercase key names are unsupported: %s", name);
     writeLock.lock();
     try {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
index cfa5198..42434f1 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
@@ -65,6 +65,7 @@ import org.apache.hadoop.util.DataChecksum;
 import org.apache.hadoop.util.Progressable;
 import org.apache.hadoop.util.ReflectionUtils;
 import org.apache.hadoop.util.ShutdownHookManager;
+import org.apache.hadoop.util.StringUtils;
 
 import com.google.common.annotations.VisibleForTesting;
 
@@ -2795,8 +2796,10 @@ public abstract class FileSystem extends Configured implements Closeable {
       }
 
       Key(URI uri, Configuration conf, long unique) throws IOException {
-        scheme = uri.getScheme()==null?"":uri.getScheme().toLowerCase();
-        authority = uri.getAuthority()==null?"":uri.getAuthority().toLowerCase();
+        scheme = uri.getScheme()==null ?
+            "" : StringUtils.toLowerCase(uri.getScheme());
+        authority = uri.getAuthority()==null ?
+            "" : StringUtils.toLowerCase(uri.getAuthority());
         this.unique = unique;
         
         this.ugi = UserGroupInformation.getCurrentUser();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
index e306502..68069d7 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageType.java
@@ -24,6 +24,7 @@ import java.util.List;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * Defines the types of supported storage media. The default storage
@@ -78,7 +79,7 @@ public enum StorageType {
   }
 
   public static StorageType parseStorageType(String s) {
-    return StorageType.valueOf(s.toUpperCase());
+    return StorageType.valueOf(StringUtils.toUpperCase(s));
   }
 
   private static List<StorageType> getNonTransientTypes() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
index b9def64..45402f8 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
@@ -106,7 +106,7 @@ public class AclEntry {
       sb.append("default:");
     }
     if (type != null) {
-      sb.append(type.toString().toLowerCase());
+      sb.append(StringUtils.toLowerCase(type.toString()));
     }
     sb.append(':');
     if (name != null) {
@@ -263,7 +263,8 @@ public class AclEntry {
 
     AclEntryType aclType = null;
     try {
-      aclType = Enum.valueOf(AclEntryType.class, split[index].toUpperCase());
+      aclType = Enum.valueOf(
+          AclEntryType.class, StringUtils.toUpperCase(split[index]));
       builder.setType(aclType);
       index++;
     } catch (IllegalArgumentException iae) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/XAttrCommands.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/XAttrCommands.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/XAttrCommands.java
index 4efda87..d55c80b 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/XAttrCommands.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/XAttrCommands.java
@@ -79,7 +79,7 @@ class XAttrCommands extends FsCommand {
       String en = StringUtils.popOptionWithArgument("-e", args);
       if (en != null) {
         try {
-          encoding = enValueOfFunc.apply(en.toUpperCase(Locale.ENGLISH));
+          encoding = enValueOfFunc.apply(StringUtils.toUpperCase(en));
         } catch (IllegalArgumentException e) {
           throw new IllegalArgumentException(
               "Invalid/unsupported encoding option specified: " + en);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Name.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Name.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Name.java
index 88314c6..c89daa9 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Name.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/find/Name.java
@@ -22,6 +22,7 @@ import java.util.Deque;
 
 import org.apache.hadoop.fs.GlobPattern;
 import org.apache.hadoop.fs.shell.PathData;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * Implements the -name expression for the
@@ -73,7 +74,7 @@ final class Name extends BaseExpression {
   public void prepare() throws IOException {
     String argPattern = getArgument(1);
     if (!caseSensitive) {
-      argPattern = argPattern.toLowerCase();
+      argPattern = StringUtils.toLowerCase(argPattern);
     }
     globPattern = new GlobPattern(argPattern);
   }
@@ -82,7 +83,7 @@ final class Name extends BaseExpression {
   public Result apply(PathData item, int depth) throws IOException {
     String name = getPath(item).getName();
     if (!caseSensitive) {
-      name = name.toLowerCase();
+      name = StringUtils.toLowerCase(name);
     }
     if (globPattern.matches(name)) {
       return Result.PASS;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
index 7476a15..8fff75d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
@@ -27,6 +27,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * A factory that will find the correct codec for a given filename.
@@ -66,10 +67,10 @@ public class CompressionCodecFactory {
     codecsByClassName.put(codec.getClass().getCanonicalName(), codec);
 
     String codecName = codec.getClass().getSimpleName();
-    codecsByName.put(codecName.toLowerCase(), codec);
+    codecsByName.put(StringUtils.toLowerCase(codecName), codec);
     if (codecName.endsWith("Codec")) {
       codecName = codecName.substring(0, codecName.length() - "Codec".length());
-      codecsByName.put(codecName.toLowerCase(), codec);
+      codecsByName.put(StringUtils.toLowerCase(codecName), codec);
     }
   }
 
@@ -246,7 +247,7 @@ public class CompressionCodecFactory {
       if (codec == null) {
         // trying to get the codec by name in case the name was specified
         // instead a class
-        codec = codecsByName.get(codecName.toLowerCase());
+        codec = codecsByName.get(StringUtils.toLowerCase(codecName));
       }
       return codec;
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsConfig.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsConfig.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsConfig.java
index 167205e..cbe60b5 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsConfig.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsConfig.java
@@ -44,6 +44,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.metrics2.MetricsFilter;
 import org.apache.hadoop.metrics2.MetricsPlugin;
 import org.apache.hadoop.metrics2.filter.GlobFilter;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * Metrics configuration for MetricsSystemImpl
@@ -85,12 +86,12 @@ class MetricsConfig extends SubsetConfiguration {
   private ClassLoader pluginLoader;
 
   MetricsConfig(Configuration c, String prefix) {
-    super(c, prefix.toLowerCase(Locale.US), ".");
+    super(c, StringUtils.toLowerCase(prefix), ".");
   }
 
   static MetricsConfig create(String prefix) {
-    return loadFirst(prefix, "hadoop-metrics2-"+ prefix.toLowerCase(Locale.US)
-                     +".properties", DEFAULT_FILE_NAME);
+    return loadFirst(prefix, "hadoop-metrics2-" +
+        StringUtils.toLowerCase(prefix) + ".properties", DEFAULT_FILE_NAME);
   }
 
   static MetricsConfig create(String prefix, String... fileNames) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
index 32b00f3..a94d814 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
@@ -61,6 +61,7 @@ import org.apache.hadoop.metrics2.lib.MetricsRegistry;
 import org.apache.hadoop.metrics2.lib.MetricsSourceBuilder;
 import org.apache.hadoop.metrics2.lib.MutableStat;
 import org.apache.hadoop.metrics2.util.MBeans;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Time;
 
 /**
@@ -616,7 +617,7 @@ public class MetricsSystemImpl extends MetricsSystem implements MetricsSource {
     LOG.debug("from environment variable: "+ System.getenv(MS_INIT_MODE_KEY));
     String m = System.getProperty(MS_INIT_MODE_KEY);
     String m2 = m == null ? System.getenv(MS_INIT_MODE_KEY) : m;
-    return InitMode.valueOf((m2 == null ? InitMode.NORMAL.name() : m2)
-                            .toUpperCase(Locale.US));
+    return InitMode.valueOf(
+        StringUtils.toUpperCase((m2 == null ? InitMode.NORMAL.name() : m2)));
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java
index 0b49cfb..305443c 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPropertiesResolver.java
@@ -66,7 +66,8 @@ public class SaslPropertiesResolver implements Configurable{
         CommonConfigurationKeysPublic.HADOOP_RPC_PROTECTION,
         QualityOfProtection.AUTHENTICATION.toString());
     for (int i=0; i < qop.length; i++) {
-      qop[i] = QualityOfProtection.valueOf(qop[i].toUpperCase(Locale.ENGLISH)).getSaslQop();
+      qop[i] = QualityOfProtection.valueOf(
+          StringUtils.toUpperCase(qop[i])).getSaslQop();
     }
     properties.put(Sasl.QOP, StringUtils.join(",", qop));
     properties.put(Sasl.SERVER_AUTH, "true");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
index 7cbee26..eddf98d 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
@@ -27,7 +27,6 @@ import java.security.PrivilegedAction;
 import java.security.PrivilegedExceptionAction;
 import java.util.Arrays;
 import java.util.List;
-import java.util.Locale;
 import java.util.ServiceLoader;
 
 import javax.security.auth.kerberos.KerberosPrincipal;
@@ -44,6 +43,7 @@ import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenInfo;
+import org.apache.hadoop.util.StringUtils;
 
 
 //this will need to be replaced someday when there is a suitable replacement
@@ -182,7 +182,8 @@ public class SecurityUtil {
     if (fqdn == null || fqdn.isEmpty() || fqdn.equals("0.0.0.0")) {
       fqdn = getLocalHostName();
     }
-    return components[0] + "/" + fqdn.toLowerCase(Locale.US) + "@" + components[2];
+    return components[0] + "/" +
+        StringUtils.toLowerCase(fqdn) + "@" + components[2];
   }
   
   static String getLocalHostName() throws UnknownHostException {
@@ -379,7 +380,7 @@ public class SecurityUtil {
       }
       host = addr.getAddress().getHostAddress();
     } else {
-      host = addr.getHostName().toLowerCase();
+      host = StringUtils.toLowerCase(addr.getHostName());
     }
     return new Text(host + ":" + addr.getPort());
   }
@@ -606,7 +607,8 @@ public class SecurityUtil {
   public static AuthenticationMethod getAuthenticationMethod(Configuration conf) {
     String value = conf.get(HADOOP_SECURITY_AUTHENTICATION, "simple");
     try {
-      return Enum.valueOf(AuthenticationMethod.class, value.toUpperCase(Locale.ENGLISH));
+      return Enum.valueOf(AuthenticationMethod.class,
+          StringUtils.toUpperCase(value));
     } catch (IllegalArgumentException iae) {
       throw new IllegalArgumentException("Invalid attribute value for " +
           HADOOP_SECURITY_AUTHENTICATION + " of " + value);
@@ -619,7 +621,7 @@ public class SecurityUtil {
       authenticationMethod = AuthenticationMethod.SIMPLE;
     }
     conf.set(HADOOP_SECURITY_AUTHENTICATION,
-             authenticationMethod.toString().toLowerCase(Locale.ENGLISH));
+        StringUtils.toLowerCase(authenticationMethod.toString()));
   }
 
   /*

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
index dc0815e..8d4df64 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/WhitelistBasedResolver.java
@@ -138,7 +138,8 @@ public class WhitelistBasedResolver extends SaslPropertiesResolver {
         QualityOfProtection.PRIVACY.toString());
 
     for (int i=0; i < qop.length; i++) {
-      qop[i] = QualityOfProtection.valueOf(qop[i].toUpperCase()).getSaslQop();
+      qop[i] = QualityOfProtection.valueOf(
+          StringUtils.toUpperCase(qop[i])).getSaslQop();
     }
 
     saslProps.put(Sasl.QOP, StringUtils.join(",", qop));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
index 4b81e17..609c71f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java
@@ -23,6 +23,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.StringUtils;
 
 import javax.net.ssl.KeyManager;
 import javax.net.ssl.KeyManagerFactory;
@@ -94,7 +95,8 @@ public class FileBasedKeyStoresFactory implements KeyStoresFactory {
   @VisibleForTesting
   public static String resolvePropertyName(SSLFactory.Mode mode,
                                            String template) {
-    return MessageFormat.format(template, mode.toString().toLowerCase());
+    return MessageFormat.format(
+        template, StringUtils.toLowerCase(mode.toString()));
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
index bbea33b..edec347 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
@@ -22,6 +22,7 @@ import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.security.authentication.client.ConnectionConfigurator;
 import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.StringUtils;
 import static org.apache.hadoop.util.PlatformName.IBM_JAVA;
 
 import javax.net.ssl.HostnameVerifier;
@@ -137,8 +138,8 @@ public class SSLFactory implements ConnectionConfigurator {
 
   private HostnameVerifier getHostnameVerifier(Configuration conf)
       throws GeneralSecurityException, IOException {
-    return getHostnameVerifier(conf.get(SSL_HOSTNAME_VERIFIER_KEY, "DEFAULT").
-        trim().toUpperCase());
+    return getHostnameVerifier(StringUtils.toUpperCase(
+        conf.get(SSL_HOSTNAME_VERIFIER_KEY, "DEFAULT").trim()));
   }
 
   public static HostnameVerifier getHostnameVerifier(String verifier)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLHostnameVerifier.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLHostnameVerifier.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLHostnameVerifier.java
index dd5e67b..b5ef2b2 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLHostnameVerifier.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLHostnameVerifier.java
@@ -52,6 +52,7 @@ import javax.net.ssl.SSLSocket;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  ************************************************************************
@@ -365,7 +366,7 @@ public interface SSLHostnameVerifier extends javax.net.ssl.HostnameVerifier {
             buf.append('<');
             for (int i = 0; i < hosts.length; i++) {
                 String h = hosts[i];
-                h = h != null ? h.trim().toLowerCase() : "";
+                h = h != null ? StringUtils.toLowerCase(h.trim()) : "";
                 hosts[i] = h;
                 if (i > 0) {
                     buf.append('/');
@@ -406,7 +407,7 @@ public interface SSLHostnameVerifier extends javax.net.ssl.HostnameVerifier {
             out:
             for (Iterator<String> it = names.iterator(); it.hasNext();) {
                 // Don't trim the CN, though!
-                final String cn = it.next().toLowerCase();
+                final String cn = StringUtils.toLowerCase(it.next());
                 // Store CN in StringBuffer in case we need to report an error.
                 buf.append(" <");
                 buf.append(cn);
@@ -424,7 +425,8 @@ public interface SSLHostnameVerifier extends javax.net.ssl.HostnameVerifier {
                                      acceptableCountryWildcard(cn);
 
                 for (int i = 0; i < hosts.length; i++) {
-                    final String hostName = hosts[i].trim().toLowerCase();
+                    final String hostName =
+                        StringUtils.toLowerCase(hosts[i].trim());
                     if (doWildcard) {
                         match = hostName.endsWith(cn.substring(1));
                         if (match && strictWithSubDomains) {
@@ -479,7 +481,7 @@ public interface SSLHostnameVerifier extends javax.net.ssl.HostnameVerifier {
         }
 
         public static boolean isLocalhost(String host) {
-            host = host != null ? host.trim().toLowerCase() : "";
+            host = host != null ? StringUtils.toLowerCase(host.trim()) : "";
             if (host.startsWith("::1")) {
                 int x = host.lastIndexOf('%');
                 if (x >= 0) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
index c18b5d3..c498f70 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticationHandler.java
@@ -47,6 +47,7 @@ import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
 import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager;
 import org.apache.hadoop.util.HttpExceptionUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.codehaus.jackson.map.ObjectMapper;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -169,7 +170,7 @@ public abstract class DelegationTokenAuthenticationHandler
     boolean requestContinues = true;
     String op = ServletUtils.getParameter(request,
         KerberosDelegationTokenAuthenticator.OP_PARAM);
-    op = (op != null) ? op.toUpperCase() : null;
+    op = (op != null) ? StringUtils.toUpperCase(op) : null;
     if (DELEGATION_TOKEN_OPS.contains(op) &&
         !request.getMethod().equals("OPTIONS")) {
       KerberosDelegationTokenAuthenticator.DelegationTokenOperation dtOp =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
index d93f7ac..8a3a57f 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java
@@ -27,6 +27,7 @@ import org.apache.hadoop.security.authentication.client.ConnectionConfigurator;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier;
 import org.apache.hadoop.util.HttpExceptionUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.codehaus.jackson.map.ObjectMapper;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -286,7 +287,7 @@ public abstract class DelegationTokenAuthenticator implements Authenticator {
     HttpExceptionUtils.validateResponse(conn, HttpURLConnection.HTTP_OK);
     if (hasResponse) {
       String contentType = conn.getHeaderField(CONTENT_TYPE);
-      contentType = (contentType != null) ? contentType.toLowerCase()
+      contentType = (contentType != null) ? StringUtils.toLowerCase(contentType)
                                           : null;
       if (contentType != null &&
           contentType.contains(APPLICATION_JSON_MIME)) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
index 65d85f7..9d34518 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ComparableVersion.java
@@ -37,7 +37,6 @@ import java.util.Arrays;
 import java.util.Iterator;
 import java.util.List;
 import java.util.ListIterator;
-import java.util.Locale;
 import java.util.Properties;
 import java.util.Stack;
 
@@ -363,7 +362,7 @@ public class ComparableVersion
 
         items = new ListItem();
 
-        version = version.toLowerCase( Locale.ENGLISH );
+        version = StringUtils.toLowerCase(version);
 
         ListItem list = items;
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
index ff8edc3..fc4b0ab 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
@@ -18,6 +18,7 @@
 
 package org.apache.hadoop.util;
 
+import com.google.common.base.Preconditions;
 import java.io.PrintWriter;
 import java.io.StringWriter;
 import java.net.URI;
@@ -901,7 +902,7 @@ public class StringUtils {
    */
   public static String camelize(String s) {
     StringBuilder sb = new StringBuilder();
-    String[] words = split(s.toLowerCase(Locale.US), ESCAPE_CHAR, '_');
+    String[] words = split(StringUtils.toLowerCase(s), ESCAPE_CHAR, '_');
 
     for (String word : words)
       sb.append(org.apache.commons.lang.StringUtils.capitalize(word));
@@ -1032,4 +1033,41 @@ public class StringUtils {
     }
     return null;
   }
+
+  /**
+   * Converts all of the characters in this String to lower case with
+   * Locale.ENGLISH.
+   *
+   * @param str  string to be converted
+   * @return     the str, converted to lowercase.
+   */
+  public static String toLowerCase(String str) {
+    return str.toLowerCase(Locale.ENGLISH);
+  }
+
+  /**
+   * Converts all of the characters in this String to upper case with
+   * Locale.ENGLISH.
+   *
+   * @param str  string to be converted
+   * @return     the str, converted to uppercase.
+   */
+  public static String toUpperCase(String str) {
+    return str.toUpperCase(Locale.ENGLISH);
+  }
+
+  /**
+   * Compare strings in a locale-insensitive way using String#equalsIgnoreCase.
+   *
+   * @param s1  Non-null string to be compared
+   * @param s2  string to be compared; may be null
+   * @return    true if the strings are equal ignoring case
+   */
+  public static boolean equalsIgnoreCase(String s1, String s2) {
+    Preconditions.checkNotNull(s1);
+    // don't check non-null against s2 to make the semantics same as
+    // s1.equals(s2)
+    return s1.equalsIgnoreCase(s2);
+  }
+
 }
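
The motivation for pinning Locale.ENGLISH is the well-known locale-sensitive case-mapping trap; a minimal standalone sketch (not part of this commit) demonstrating it under the Turkish locale:

  import java.util.Locale;

  public class LocaleCaseDemo {
    public static void main(String[] args) {
      // Turkish lower-cases 'I' (U+0049) to dotless 'ı' (U+0131), so
      // case-insensitive matching of ASCII keywords silently breaks.
      Locale turkish = new Locale("tr", "TR");
      System.out.println("TITLE".toLowerCase(turkish));        // prints: tıtle
      System.out.println("TITLE".toLowerCase(Locale.ENGLISH)); // prints: title
      // StringUtils.toLowerCase above pins Locale.ENGLISH, so results
      // are stable regardless of the JVM's default locale.
    }
  }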

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
index e2005be..2ca81e9 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileSystemContractBaseTest.java
@@ -20,7 +20,6 @@ package org.apache.hadoop.fs;
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.util.Locale;
 
 import junit.framework.TestCase;
 
@@ -28,6 +27,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * <p>
@@ -527,7 +527,7 @@ public abstract class FileSystemContractBaseTest extends TestCase {
     }
     String mixedCaseFilename = "/test/UPPER.TXT";
     Path upper = path(mixedCaseFilename);
-    Path lower = path(mixedCaseFilename.toLowerCase(Locale.ENGLISH));
+    Path lower = path(StringUtils.toLowerCase(mixedCaseFilename));
     assertFalse("File exists" + upper, fs.exists(upper));
     assertFalse("File exists" + lower, fs.exists(lower));
     FSDataOutputStream out = fs.create(upper);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
index eb19f48..b443011 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestIPC.java
@@ -1296,7 +1296,7 @@ public class TestIPC {
     
     StringBuilder hexString = new StringBuilder();
     
-    for (String line : hexdump.toUpperCase().split("\n")) {
+    for (String line : StringUtils.toUpperCase(hexdump).split("\n")) {
       hexString.append(line.substring(0, LAST_HEX_COL).replace(" ", ""));
     }
     return StringUtils.hexStringToByte(hexString.toString());

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
index 903990b..f6ab380 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestSaslRPC.java
@@ -181,7 +181,7 @@ public class TestSaslRPC {
     StringBuilder sb = new StringBuilder();
     int i = 0;
     for (QualityOfProtection qop:qops){
-     sb.append(qop.name().toLowerCase());
+     sb.append(org.apache.hadoop.util.StringUtils.toLowerCase(qop.name()));
      if (++i < qops.length){
        sb.append(",");
      }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
index 4616c90..e523e18 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java
@@ -18,13 +18,13 @@ package org.apache.hadoop.security;
 
 import static org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION;
 import static org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod.*;
+
 import static org.junit.Assert.*;
 
 import java.io.IOException;
 import java.net.InetAddress;
 import java.net.InetSocketAddress;
 import java.net.URI;
-import java.util.Locale;
 
 import javax.security.auth.kerberos.KerberosPrincipal;
 
@@ -33,6 +33,7 @@ import org.apache.hadoop.io.Text;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenIdentifier;
+import org.apache.hadoop.util.StringUtils;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.mockito.Mockito;
@@ -103,13 +104,14 @@ public class TestSecurityUtil {
     String realm = "@REALM";
     String principalInConf = service + SecurityUtil.HOSTNAME_PATTERN + realm;
     String hostname = "FooHost";
-    String principal = service + hostname.toLowerCase() + realm;
+    String principal =
+        service + StringUtils.toLowerCase(hostname) + realm;
     verify(principalInConf, hostname, principal);
   }
 
   @Test
   public void testLocalHostNameForNullOrWild() throws Exception {
-    String local = SecurityUtil.getLocalHostName().toLowerCase(Locale.US);
+    String local = StringUtils.toLowerCase(SecurityUtil.getLocalHostName());
     assertEquals("hdfs/" + local + "@REALM",
                  SecurityUtil.getServerPrincipal("hdfs/_HOST@REALM", (String)null));
     assertEquals("hdfs/" + local + "@REALM",
@@ -260,7 +262,7 @@ public class TestSecurityUtil {
     //LOG.info("address:"+addr+" host:"+host+" ip:"+ip+" port:"+port);
 
     SecurityUtil.setTokenServiceUseIp(useIp);
-    String serviceHost = useIp ? ip : host.toLowerCase();
+    String serviceHost = useIp ? ip : StringUtils.toLowerCase(host);
     
     Token<?> token = new Token<TokenIdentifier>();
     Text service = new Text(serviceHost+":"+port);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
index 48b9b99..5b8eac6 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUserGroupInformation.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.security.authentication.util.KerberosName;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.util.Shell;
+import org.apache.hadoop.util.StringUtils;
 import org.junit.*;
 
 import javax.security.auth.Subject;
@@ -213,7 +214,7 @@ public class TestUserGroupInformation {
         userName = userName.substring(sp + 1);
       }
       // user names are case insensitive on Windows. Make consistent
-      userName = userName.toLowerCase();
+      userName = StringUtils.toLowerCase(userName);
     }
     // get the groups
     pp = Runtime.getRuntime().exec(Shell.WINDOWS ?
@@ -233,7 +234,7 @@ public class TestUserGroupInformation {
     String loginUserName = login.getShortUserName();
     if(Shell.WINDOWS) {
       // user names are case insensitive on Windows. Make consistent
-      loginUserName = loginUserName.toLowerCase();
+      loginUserName = StringUtils.toLowerCase(loginUserName);
     }
     assertEquals(userName, loginUserName);
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TimedOutTestsListener.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TimedOutTestsListener.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TimedOutTestsListener.java
index 220ab1d..1bdeddb 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TimedOutTestsListener.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TimedOutTestsListener.java
@@ -29,6 +29,7 @@ import java.text.SimpleDateFormat;
 import java.util.Date;
 import java.util.Map;
 
+import org.apache.hadoop.util.StringUtils;
 import org.junit.runner.notification.Failure;
 import org.junit.runner.notification.RunListener;
 
@@ -93,8 +94,9 @@ public class TimedOutTestsListener extends RunListener {
           thread.getPriority(),
           thread.getId(),
           Thread.State.WAITING.equals(thread.getState()) ? 
-              "in Object.wait()" : thread.getState().name().toLowerCase(),
-          Thread.State.WAITING.equals(thread.getState()) ? 
+              "in Object.wait()" :
+              StringUtils.toLowerCase(thread.getState().name()),
+          Thread.State.WAITING.equals(thread.getState()) ?
               "WAITING (on object monitor)" : thread.getState()));
       for (StackTraceElement stackTraceElement : e.getValue()) {
         dump.append("\n        at ");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
index 0c930d4..515c3e0 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java
@@ -18,10 +18,12 @@
 
 package org.apache.hadoop.util;
 
+import java.util.Locale;
 import static org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix.long2String;
 import static org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix.string2long;
 import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
@@ -412,6 +414,25 @@ public class TestStringUtils extends UnitTestcaseTimeLimit {
     assertTrue(col.containsAll(Arrays.asList(new String[]{"foo","bar","baz","blah"})));
   }
 
+  @Test
+  public void testLowerAndUpperStrings() {
+    Locale defaultLocale = Locale.getDefault();
+    try {
+      Locale.setDefault(new Locale("tr", "TR"));
+      String upperStr = "TITLE";
+      String lowerStr = "title";
+      // Confirming TR locale.
+      assertNotEquals(lowerStr, upperStr.toLowerCase());
+      assertNotEquals(upperStr, lowerStr.toUpperCase());
+      // This should be true regardless of locale.
+      assertEquals(lowerStr, StringUtils.toLowerCase(upperStr));
+      assertEquals(upperStr, StringUtils.toUpperCase(lowerStr));
+      assertTrue(StringUtils.equalsIgnoreCase(upperStr, lowerStr));
+    } finally {
+      Locale.setDefault(defaultLocale);
+    }
+  }
+
   // Benchmark for StringUtils split
   public static void main(String []args) {
     final String TO_SPLIT = "foo,bar,baz,blah,blah";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java
index 2d4e442..8ac6e40 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestWinUtils.java
@@ -382,8 +382,10 @@ public class TestWinUtils {
   private void assertOwners(File file, String expectedUser,
       String expectedGroup) throws IOException {
     String [] args = lsF(file).trim().split("[\\|]");
-    assertEquals(expectedUser.toLowerCase(), args[2].toLowerCase());
-    assertEquals(expectedGroup.toLowerCase(), args[3].toLowerCase());
+    assertEquals(StringUtils.toLowerCase(expectedUser),
+        StringUtils.toLowerCase(args[2]));
+    assertEquals(StringUtils.toLowerCase(expectedGroup),
+        StringUtils.toLowerCase(args[3]));
   }
 
   @Test (timeout = 30000)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
index b617ae5..8b6b46a 100644
--- a/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
+++ b/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/nfs/NfsExports.java
@@ -32,6 +32,7 @@ import org.apache.hadoop.nfs.nfs3.Nfs3Constant;
 import org.apache.hadoop.util.LightWeightCache;
 import org.apache.hadoop.util.LightWeightGSet;
 import org.apache.hadoop.util.LightWeightGSet.LinkedElement;
+import org.apache.hadoop.util.StringUtils;
 
 import com.google.common.base.Preconditions;
 
@@ -359,10 +360,10 @@ public class NfsExports {
     AccessPrivilege privilege = AccessPrivilege.READ_ONLY;
     switch (parts.length) {
     case 1:
-      host = parts[0].toLowerCase().trim();
+      host = StringUtils.toLowerCase(parts[0]).trim();
       break;
     case 2:
-      host = parts[0].toLowerCase().trim();
+      host = StringUtils.toLowerCase(parts[0]).trim();
       String option = parts[1].trim();
       if ("rw".equalsIgnoreCase(option)) {
         privilege = AccessPrivilege.READ_WRITE;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.java
index 836b4ce..81b0b7a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/CheckUploadContentTypeFilter.java
@@ -21,6 +21,7 @@ package org.apache.hadoop.fs.http.server;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.http.client.HttpFSFileSystem;
+import org.apache.hadoop.util.StringUtils;
 
 import javax.servlet.Filter;
 import javax.servlet.FilterChain;
@@ -82,7 +83,8 @@ public class CheckUploadContentTypeFilter implements Filter {
     String method = httpReq.getMethod();
     if (method.equals("PUT") || method.equals("POST")) {
       String op = httpReq.getParameter(HttpFSFileSystem.OP_PARAM);
-      if (op != null && UPLOAD_OPERATIONS.contains(op.toUpperCase())) {
+      if (op != null && UPLOAD_OPERATIONS.contains(
+          StringUtils.toUpperCase(op))) {
         if ("true".equalsIgnoreCase(httpReq.getParameter(HttpFSParametersProvider.DataParam.NAME))) {
           String contentType = httpReq.getContentType();
           contentTypeOK =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
index 633589c..11cdb4d 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.hdfs.protocol.AclException;
 import org.apache.hadoop.io.IOUtils;
 import org.apache.hadoop.lib.service.FileSystemAccess;
+import org.apache.hadoop.util.StringUtils;
 import org.json.simple.JSONArray;
 import org.json.simple.JSONObject;
 
@@ -439,7 +440,8 @@ public class FSOperations {
     @Override
     public JSONObject execute(FileSystem fs) throws IOException {
       boolean result = fs.truncate(path, newLength);
-      return toJSON(HttpFSFileSystem.TRUNCATE_JSON.toLowerCase(), result);
+      return toJSON(
+          StringUtils.toLowerCase(HttpFSFileSystem.TRUNCATE_JSON), result);
     }
 
   }
@@ -568,7 +570,8 @@ public class FSOperations {
     @Override
     public JSONObject execute(FileSystem fs) throws IOException {
       boolean deleted = fs.delete(path, recursive);
-      return toJSON(HttpFSFileSystem.DELETE_JSON.toLowerCase(), deleted);
+      return toJSON(
+          StringUtils.toLowerCase(HttpFSFileSystem.DELETE_JSON), deleted);
     }
 
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
index 271f3d9..5c4204a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
@@ -30,6 +30,7 @@ import org.apache.hadoop.lib.wsrs.Param;
 import org.apache.hadoop.lib.wsrs.ParametersProvider;
 import org.apache.hadoop.lib.wsrs.ShortParam;
 import org.apache.hadoop.lib.wsrs.StringParam;
+import org.apache.hadoop.util.StringUtils;
 
 import javax.ws.rs.ext.Provider;
 import java.util.HashMap;
@@ -168,7 +169,8 @@ public class HttpFSParametersProvider extends ParametersProvider {
      */
     public OperationParam(String operation) {
       super(NAME, HttpFSFileSystem.Operation.class,
-            HttpFSFileSystem.Operation.valueOf(operation.toUpperCase()));
+            HttpFSFileSystem.Operation.valueOf(
+                StringUtils.toUpperCase(operation)));
     }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/Server.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/Server.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/Server.java
index 5c1bb4f..1a0f9ff 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/Server.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/server/Server.java
@@ -22,6 +22,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.lib.util.Check;
 import org.apache.hadoop.lib.util.ConfigurationUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.log4j.LogManager;
 import org.apache.log4j.PropertyConfigurator;
 import org.slf4j.Logger;
@@ -202,7 +203,7 @@ public class Server {
    * @param config server configuration.
    */
   public Server(String name, String homeDir, String configDir, String logDir, String tempDir, Configuration config) {
-    this.name = Check.notEmpty(name, "name").trim().toLowerCase();
+    this.name = StringUtils.toLowerCase(Check.notEmpty(name, "name").trim());
     this.homeDir = Check.notEmpty(homeDir, "homeDir");
     this.configDir = Check.notEmpty(configDir, "configDir");
     this.logDir = Check.notEmpty(logDir, "logDir");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
index ccb15a3..88780cb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/service/hadoop/FileSystemAccessService.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.lib.service.Scheduler;
 import org.apache.hadoop.lib.util.Check;
 import org.apache.hadoop.lib.util.ConfigurationUtils;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.VersionInfo;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -254,7 +255,7 @@ public class FileSystemAccessService extends BaseService implements FileSystemAc
   private Set<String> toLowerCase(Collection<String> collection) {
     Set<String> set = new HashSet<String>();
     for (String value : collection) {
-      set.add(value.toLowerCase());
+      set.add(StringUtils.toLowerCase(value));
     }
     return set;
   }
@@ -300,7 +301,8 @@ public class FileSystemAccessService extends BaseService implements FileSystemAc
 
   protected void validateNamenode(String namenode) throws FileSystemAccessException {
     if (nameNodeWhitelist.size() > 0 && !nameNodeWhitelist.contains("*")) {
-      if (!nameNodeWhitelist.contains(namenode.toLowerCase())) {
+      if (!nameNodeWhitelist.contains(
+          StringUtils.toLowerCase(namenode))) {
         throw new FileSystemAccessException(FileSystemAccessException.ERROR.H05, namenode, "not in whitelist");
       }
     }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumParam.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumParam.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumParam.java
index 8baef67..f95a6e6 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumParam.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumParam.java
@@ -34,7 +34,7 @@ public abstract class EnumParam<E extends Enum<E>> extends Param<E> {
 
   @Override
   protected E parse(String str) throws Exception {
-    return Enum.valueOf(klass, str.toUpperCase());
+    return Enum.valueOf(klass, StringUtils.toUpperCase(str));
   }
 
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumSetParam.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumSetParam.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumSetParam.java
index 8d79b71..ba6e5aa 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumSetParam.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/EnumSetParam.java
@@ -22,6 +22,7 @@ import java.util.EnumSet;
 import java.util.Iterator;
 
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.util.StringUtils;
 
 @InterfaceAudience.Private
 public abstract class EnumSetParam<E extends Enum<E>> extends Param<EnumSet<E>> {
@@ -37,7 +38,7 @@ public abstract class EnumSetParam<E extends Enum<E>> extends Param<EnumSet<E>>
     final EnumSet<E> set = EnumSet.noneOf(klass);
     if (!str.isEmpty()) {
       for (String sub : str.split(",")) {
-        set.add(Enum.valueOf(klass, sub.trim().toUpperCase()));
+        set.add(Enum.valueOf(klass, StringUtils.toUpperCase(sub.trim())));
       }
     }
     return set;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java
index 4703a90..c93f8f2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/wsrs/ParametersProvider.java
@@ -26,6 +26,7 @@ import com.sun.jersey.server.impl.inject.AbstractHttpContextInjectable;
 import com.sun.jersey.spi.inject.Injectable;
 import com.sun.jersey.spi.inject.InjectableProvider;
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.util.StringUtils;
 
 import javax.ws.rs.core.Context;
 import javax.ws.rs.core.MultivaluedMap;
@@ -70,7 +71,7 @@ public class ParametersProvider
     }
     Enum op;
     try {
-      op = Enum.valueOf(enumClass, str.toUpperCase());
+      op = Enum.valueOf(enumClass, StringUtils.toUpperCase(str));
     } catch (IllegalArgumentException ex) {
       throw new IllegalArgumentException(
         MessageFormat.format("Invalid Operation [{0}]", str));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
index 04364ccf..5cafb3c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
@@ -24,6 +24,7 @@ import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.XAttr;
 import org.apache.hadoop.fs.XAttr.NameSpace;
+import org.apache.hadoop.util.StringUtils;
 
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
@@ -57,16 +58,20 @@ public class XAttrHelper {
     }
     
     NameSpace ns;
-    final String prefix = name.substring(0, prefixIndex).toLowerCase();
-    if (prefix.equals(NameSpace.USER.toString().toLowerCase())) {
+    final String prefix = name.substring(0, prefixIndex);
+    if (StringUtils.equalsIgnoreCase(prefix, NameSpace.USER.toString())) {
       ns = NameSpace.USER;
-    } else if (prefix.equals(NameSpace.TRUSTED.toString().toLowerCase())) {
+    } else if (
+        StringUtils.equalsIgnoreCase(prefix, NameSpace.TRUSTED.toString())) {
       ns = NameSpace.TRUSTED;
-    } else if (prefix.equals(NameSpace.SYSTEM.toString().toLowerCase())) {
+    } else if (
+        StringUtils.equalsIgnoreCase(prefix, NameSpace.SYSTEM.toString())) {
       ns = NameSpace.SYSTEM;
-    } else if (prefix.equals(NameSpace.SECURITY.toString().toLowerCase())) {
+    } else if (
+        StringUtils.equalsIgnoreCase(prefix, NameSpace.SECURITY.toString())) {
       ns = NameSpace.SECURITY;
-    } else if (prefix.equals(NameSpace.RAW.toString().toLowerCase())) {
+    } else if (
+        StringUtils.equalsIgnoreCase(prefix, NameSpace.RAW.toString())) {
       ns = NameSpace.RAW;
     } else {
       throw new HadoopIllegalArgumentException("An XAttr name must be " +
@@ -145,7 +150,7 @@ public class XAttrHelper {
     }
     
     String namespace = xAttr.getNameSpace().toString();
-    return namespace.toLowerCase() + "." + xAttr.getName();
+    return StringUtils.toLowerCase(namespace) + "." + xAttr.getName();
   }
 
   /**
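
The refactor above also removes a latent locale sensitivity from namespace
matching: the prefix is now compared with StringUtils.equalsIgnoreCase()
instead of being lower-cased in the default locale first. An illustrative
summary of the parsing contract (not from the patch):

    // "user.myAttr"  -> NameSpace.USER,   attribute name "myAttr"
    // "TRUSTED.flag" -> NameSpace.TRUSTED (prefix match is case-insensitive)
    // "bogus.x"      -> HadoopIllegalArgumentException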

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
index 54da8eb..7cf8a47 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsConstants.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.server.datanode.DataNodeLayoutVersion;
 import org.apache.hadoop.hdfs.server.namenode.NameNodeLayoutVersion;
 import org.apache.hadoop.hdfs.server.namenode.FSDirectory;
+import org.apache.hadoop.util.StringUtils;
 
 /************************************
  * Some handy constants
@@ -98,7 +99,7 @@ public class HdfsConstants {
 
     /** Convert the given String to a RollingUpgradeAction. */
     public static RollingUpgradeAction fromString(String s) {
-      return MAP.get(s.toUpperCase());
+      return MAP.get(StringUtils.toUpperCase(s));
     }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
index 0c03a42..020cb5f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
@@ -26,6 +26,7 @@ import org.apache.hadoop.fs.XAttr;
 import org.apache.hadoop.hdfs.XAttrHelper;
 import org.apache.hadoop.hdfs.protocol.BlockStoragePolicy;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.util.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -131,7 +132,8 @@ public class BlockStoragePolicySuite {
   }
 
   public static String buildXAttrName() {
-    return XAttrNS.toString().toLowerCase() + "." + STORAGE_POLICY_XATTR_NAME;
+    return StringUtils.toLowerCase(XAttrNS.toString())
+        + "." + STORAGE_POLICY_XATTR_NAME;
   }
 
   public static XAttr buildXAttr(byte policyId) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
index ff64524..2d267ce 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
@@ -27,6 +27,7 @@ import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext;
 
 import com.google.common.base.Preconditions;
+import org.apache.hadoop.util.StringUtils;
 
 /************************************
  * Some handy internal HDFS constants
@@ -53,7 +54,7 @@ public final class HdfsServerConstants {
 
     public String getOptionString() {
       return StartupOption.ROLLINGUPGRADE.getName() + " "
-          + name().toLowerCase();
+          + StringUtils.toLowerCase(name());
     }
 
     public boolean matches(StartupOption option) {
@@ -84,7 +85,7 @@ public final class HdfsServerConstants {
     public static String getAllOptionString() {
       final StringBuilder b = new StringBuilder("<");
       for(RollingUpgradeStartupOption opt : VALUES) {
-        b.append(opt.name().toLowerCase()).append("|");
+        b.append(StringUtils.toLowerCase(opt.name())).append("|");
       }
       b.setCharAt(b.length() - 1, '>');
       return b.toString();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
index 7cda670..126086f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/StorageLocation.java
@@ -28,6 +28,7 @@ import java.util.regex.Matcher;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.fs.StorageType;
 import org.apache.hadoop.hdfs.server.common.Util;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * Encapsulates the URI and storage medium that together describe a
@@ -88,7 +89,8 @@ public class StorageLocation {
       String classString = matcher.group(1);
       location = matcher.group(2);
       if (!classString.isEmpty()) {
-        storageType = StorageType.valueOf(classString.toUpperCase());
+        storageType =
+            StorageType.valueOf(StringUtils.toUpperCase(classString));
       }
     }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
index c41a46a..c768690 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogOp.java
@@ -123,6 +123,7 @@ import org.apache.hadoop.ipc.ClientId;
 import org.apache.hadoop.ipc.RpcConstants;
 import org.apache.hadoop.security.token.delegation.DelegationKey;
 import org.apache.hadoop.util.DataChecksum;
+import org.apache.hadoop.util.StringUtils;
 import org.xml.sax.ContentHandler;
 import org.xml.sax.SAXException;
 import org.xml.sax.helpers.AttributesImpl;
@@ -4348,7 +4349,7 @@ public abstract class FSEditLogOp {
 
     public RollingUpgradeOp(FSEditLogOpCodes code, String name) {
       super(code);
-      this.name = name.toUpperCase();
+      this.name = StringUtils.toUpperCase(name);
     }
 
     static RollingUpgradeOp getStartInstance(OpInstanceCache cache) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/QuotaByStorageTypeEntry.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/QuotaByStorageTypeEntry.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/QuotaByStorageTypeEntry.java
index 711d0f8..39ce2dc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/QuotaByStorageTypeEntry.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/QuotaByStorageTypeEntry.java
@@ -19,6 +19,7 @@ package org.apache.hadoop.hdfs.server.namenode;
 
 import com.google.common.base.Objects;
 import org.apache.hadoop.fs.StorageType;
+import org.apache.hadoop.util.StringUtils;
 
 public class QuotaByStorageTypeEntry {
    private StorageType type;
@@ -53,7 +54,7 @@ public class QuotaByStorageTypeEntry {
    public String toString() {
      StringBuilder sb = new StringBuilder();
      assert (type != null);
-     sb.append(type.toString().toLowerCase());
+     sb.append(StringUtils.toLowerCase(type.toString()));
      sb.append(':');
      sb.append(quota);
      return sb.toString();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
index 83e6426..ec7e0c9 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SecondaryNameNode.java
@@ -587,7 +587,7 @@ public class SecondaryNameNode implements Runnable,
       return 0;
     }
     
-    String cmd = opts.getCommand().toString().toLowerCase();
+    String cmd = StringUtils.toLowerCase(opts.getCommand().toString());
     
     int exitCode = 0;
     try {


[39/43] hadoop git commit: MAPREDUCE-5583. Ability to limit running map and reduce tasks. Contributed by Jason Lowe.

Posted by zj...@apache.org.
MAPREDUCE-5583. Ability to limit running map and reduce tasks. Contributed by Jason Lowe.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/4228de94
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/4228de94
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/4228de94

Branch: refs/heads/YARN-2928
Commit: 4228de94028f1e10ca59ce23e963e488fe566909
Parents: 9ae7f9e
Author: Junping Du <ju...@apache.org>
Authored: Tue Mar 3 02:01:04 2015 -0800
Committer: Junping Du <ju...@apache.org>
Committed: Tue Mar 3 02:02:28 2015 -0800

----------------------------------------------------------------------
 hadoop-mapreduce-project/CHANGES.txt            |   3 +
 .../v2/app/rm/RMContainerAllocator.java         |  65 +++++-
 .../v2/app/rm/RMContainerRequestor.java         |  74 ++++++-
 .../v2/app/rm/TestRMContainerAllocator.java     | 214 +++++++++++++++++++
 .../apache/hadoop/mapreduce/MRJobConfig.java    |   8 +
 .../src/main/resources/mapred-default.xml       |  16 ++
 6 files changed, 363 insertions(+), 17 deletions(-)
----------------------------------------------------------------------
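
The new caps are read from two job configuration keys
(MRJobConfig.JOB_RUNNING_MAP_LIMIT and MRJobConfig.JOB_RUNNING_REDUCE_LIMIT,
with defaults documented in the mapred-default.xml change below). A minimal
sketch of a job opting in, assuming a non-positive value (the default) leaves
the job uncapped, as the canAssignMaps()/canAssignReduces() checks in the
allocator suggest:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.MRJobConfig;

    public class CappedJobSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Run at most 10 map tasks and 2 reduce tasks concurrently for this job.
        conf.setInt(MRJobConfig.JOB_RUNNING_MAP_LIMIT, 10);
        conf.setInt(MRJobConfig.JOB_RUNNING_REDUCE_LIMIT, 2);
        Job job = Job.getInstance(conf, "concurrency-capped-job");
        // ... configure mapper/reducer/input/output as usual, then job.submit().
      }
    }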


http://git-wip-us.apache.org/repos/asf/hadoop/blob/4228de94/hadoop-mapreduce-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt
index 5524b14..7a2eff3 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -258,6 +258,9 @@ Release 2.7.0 - UNRELEASED
 
     MAPREDUCE-6228. Add truncate operation to SLive. (Plamen Jeliazkov via shv)
 
+    MAPREDUCE-5583. Ability to limit running map and reduce tasks. 
+    (Jason Lowe via junping_du)
+
   IMPROVEMENTS
 
     MAPREDUCE-6149. Document override log4j.properties in MR job.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4228de94/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
index 1acfeec..efea674 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java
@@ -99,9 +99,9 @@ public class RMContainerAllocator extends RMContainerRequestor
   public static final 
   float DEFAULT_COMPLETED_MAPS_PERCENT_FOR_REDUCE_SLOWSTART = 0.05f;
   
-  private static final Priority PRIORITY_FAST_FAIL_MAP;
-  private static final Priority PRIORITY_REDUCE;
-  private static final Priority PRIORITY_MAP;
+  static final Priority PRIORITY_FAST_FAIL_MAP;
+  static final Priority PRIORITY_REDUCE;
+  static final Priority PRIORITY_MAP;
 
   @VisibleForTesting
   public static final String RAMPDOWN_DIAGNOSTIC = "Reducer preempted "
@@ -166,6 +166,8 @@ public class RMContainerAllocator extends RMContainerRequestor
    */
   private long allocationDelayThresholdMs = 0;
   private float reduceSlowStart = 0;
+  private int maxRunningMaps = 0;
+  private int maxRunningReduces = 0;
   private long retryInterval;
   private long retrystartTime;
   private Clock clock;
@@ -201,6 +203,10 @@ public class RMContainerAllocator extends RMContainerRequestor
     allocationDelayThresholdMs = conf.getInt(
         MRJobConfig.MR_JOB_REDUCER_PREEMPT_DELAY_SEC,
         MRJobConfig.DEFAULT_MR_JOB_REDUCER_PREEMPT_DELAY_SEC) * 1000;//sec -> ms
+    maxRunningMaps = conf.getInt(MRJobConfig.JOB_RUNNING_MAP_LIMIT,
+        MRJobConfig.DEFAULT_JOB_RUNNING_MAP_LIMIT);
+    maxRunningReduces = conf.getInt(MRJobConfig.JOB_RUNNING_REDUCE_LIMIT,
+        MRJobConfig.DEFAULT_JOB_RUNNING_REDUCE_LIMIT);
     RackResolver.init(conf);
     retryInterval = getConfig().getLong(MRJobConfig.MR_AM_TO_RM_WAIT_INTERVAL_MS,
                                 MRJobConfig.DEFAULT_MR_AM_TO_RM_WAIT_INTERVAL_MS);
@@ -664,6 +670,8 @@ public class RMContainerAllocator extends RMContainerRequestor
   
   @SuppressWarnings("unchecked")
   private List<Container> getResources() throws Exception {
+    applyConcurrentTaskLimits();
+
     // will be null the first time
     Resource headRoom =
         getAvailableResources() == null ? Resources.none() :
@@ -778,6 +786,43 @@ public class RMContainerAllocator extends RMContainerRequestor
     return newContainers;
   }
 
+  private void applyConcurrentTaskLimits() {
+    int numScheduledMaps = scheduledRequests.maps.size();
+    if (maxRunningMaps > 0 && numScheduledMaps > 0) {
+      int maxRequestedMaps = Math.max(0,
+          maxRunningMaps - assignedRequests.maps.size());
+      int numScheduledFailMaps = scheduledRequests.earlierFailedMaps.size();
+      int failedMapRequestLimit = Math.min(maxRequestedMaps,
+          numScheduledFailMaps);
+      int normalMapRequestLimit = Math.min(
+          maxRequestedMaps - failedMapRequestLimit,
+          numScheduledMaps - numScheduledFailMaps);
+      setRequestLimit(PRIORITY_FAST_FAIL_MAP, mapResourceRequest,
+          failedMapRequestLimit);
+      setRequestLimit(PRIORITY_MAP, mapResourceRequest, normalMapRequestLimit);
+    }
+
+    int numScheduledReduces = scheduledRequests.reduces.size();
+    if (maxRunningReduces > 0 && numScheduledReduces > 0) {
+      int maxRequestedReduces = Math.max(0,
+          maxRunningReduces - assignedRequests.reduces.size());
+      int reduceRequestLimit = Math.min(maxRequestedReduces,
+          numScheduledReduces);
+      setRequestLimit(PRIORITY_REDUCE, reduceResourceRequest,
+          reduceRequestLimit);
+    }
+  }
+
+  private boolean canAssignMaps() {
+    return (maxRunningMaps <= 0
+        || assignedRequests.maps.size() < maxRunningMaps);
+  }
+
+  private boolean canAssignReduces() {
+    return (maxRunningReduces <= 0
+        || assignedRequests.reduces.size() < maxRunningReduces);
+  }
+
   private void updateAMRMToken(Token token) throws IOException {
     org.apache.hadoop.security.token.Token<AMRMTokenIdentifier> amrmToken =
         new org.apache.hadoop.security.token.Token<AMRMTokenIdentifier>(token
@@ -1046,8 +1091,7 @@ public class RMContainerAllocator extends RMContainerRequestor
       it = allocatedContainers.iterator();
       while (it.hasNext()) {
         Container allocated = it.next();
-        LOG.info("Releasing unassigned and invalid container " 
-            + allocated + ". RM may have assignment issues");
+        LOG.info("Releasing unassigned container " + allocated);
         containerNotAssigned(allocated);
       }
     }
@@ -1150,7 +1194,8 @@ public class RMContainerAllocator extends RMContainerRequestor
     private ContainerRequest assignToFailedMap(Container allocated) {
       //try to assign to earlierFailedMaps if present
       ContainerRequest assigned = null;
-      while (assigned == null && earlierFailedMaps.size() > 0) {
+      while (assigned == null && earlierFailedMaps.size() > 0
+          && canAssignMaps()) {
         TaskAttemptId tId = earlierFailedMaps.removeFirst();      
         if (maps.containsKey(tId)) {
           assigned = maps.remove(tId);
@@ -1168,7 +1213,7 @@ public class RMContainerAllocator extends RMContainerRequestor
     private ContainerRequest assignToReduce(Container allocated) {
       ContainerRequest assigned = null;
       //try to assign to reduces if present
-      if (assigned == null && reduces.size() > 0) {
+      if (assigned == null && reduces.size() > 0 && canAssignReduces()) {
         TaskAttemptId tId = reduces.keySet().iterator().next();
         assigned = reduces.remove(tId);
         LOG.info("Assigned to reduce");
@@ -1180,7 +1225,7 @@ public class RMContainerAllocator extends RMContainerRequestor
     private void assignMapsWithLocality(List<Container> allocatedContainers) {
       // try to assign to all nodes first to match node local
       Iterator<Container> it = allocatedContainers.iterator();
-      while(it.hasNext() && maps.size() > 0){
+      while(it.hasNext() && maps.size() > 0 && canAssignMaps()){
         Container allocated = it.next();        
         Priority priority = allocated.getPriority();
         assert PRIORITY_MAP.equals(priority);
@@ -1212,7 +1257,7 @@ public class RMContainerAllocator extends RMContainerRequestor
       
       // try to match all rack local
       it = allocatedContainers.iterator();
-      while(it.hasNext() && maps.size() > 0){
+      while(it.hasNext() && maps.size() > 0 && canAssignMaps()){
         Container allocated = it.next();
         Priority priority = allocated.getPriority();
         assert PRIORITY_MAP.equals(priority);
@@ -1242,7 +1287,7 @@ public class RMContainerAllocator extends RMContainerRequestor
       
       // assign remaining
       it = allocatedContainers.iterator();
-      while(it.hasNext() && maps.size() > 0){
+      while(it.hasNext() && maps.size() > 0 && canAssignMaps()){
         Container allocated = it.next();
         Priority priority = allocated.getPriority();
         assert PRIORITY_MAP.equals(priority);
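
To make the arithmetic in applyConcurrentTaskLimits() concrete with
hypothetical numbers: with maxRunningMaps = 10, 4 maps already assigned, and 8
maps scheduled of which 2 are earlier-failed retries, maxRequestedMaps =
max(0, 10 - 4) = 6; the fast-fail map priority is then capped at min(6, 2) = 2
and the normal map priority at min(6 - 2, 8 - 2) = 4, so at most 6 additional
map containers are requested until completed containers free up headroom and
the limits are recomputed.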

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4228de94/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
index bb9ad02..1666864 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerRequestor.java
@@ -22,6 +22,7 @@ import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.HashMap;
+import java.util.Iterator;
 import java.util.Map;
 import java.util.Set;
 import java.util.TreeMap;
@@ -44,6 +45,7 @@ import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.api.records.ResourceBlacklistRequest;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
+import org.apache.hadoop.yarn.api.records.ResourceRequest.ResourceRequestComparator;
 import org.apache.hadoop.yarn.exceptions.YarnException;
 import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
 import org.apache.hadoop.yarn.factories.RecordFactory;
@@ -58,6 +60,8 @@ import com.google.common.annotations.VisibleForTesting;
 public abstract class RMContainerRequestor extends RMCommunicator {
   
   private static final Log LOG = LogFactory.getLog(RMContainerRequestor.class);
+  private static final ResourceRequestComparator RESOURCE_REQUEST_COMPARATOR =
+      new ResourceRequestComparator();
 
   protected int lastResponseID;
   private Resource availableResources;
@@ -77,12 +81,18 @@ public abstract class RMContainerRequestor extends RMCommunicator {
   // use custom comparator to make sure ResourceRequest objects differing only in 
   // numContainers dont end up as duplicates
   private final Set<ResourceRequest> ask = new TreeSet<ResourceRequest>(
-      new org.apache.hadoop.yarn.api.records.ResourceRequest.ResourceRequestComparator());
+      RESOURCE_REQUEST_COMPARATOR);
   private final Set<ContainerId> release = new TreeSet<ContainerId>();
   // pendingRelease holds history or release requests.request is removed only if
   // RM sends completedContainer.
   // How it different from release? --> release is for per allocate() request.
   protected Set<ContainerId> pendingRelease = new TreeSet<ContainerId>();
+
+  private final Map<ResourceRequest,ResourceRequest> requestLimits =
+      new TreeMap<ResourceRequest,ResourceRequest>(RESOURCE_REQUEST_COMPARATOR);
+  private final Set<ResourceRequest> requestLimitsToUpdate =
+      new TreeSet<ResourceRequest>(RESOURCE_REQUEST_COMPARATOR);
+
   private boolean nodeBlacklistingEnabled;
   private int blacklistDisablePercent;
   private AtomicBoolean ignoreBlacklisting = new AtomicBoolean(false);
@@ -178,6 +188,7 @@ public abstract class RMContainerRequestor extends RMCommunicator {
 
   protected AllocateResponse makeRemoteRequest() throws YarnException,
       IOException {
+    applyRequestLimits();
     ResourceBlacklistRequest blacklistRequest =
         ResourceBlacklistRequest.newInstance(new ArrayList<String>(blacklistAdditions),
             new ArrayList<String>(blacklistRemovals));
@@ -190,13 +201,14 @@ public abstract class RMContainerRequestor extends RMCommunicator {
     availableResources = allocateResponse.getAvailableResources();
     lastClusterNmCount = clusterNmCount;
     clusterNmCount = allocateResponse.getNumClusterNodes();
+    int numCompletedContainers =
+        allocateResponse.getCompletedContainersStatuses().size();
 
     if (ask.size() > 0 || release.size() > 0) {
       LOG.info("getResources() for " + applicationId + ":" + " ask="
           + ask.size() + " release= " + release.size() + " newContainers="
           + allocateResponse.getAllocatedContainers().size()
-          + " finishedContainers="
-          + allocateResponse.getCompletedContainersStatuses().size()
+          + " finishedContainers=" + numCompletedContainers
           + " resourcelimit=" + availableResources + " knownNMs="
           + clusterNmCount);
     }
@@ -204,6 +216,12 @@ public abstract class RMContainerRequestor extends RMCommunicator {
     ask.clear();
     release.clear();
 
+    if (numCompletedContainers > 0) {
+      // re-send limited requests when a container completes to trigger asking
+      // for more containers
+      requestLimitsToUpdate.addAll(requestLimits.keySet());
+    }
+
     if (blacklistAdditions.size() > 0 || blacklistRemovals.size() > 0) {
       LOG.info("Update the blacklist for " + applicationId +
           ": blacklistAdditions=" + blacklistAdditions.size() +
@@ -214,6 +232,36 @@ public abstract class RMContainerRequestor extends RMCommunicator {
     return allocateResponse;
   }
 
+  private void applyRequestLimits() {
+    Iterator<ResourceRequest> iter = requestLimits.values().iterator();
+    while (iter.hasNext()) {
+      ResourceRequest reqLimit = iter.next();
+      int limit = reqLimit.getNumContainers();
+      Map<String, Map<Resource, ResourceRequest>> remoteRequests =
+          remoteRequestsTable.get(reqLimit.getPriority());
+      Map<Resource, ResourceRequest> reqMap = (remoteRequests != null)
+          ? remoteRequests.get(ResourceRequest.ANY) : null;
+      ResourceRequest req = (reqMap != null)
+          ? reqMap.get(reqLimit.getCapability()) : null;
+      if (req == null) {
+        continue;
+      }
+      // update an existing ask or send a new one if updating
+      if (ask.remove(req) || requestLimitsToUpdate.contains(req)) {
+        ResourceRequest newReq = req.getNumContainers() > limit
+            ? reqLimit : req;
+        ask.add(newReq);
+        LOG.info("Applying ask limit of " + newReq.getNumContainers()
+            + " for priority:" + reqLimit.getPriority()
+            + " and capability:" + reqLimit.getCapability());
+      }
+      if (limit == Integer.MAX_VALUE) {
+        iter.remove();
+      }
+    }
+    requestLimitsToUpdate.clear();
+  }
+
   protected void addOutstandingRequestOnResync() {
     for (Map<String, Map<Resource, ResourceRequest>> rr : remoteRequestsTable
         .values()) {
@@ -229,6 +277,7 @@ public abstract class RMContainerRequestor extends RMCommunicator {
     if (!pendingRelease.isEmpty()) {
       release.addAll(pendingRelease);
     }
+    requestLimitsToUpdate.addAll(requestLimits.keySet());
   }
 
   // May be incorrect if there's multiple NodeManagers running on a single host.
@@ -459,10 +508,8 @@ public abstract class RMContainerRequestor extends RMCommunicator {
   private void addResourceRequestToAsk(ResourceRequest remoteRequest) {
     // because objects inside the resource map can be deleted ask can end up 
     // containing an object that matches new resource object but with different
-    // numContainers. So exisintg values must be replaced explicitly
-    if(ask.contains(remoteRequest)) {
-      ask.remove(remoteRequest);
-    }
+    // numContainers. So existing values must be replaced explicitly
+    ask.remove(remoteRequest);
     ask.add(remoteRequest);    
   }
 
@@ -490,6 +537,19 @@ public abstract class RMContainerRequestor extends RMCommunicator {
     return newReq;
   }
   
+  protected void setRequestLimit(Priority priority, Resource capability,
+      int limit) {
+    if (limit < 0) {
+      limit = Integer.MAX_VALUE;
+    }
+    ResourceRequest newReqLimit = ResourceRequest.newInstance(priority,
+        ResourceRequest.ANY, capability, limit);
+    ResourceRequest oldReqLimit = requestLimits.put(newReqLimit, newReqLimit);
+    if (oldReqLimit == null || oldReqLimit.getNumContainers() < limit) {
+      requestLimitsToUpdate.add(newReqLimit);
+    }
+  }
+
   public Set<String> getBlacklistedNodes() {
     return blacklistedNodes;
   }
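
Note the sentinel handling in setRequestLimit() above: a negative limit is
normalized to Integer.MAX_VALUE, and applyRequestLimits() removes MAX_VALUE
entries after sending them once, so lifting a cap is a one-shot update rather
than a permanently tracked limit. A hypothetical call sequence, using the
priority and capability fields defined in the allocator:

    setRequestLimit(PRIORITY_MAP, mapResourceRequest, 4);   // cap outstanding ANY map asks at 4
    setRequestLimit(PRIORITY_MAP, mapResourceRequest, -1);  // lift the cap; applied once, then dropped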

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4228de94/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
index 4759693..eca1a4d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/rm/TestRMContainerAllocator.java
@@ -31,9 +31,11 @@ import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
 import java.io.IOException;
+import java.nio.ByteBuffer;
 import java.security.PrivilegedExceptionAction;
 import java.util.ArrayList;
 import java.util.Arrays;
+import java.util.Collections;
 import java.util.EnumSet;
 import java.util.HashMap;
 import java.util.HashSet;
@@ -81,7 +83,13 @@ import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.test.GenericTestUtils;
 import org.apache.hadoop.util.Time;
 import org.apache.hadoop.yarn.api.ApplicationMasterProtocol;
+import org.apache.hadoop.yarn.api.protocolrecords.AllocateRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.AllocateResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterRequest;
+import org.apache.hadoop.yarn.api.protocolrecords.FinishApplicationMasterResponse;
+import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterRequest;
+import org.apache.hadoop.yarn.api.protocolrecords.RegisterApplicationMasterResponse;
+import org.apache.hadoop.yarn.api.records.ApplicationAccessType;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.Container;
@@ -89,6 +97,10 @@ import org.apache.hadoop.yarn.api.records.ContainerExitStatus;
 import org.apache.hadoop.yarn.api.records.ContainerId;
 import org.apache.hadoop.yarn.api.records.ContainerState;
 import org.apache.hadoop.yarn.api.records.ContainerStatus;
+import org.apache.hadoop.yarn.api.records.NMToken;
+import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.NodeReport;
+import org.apache.hadoop.yarn.api.records.Priority;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.api.records.ResourceRequest;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
@@ -2387,6 +2399,208 @@ public class TestRMContainerAllocator {
         new Text(rmAddr), ugiToken.getService());
   }
 
+  @Test
+  public void testConcurrentTaskLimits() throws Exception {
+    final int MAP_LIMIT = 3;
+    final int REDUCE_LIMIT = 1;
+    LOG.info("Running testConcurrentTaskLimits");
+    Configuration conf = new Configuration();
+    conf.setInt(MRJobConfig.JOB_RUNNING_MAP_LIMIT, MAP_LIMIT);
+    conf.setInt(MRJobConfig.JOB_RUNNING_REDUCE_LIMIT, REDUCE_LIMIT);
+    conf.setFloat(MRJobConfig.COMPLETED_MAPS_FOR_REDUCE_SLOWSTART, 1.0f);
+    ApplicationId appId = ApplicationId.newInstance(1, 1);
+    ApplicationAttemptId appAttemptId = ApplicationAttemptId.newInstance(
+        appId, 1);
+    JobId jobId = MRBuilderUtils.newJobId(appAttemptId.getApplicationId(), 0);
+    Job mockJob = mock(Job.class);
+    when(mockJob.getReport()).thenReturn(
+        MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0,
+            0, 0, 0, 0, 0, 0, "jobfile", null, false, ""));
+    final MockScheduler mockScheduler = new MockScheduler(appAttemptId);
+    MyContainerAllocator allocator = new MyContainerAllocator(null, conf,
+        appAttemptId, mockJob) {
+          @Override
+          protected void register() {
+          }
+
+          @Override
+          protected ApplicationMasterProtocol createSchedulerProxy() {
+            return mockScheduler;
+          }
+    };
+
+    // create some map requests
+    ContainerRequestEvent[] reqMapEvents = new ContainerRequestEvent[5];
+    for (int i = 0; i < reqMapEvents.length; ++i) {
+      reqMapEvents[i] = createReq(jobId, i, 1024, new String[] { "h" + i });
+    }
+    allocator.sendRequests(Arrays.asList(reqMapEvents));
+
+    // create some reduce requests
+    ContainerRequestEvent[] reqReduceEvents = new ContainerRequestEvent[2];
+    for (int i = 0; i < reqReduceEvents.length; ++i) {
+      reqReduceEvents[i] = createReq(jobId, i, 1024, new String[] {},
+          false, true);
+    }
+    allocator.sendRequests(Arrays.asList(reqReduceEvents));
+    allocator.schedule();
+
+    // verify all of the host-specific asks were sent plus one for the
+    // default rack and one for the ANY request
+    Assert.assertEquals(reqMapEvents.length + 2, mockScheduler.lastAsk.size());
+
+    // verify AM is only asking for the map limit overall
+    Assert.assertEquals(MAP_LIMIT, mockScheduler.lastAnyAskMap);
+
+    // assign a map task and verify we do not ask for any more maps
+    ContainerId cid0 = mockScheduler.assignContainer("h0", false);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(2, mockScheduler.lastAnyAskMap);
+
+    // complete the map task and verify that we ask for one more
+    mockScheduler.completeContainer(cid0);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(3, mockScheduler.lastAnyAskMap);
+
+    // assign three more maps and verify we ask for no more maps
+    ContainerId cid1 = mockScheduler.assignContainer("h1", false);
+    ContainerId cid2 = mockScheduler.assignContainer("h2", false);
+    ContainerId cid3 = mockScheduler.assignContainer("h3", false);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(0, mockScheduler.lastAnyAskMap);
+
+    // complete two containers and verify we only asked for one more
+    // since at that point all maps should be scheduled/completed
+    mockScheduler.completeContainer(cid2);
+    mockScheduler.completeContainer(cid3);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(1, mockScheduler.lastAnyAskMap);
+
+    // allocate the last container and complete the first one
+    // and verify there are no more map asks.
+    mockScheduler.completeContainer(cid1);
+    ContainerId cid4 = mockScheduler.assignContainer("h4", false);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(0, mockScheduler.lastAnyAskMap);
+
+    // complete the last map
+    mockScheduler.completeContainer(cid4);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(0, mockScheduler.lastAnyAskMap);
+
+    // verify only reduce limit being requested
+    Assert.assertEquals(REDUCE_LIMIT, mockScheduler.lastAnyAskReduce);
+
+    // assign a reducer and verify ask goes to zero
+    cid0 = mockScheduler.assignContainer("h0", true);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(0, mockScheduler.lastAnyAskReduce);
+
+    // complete the reducer and verify we ask for another
+    mockScheduler.completeContainer(cid0);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(1, mockScheduler.lastAnyAskReduce);
+
+    // assign a reducer and verify ask goes to zero
+    cid0 = mockScheduler.assignContainer("h0", true);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(0, mockScheduler.lastAnyAskReduce);
+
+    // complete the reducer and verify no more reducers
+    mockScheduler.completeContainer(cid0);
+    allocator.schedule();
+    allocator.schedule();
+    Assert.assertEquals(0, mockScheduler.lastAnyAskReduce);
+    allocator.close();
+  }
+
+  private static class MockScheduler implements ApplicationMasterProtocol {
+    ApplicationAttemptId attemptId;
+    long nextContainerId = 10;
+    List<ResourceRequest> lastAsk = null;
+    int lastAnyAskMap = 0;
+    int lastAnyAskReduce = 0;
+    List<ContainerStatus> containersToComplete =
+        new ArrayList<ContainerStatus>();
+    List<Container> containersToAllocate = new ArrayList<Container>();
+
+    public MockScheduler(ApplicationAttemptId attemptId) {
+      this.attemptId = attemptId;
+    }
+
+    @Override
+    public RegisterApplicationMasterResponse registerApplicationMaster(
+        RegisterApplicationMasterRequest request) throws YarnException,
+        IOException {
+      return RegisterApplicationMasterResponse.newInstance(
+          Resource.newInstance(512, 1),
+          Resource.newInstance(512000, 1024),
+          Collections.<ApplicationAccessType,String>emptyMap(),
+          ByteBuffer.wrap("fake_key".getBytes()),
+          Collections.<Container>emptyList(),
+          "default",
+          Collections.<NMToken>emptyList());
+    }
+
+    @Override
+    public FinishApplicationMasterResponse finishApplicationMaster(
+        FinishApplicationMasterRequest request) throws YarnException,
+        IOException {
+      return FinishApplicationMasterResponse.newInstance(false);
+    }
+
+    @Override
+    public AllocateResponse allocate(AllocateRequest request)
+        throws YarnException, IOException {
+      lastAsk = request.getAskList();
+      for (ResourceRequest req : lastAsk) {
+        if (ResourceRequest.ANY.equals(req.getResourceName())) {
+          Priority priority = req.getPriority();
+          if (priority.equals(RMContainerAllocator.PRIORITY_MAP)) {
+            lastAnyAskMap = req.getNumContainers();
+          } else if (priority.equals(RMContainerAllocator.PRIORITY_REDUCE)){
+            lastAnyAskReduce = req.getNumContainers();
+          }
+        }
+      }
+      AllocateResponse response =  AllocateResponse.newInstance(
+          request.getResponseId(),
+          containersToComplete, containersToAllocate,
+          Collections.<NodeReport>emptyList(),
+          Resource.newInstance(512000, 1024), null, 10, null,
+          Collections.<NMToken>emptyList());
+      containersToComplete.clear();
+      containersToAllocate.clear();
+      return response;
+    }
+
+    public ContainerId assignContainer(String nodeName, boolean isReduce) {
+      ContainerId containerId =
+          ContainerId.newContainerId(attemptId, nextContainerId++);
+      Priority priority = isReduce ? RMContainerAllocator.PRIORITY_REDUCE
+          : RMContainerAllocator.PRIORITY_MAP;
+      Container container = Container.newInstance(containerId,
+          NodeId.newInstance(nodeName, 1234), nodeName + ":5678",
+        Resource.newInstance(1024, 1), priority, null);
+      containersToAllocate.add(container);
+      return containerId;
+    }
+
+    public void completeContainer(ContainerId containerId) {
+      containersToComplete.add(ContainerStatus.newInstance(containerId,
+          ContainerState.COMPLETE, "", 0));
+    }
+  }
+
   public static void main(String[] args) throws Exception {
     TestRMContainerAllocator t = new TestRMContainerAllocator();
     t.testSimple();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4228de94/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
index d06b075..5527103 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/MRJobConfig.java
@@ -373,6 +373,14 @@ public interface MRJobConfig {
 
   public static final String DEFAULT_JOB_ACL_MODIFY_JOB = " ";
   
+  public static final String JOB_RUNNING_MAP_LIMIT =
+      "mapreduce.job.running.map.limit";
+  public static final int DEFAULT_JOB_RUNNING_MAP_LIMIT = 0;
+
+  public static final String JOB_RUNNING_REDUCE_LIMIT =
+      "mapreduce.job.running.reduce.limit";
+  public static final int DEFAULT_JOB_RUNNING_REDUCE_LIMIT = 0;
+
   /* config for tracking the local file where all the credentials for the job
    * credentials.
    */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/4228de94/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
index 6e80679..d864756 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
@@ -83,6 +83,22 @@
 </property>
 
 <property>
+  <name>mapreduce.job.running.map.limit</name>
+  <value>0</value>
+  <description>The maximum number of simultaneous map tasks per job.
+  There is no limit if this value is 0 or negative.
+  </description>
+</property>
+
+<property>
+  <name>mapreduce.job.running.reduce.limit</name>
+  <value>0</value>
+  <description>The maximum number of simultaneous reduce tasks per job.
+  There is no limit if this value is 0 or negative.
+  </description>
+</property>
+
+<property>
   <name>mapreduce.job.reducer.preempt.delay.sec</name>
   <value>0</value>
   <description>The threshold in terms of seconds after which an unsatisfied mapper 
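
For context, a sketch of how a job might opt into these two new limits from client code (the property names come from this patch; the job setup around them is hypothetical):

    import org.apache.hadoop.conf.Configuration;

    public class RunningTaskLimitExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Allow at most 3 map tasks and 1 reduce task to run at once;
        // 0 or a negative value keeps the default unlimited behavior.
        conf.setInt("mapreduce.job.running.map.limit", 3);
        conf.setInt("mapreduce.job.running.reduce.limit", 1);
        // ... build and submit the job with this configuration as usual ...
      }
    }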


[31/43] hadoop git commit: HADOOP-11605. FilterFileSystem#create with ChecksumOpt should propagate it to wrapped FS. (gera)

Posted by zj...@apache.org.
HADOOP-11605. FilterFileSystem#create with ChecksumOpt should propagate it to wrapped FS. (gera)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b18d3830
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b18d3830
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b18d3830

Branch: refs/heads/YARN-2928
Commit: b18d3830aca00f44d31e42839578f24eecedf2c8
Parents: 431e7d8
Author: Gera Shegalov <ge...@apache.org>
Authored: Tue Feb 17 00:24:37 2015 -0800
Committer: Gera Shegalov <ge...@apache.org>
Committed: Mon Mar 2 18:09:23 2015 -0800

----------------------------------------------------------------------
 hadoop-common-project/hadoop-common/CHANGES.txt                   | 3 +++
 .../src/main/java/org/apache/hadoop/fs/FilterFileSystem.java      | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b18d3830/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index b8ed286..ebe23c7 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1031,6 +1031,9 @@ Release 2.7.0 - UNRELEASED
     HADOOP-11615. Update ServiceLevelAuth.md for YARN.
     (Brahma Reddy Battula via aajisaka)
 
+    HADOOP-11605. FilterFileSystem#create with ChecksumOpt should propagate it
+    to wrapped FS. (gera)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b18d3830/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
index d4080ad..d14a272 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFileSystem.java
@@ -190,7 +190,7 @@ public class FilterFileSystem extends FileSystem {
         Progressable progress,
         ChecksumOpt checksumOpt) throws IOException {
     return fs.create(f, permission,
-      flags, bufferSize, replication, blockSize, progress);
+      flags, bufferSize, replication, blockSize, progress, checksumOpt);
   }
   
   @Override
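
The one-line change above matters to any caller that passes a ChecksumOpt through a FilterFileSystem subclass; previously the option was silently dropped and the wrapped file system fell back to its defaults. A sketch of such a call site (path, replication, and checksum values are made up for the example):

    import java.io.IOException;
    import java.util.EnumSet;
    import org.apache.hadoop.fs.CreateFlag;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Options.ChecksumOpt;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;
    import org.apache.hadoop.util.DataChecksum;

    public class ChecksumOptExample {
      // With the fix, a filtered FileSystem forwards this CRC32C request
      // to the file system it wraps instead of ignoring it.
      static void createWithChecksum(FileSystem fs) throws IOException {
        ChecksumOpt opt = new ChecksumOpt(DataChecksum.Type.CRC32C, 512);
        fs.create(new Path("/tmp/example"), FsPermission.getFileDefault(),
            EnumSet.of(CreateFlag.CREATE), 4096, (short) 3,
            134217728L, null, opt).close();
      }
    }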


[40/43] hadoop git commit: HDFS-7757. Misleading error messages in FSImage.java. (Contributed by Brahma Reddy Battula)

Posted by zj...@apache.org.
HDFS-7757. Misleading error messages in FSImage.java. (Contributed by Brahma Reddy Battula)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/1004473a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/1004473a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/1004473a

Branch: refs/heads/YARN-2928
Commit: 1004473aa612ee3703394943f25687aa5bef47ea
Parents: 4228de9
Author: Arpit Agarwal <ar...@apache.org>
Authored: Tue Mar 3 10:55:22 2015 -0800
Committer: Arpit Agarwal <ar...@apache.org>
Committed: Tue Mar 3 10:55:22 2015 -0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt                    | 3 +++
 .../java/org/apache/hadoop/hdfs/server/namenode/FSImage.java   | 6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/1004473a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index fe78097..42430ef 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1074,6 +1074,9 @@ Release 2.7.0 - UNRELEASED
     HDFS-7871. NameNodeEditLogRoller can keep printing "Swallowing exception"
     message. (jing9)
 
+    HDFS-7757. Misleading error messages in FSImage.java. (Brahma Reddy Battula
+    via Arpit Agarwal)
+
     BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
       HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/1004473a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
index 44c41d0..e589eea 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
@@ -883,7 +883,7 @@ public class FSImage implements Closeable {
       final long namespace = counts.getNameSpace() - parentNamespace;
       final long nsQuota = q.getNameSpace();
       if (Quota.isViolated(nsQuota, namespace)) {
-        LOG.error("BUG: Namespace quota violation in image for "
+        LOG.warn("Namespace quota violation in image for "
             + dir.getFullPathName()
             + " quota = " + nsQuota + " < consumed = " + namespace);
       }
@@ -891,7 +891,7 @@ public class FSImage implements Closeable {
       final long ssConsumed = counts.getStorageSpace() - parentStoragespace;
       final long ssQuota = q.getStorageSpace();
       if (Quota.isViolated(ssQuota, ssConsumed)) {
-        LOG.error("BUG: Storagespace quota violation in image for "
+        LOG.warn("Storagespace quota violation in image for "
             + dir.getFullPathName()
             + " quota = " + ssQuota + " < consumed = " + ssConsumed);
       }
@@ -903,7 +903,7 @@ public class FSImage implements Closeable {
             parentTypeSpaces.get(t);
         final long typeQuota = q.getTypeSpaces().get(t);
         if (Quota.isViolated(typeQuota, typeSpace)) {
-          LOG.error("BUG: Storage type quota violation in image for "
+          LOG.warn("Storage type quota violation in image for "
               + dir.getFullPathName()
               + " type = " + t.toString() + " quota = "
               + typeQuota + " < consumed " + typeSpace);
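
The warnings above fire when Quota.isViolated() reports consumption above a set quota; conceptually the predicate is just the following (a sketch of the convention, not a copy of the HDFS class, assuming the usual rule that a negative quota means "unset"):

    // Sketch: a quota below zero is treated as unset and never violated;
    // otherwise consumption strictly above the quota is a violation.
    static boolean isViolated(long quota, long consumed) {
      return quota >= 0 && consumed > quota;
    }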


[09/43] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRestart.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRestart.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRestart.apt.vm
deleted file mode 100644
index a08c19d..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerRestart.apt.vm
+++ /dev/null
@@ -1,298 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  ResourceManager Restart
-  ---
-  ---
-  ${maven.build.timestamp}
-
-ResourceManager Restart
-
-%{toc|section=1|fromDepth=0}
-
-* {Overview}
-
-  ResourceManager is the central authority that manages resources and schedules
-  applications running atop of YARN. Hence, it is potentially a single point of
-  failure in an Apache YARN cluster.
-
-  This document gives an overview of ResourceManager Restart, a feature that
-  enhances ResourceManager to keep functioning across restarts and also makes
-  ResourceManager down-time invisible to end-users.
-
-  ResourceManager Restart feature is divided into two phases:
-
-  ResourceManager Restart Phase 1 (Non-work-preserving RM restart):
-  Enhance RM to persist application/attempt state
-  and other credentials information in a pluggable state-store. RM will reload
-  this information from state-store upon restart and re-kick the previously
-  running applications. Users are not required to re-submit the applications.
-
-  ResourceManager Restart Phase 2 (Work-preserving RM restart):
-  Focus on re-constructing the running state of ResourceManager by combining
-  the container statuses from NodeManagers and container requests from ApplicationMasters
-  upon restart. The key difference from phase 1 is that previously running applications
-  will not be killed after RM restarts, and so applications won't lose their work
-  because of the RM outage.
-
-* {Feature}
-
-** Phase 1: Non-work-preserving RM restart
-
-  As of Hadoop 2.4.0 release, only ResourceManager Restart Phase 1 is implemented which
-  is described below.
-
-  The overall concept is that RM will persist the application metadata
-  (i.e. ApplicationSubmissionContext) in
-  a pluggable state-store when a client submits an application and also saves the final status
-  of the application such as the completion state (failed, killed, finished) 
-  and diagnostics when the application completes. Besides, RM also saves
-  the credentials like security keys, tokens to work in a secure environment.
-  Any time RM shuts down, as long as the required information (i.e. application metadata
-  and the accompanying credentials if running in a secure environment) is available
-  in the state-store, when RM restarts, it can pick up the application metadata
-  from the state-store and re-submit the application. RM won't re-submit the
-  applications if they were already completed (i.e. failed, killed, finished)
-  before RM went down.
-
-  NodeManagers and clients during the down-time of RM will keep polling RM until 
-  RM comes up. When RM becomes alive, it will send a re-sync command to
-  all the NodeManagers and ApplicationMasters it was talking to via heartbeats.
-  As of Hadoop 2.4.0 release, the behaviors for NodeManagers and ApplicationMasters to handle this command
-  are: NMs will kill all their managed containers and re-register with RM. From the
-  RM's perspective, these re-registered NodeManagers are similar to the newly joining NMs. 
-  AMs (e.g. MapReduce AM) are expected to shut down when they receive the re-sync command.
-  After RM restarts and loads all the application metadata, credentials from state-store
-  and populates them into memory, it will create a new
-  attempt (i.e. ApplicationMaster) for each application that was not yet completed
-  and re-kick that application as usual. As described before, the previously running
-  applications' work is lost in this manner since they are essentially killed by
-  RM via the re-sync command on restart.
-
-** Phase 2: Work-preserving RM restart
-
-  As of Hadoop 2.6.0, we further enhanced RM restart feature to address the problem 
-  to not kill any applications running on YARN cluster if RM restarts.
-
-  Beyond all the groundwork that has been done in Phase 1 to ensure the persistency
-  of application state and reload that state on recovery, Phase 2 primarily focuses
-  on re-constructing the entire running state of YARN cluster, the majority of which is
-  the state of the central scheduler inside RM which keeps track of all containers' life-cycle,
-  applications' headroom and resource requests, queues' resource usage etc. In this way,
-  RM doesn't need to kill the AM and re-run the application from scratch as it is
-  done in Phase 1. Applications can simply re-sync back with RM and
-  resume from where they left off.
-
-  RM recovers its running state by taking advantage of the container statuses sent from all NMs.
-  NM will not kill the containers when it re-syncs with the restarted RM. It continues
-  managing the containers and sends the container statuses across to RM when it re-registers.
-  RM reconstructs the container instances and the associated applications' scheduling status by
-  absorbing these containers' information. In the meantime, AM needs to re-send the
-  outstanding resource requests to RM because RM may lose the unfulfilled requests when it shuts down.
-  Application writers using AMRMClient library to communicate with RM do not need to
-  worry about the part of AM re-sending resource requests to RM on re-sync, as it is
-  automatically taken care of by the library itself.
-
-* {Configurations}
-
-** Enable RM Restart.
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Value                                |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.recovery.enabled>>> | |
-| | <<<true>>> |
-*--------------------------------------+--------------------------------------+ 
-
-
-** Configure the state-store for persisting the RM state.
-
-
-*--------------------------------------*--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.store.class>>> | |
-| | The class name of the state-store to be used for saving application/attempt |
-| | state and the credentials. The available state-store implementations are  |
-| | <<<org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore>>> |
-| | , a ZooKeeper based state-store implementation and  |
-| | <<<org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore>>> |
-| | , a Hadoop FileSystem based state-store implementation like HDFS and local FS. |
-| | <<<org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore>>>, |
-| | a LevelDB based state-store implementation. |
-| | The default value is set to |
-| | <<<org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore>>>. |
-*--------------------------------------+--------------------------------------+ 
-
-** How to choose the state-store implementation.
-
-    <<ZooKeeper based state-store>>: Users are free to pick any storage to set up RM restart,
-    but must use ZooKeeper based state-store to support RM HA. The reason is that only ZooKeeper
-    based state-store supports fencing mechanism to avoid a split-brain situation where multiple
-    RMs assume they are active and can edit the state-store at the same time.
-
-    <<FileSystem based state-store>>: HDFS and local FS based state-store are supported. 
-    Fencing mechanism is not supported.
-
-    <<LevelDB based state-store>>: LevelDB based state-store is considered more lightweight than HDFS and ZooKeeper
-    based state-store. LevelDB supports better atomic operations, fewer I/O ops per state update,
-    and far fewer total files on the filesystem. Fencing mechanism is not supported.
-
-** Configurations for Hadoop FileSystem based state-store implementation.
-
-    Support both HDFS and local FS based state-store implementation. The type of file system to
-    be used is determined by the scheme of the URI, e.g. <<<hdfs://localhost:9000/rmstore>>> uses HDFS as the storage and
-    <<<file:///tmp/yarn/rmstore>>> uses local FS as the storage. If no
-    scheme(<<<hdfs://>>> or <<<file://>>>) is specified in the URI, the type of storage to be used is
-    determined by <<<fs.defaultFS>>> defined in <<<core-site.xml>>>.
-
-    Configure the URI where the RM state will be saved in the Hadoop FileSystem state-store.
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.fs.state-store.uri>>> | |
-| | URI pointing to the location of the FileSystem path where RM state will |
-| | be stored (e.g. hdfs://localhost:9000/rmstore). |
-| | Default value is <<<${hadoop.tmp.dir}/yarn/system/rmstore>>>. |
-| | If FileSystem name is not provided, <<<fs.default.name>>> specified in |
-| | <<conf/core-site.xml>> will be used. |
-*--------------------------------------+--------------------------------------+ 
-
-    Configure the retry policy state-store client uses to connect with the Hadoop
-    FileSystem.
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.fs.state-store.retry-policy-spec>>> | |
-| | Hadoop FileSystem client retry policy specification. Hadoop FileSystem client retry | 
-| | is always enabled. Specified in pairs of sleep-time and number-of-retries | 
-| | i.e. (t0, n0), (t1, n1), ..., the first n0 retries sleep t0 milliseconds on |
-| | average, the following n1 retries sleep t1 milliseconds on average, and so on. |
-| | Default value is (2000, 500) |
-*--------------------------------------+--------------------------------------+ 
-
-** Configurations for ZooKeeper based state-store implementation.
-  
-    Configure the ZooKeeper server address and the root path where the RM state is stored.
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.zk-address>>> | |
-| | Comma separated list of Host:Port pairs. Each corresponds to a ZooKeeper server |
-| | (e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002") to be used by the RM |
-| | for storing RM state. |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.zk-state-store.parent-path>>> | |
-| | The full path of the root znode where RM state will be stored. |
-| | Default value is /rmstore. |
-*--------------------------------------+--------------------------------------+
-
-    Configure the retry policy state-store client uses to connect with the ZooKeeper server.
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.zk-num-retries>>> | |
-| | Number of times RM tries to connect to ZooKeeper server if the connection is lost. |
-| | Default value is 500. |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.zk-retry-interval-ms>>> |
-| | The interval in milliseconds between retries when connecting to a ZooKeeper server. |
-| | Default value is 2 seconds. |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.zk-timeout-ms>>> | |
-| | ZooKeeper session timeout in milliseconds. This configuration is used by  |
-| | the ZooKeeper server to determine when the session expires. Session expiration |
-| | happens when the server does not hear from the client (i.e. no heartbeat) within the session |
-| | timeout period specified by this configuration. Default |
-| | value is 10 seconds |
-*--------------------------------------+--------------------------------------+
-
-    Configure the ACLs to be used for setting permissions on ZooKeeper znodes.
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.zk-acl>>> | |
-| | ACLs to be used for setting permissions on ZooKeeper znodes. Default value is <<<world:anyone:rwcda>>> |
-*--------------------------------------+--------------------------------------+
-
-** Configurations for LevelDB based state-store implementation.
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.leveldb-state-store.path>>> | |
-| | Local path where the RM state will be stored. |
-| | Default value is <<<${hadoop.tmp.dir}/yarn/system/rmstore>>> |
-*--------------------------------------+--------------------------------------+
-
-
-**  Configurations for work-preserving RM recovery.
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms>>> | |
-| | Set the amount of time RM waits before allocating new |
-| | containers on RM work-preserving recovery. Such a wait period gives RM a chance | 
-| | to settle down resyncing with NMs in the cluster on recovery, before assigning|
-| |  new containers to applications.|
-*--------------------------------------+--------------------------------------+
-
-* {Notes}
-
-  ContainerId string format is changed if RM restarts with work-preserving recovery enabled.
-  It used to be such format:
-
-   Container_\{clusterTimestamp\}_\{appId\}_\{attemptId\}_\{containerId\}, e.g. Container_1410901177871_0001_01_000005.
-
-  It is now changed to:
-
-   Container_<<e\{epoch\}>>_\{clusterTimestamp\}_\{appId\}_\{attemptId\}_\{containerId\}, e.g. Container_<<e17>>_1410901177871_0001_01_000005.
- 
-  Here, the additional epoch number is a
-  monotonically increasing integer which starts from 0 and is increased by 1 each time
-  RM restarts. If epoch number is 0, it is omitted and the containerId string format
-  stays the same as before.
-
-* {Sample configurations}
-
-   Below is a minimum set of configurations for enabling RM work-preserving restart using ZooKeeper based state store.
-
-+---+
-  <property>
-    <description>Enable RM to recover state after starting. If true, then 
-    yarn.resourcemanager.store.class must be specified</description>
-    <name>yarn.resourcemanager.recovery.enabled</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <description>The class to use as the persistent store.</description>
-    <name>yarn.resourcemanager.store.class</name>
-    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
-  </property>
-
-  <property>
-    <description>Comma separated list of Host:Port pairs. Each corresponds to a ZooKeeper server
-    (e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002") to be used by the RM for storing RM state.
-    This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
-    as the value for yarn.resourcemanager.store.class</description>
-    <name>yarn.resourcemanager.zk-address</name>
-    <value>127.0.0.1:2181</value>
-  </property>
-+---+
\ No newline at end of file
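
As a companion to the ContainerId note in the page above, a small formatter reproducing the two ID styles it describes (the zero-padding widths are inferred from the example strings, not taken from the YARN API):

    // Hypothetical helper: epoch 0 omits the "e{epoch}" segment, matching
    // Container_1410901177871_0001_01_000005 vs.
    // Container_e17_1410901177871_0001_01_000005 from the page above.
    static String containerIdString(long epoch, long clusterTimestamp,
        int appId, int attemptId, long containerId) {
      String epochPart = (epoch == 0) ? "" : "e" + epoch + "_";
      return String.format("Container_%s%d_%04d_%02d_%06d",
          epochPart, clusterTimestamp, appId, attemptId, containerId);
    }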

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/SecureContainer.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/SecureContainer.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/SecureContainer.apt.vm
deleted file mode 100644
index 0365bf7..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/SecureContainer.apt.vm
+++ /dev/null
@@ -1,176 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  YARN Secure Containers
-  ---
-  ---
-  ${maven.build.timestamp}
-
-YARN Secure Containers
-
-%{toc|section=1|fromDepth=0|toDepth=3}
-
-* {Overview}
-
-  YARN containers in a secure cluster use the operating system facilities to offer
-  execution isolation for containers. Secure containers execute under the credentials
-  of the job user. The operating system enforces access restriction for the container.
-  The container must run as the user that submitted the application.
-  
-  Secure Containers work only in the context of secured YARN clusters.
-  
-  ** Container isolation requirements
-  
-    The container executor must access the local files and directories needed by the 
-    container such as jars, configuration files, log files, shared objects etc. Although
-    it is launched by the NodeManager, the container should not have access to the 
-    NodeManager private files and configuration. Containers running applications 
-    submitted by different users should be isolated and unable to access each other's
-    files and directories. Similar requirements apply to other system non-file securable 
-    objects like named pipes, critical sections, LPC queues, shared memory etc.
-    
-    
-  ** Linux Secure Container Executor
-
-    On Linux environment the secure container executor is the <<<LinuxContainerExecutor>>>.
-    It uses an external program called the <<<container-executor>>> to launch the container.
-    This program has the <<<setuid>>> access right flag set which allows it to launch 
-    the container with the permissions of the YARN application user.
-    
-  *** Configuration
-
-      The configured directories for <<<yarn.nodemanager.local-dirs>>> and 
-      <<<yarn.nodemanager.log-dirs>>> must be owned by the configured NodeManager user
-      (<<<yarn>>>) and group (<<<hadoop>>>). The permission set on these directories must
-      be <<<drwxr-xr-x>>>.
-      
-      The <<<container-executor>>> program must be owned by <<<root>>> and have the
-      permission set <<<---sr-s--->>>.
-
-      To configure the <<<NodeManager>>> to use the <<<LinuxContainerExecutor>>> set the following 
-      in the <<conf/yarn-site.xml>>:
-
-+---+
-<property>
-  <name>yarn.nodemanager.container-executor.class</name>
-  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
-</property>
-
-<property>
-  <name>yarn.nodemanager.linux-container-executor.group</name>
-  <value>hadoop</value>
-</property>
-+---+
-
-      Additionally the LCE requires the <<<container-executor.cfg>>> file, which is read by the
-      <<<container-executor>>> program. 
-
-+---+
-yarn.nodemanager.linux-container-executor.group=#configured value of yarn.nodemanager.linux-container-executor.group
-banned.users=#comma separated list of users who can not run applications
-allowed.system.users=#comma separated list of allowed system users
-min.user.id=1000#Prevent other super-users
-+---+
-
-   
-  ** Windows Secure Container Executor (WSCE)
-  
-    The Windows environment secure container executor is the <<<WindowsSecureContainerExecutor>>>.
-    It uses the Windows S4U infrastructure to launch the container as the 
-    YARN application user. The WSCE requires the presence of the <<<hadoopwinutilsvc>>> service. This service
-    is hosted by <<<%HADOOP_HOME%\bin\winutils.exe>>> started with the <<<service>>> command line argument. This
-    service offers some privileged operations that require LocalSystem authority so that the NM is not required
-    to run the entire JVM and all the NM code in an elevated context. The NM interacts with the <<<hadoopwinutilsvc>>>
-    service by means of Local RPC (LRPC) via JNI calls to the RPC client hosted in <<<hadoop.dll>>>.
-    
-  *** Configuration
-  
-      To configure the <<<NodeManager>>> to use the <<<WindowsSecureContainerExecutor>>> 
-      set the following in the <<conf/yarn-site.xml>>:
-
-+---+
-<property>
-  <name>yarn.nodemanager.container-executor.class</name>
-  <value>org.apache.hadoop.yarn.server.nodemanager.WindowsSecureContainerExecutor</value>
-</property>
-
-<property>
-  <name>yarn.nodemanager.windows-secure-container-executor.group</name>
-  <value>yarn</value>
-</property>
-+---+
-  *** wsce-site.xml
-  
-      The hadoopwinutilsvc uses <<<%HADOOP_HOME%\etc\hadoop\wsce_site.xml>>> to configure access to the privileged operations.
-
-+---+
-  <property>
-    <name>yarn.nodemanager.windows-secure-container-executor.impersonate.allowed</name>
-    <value>HadoopUsers</value>
-  </property>
-  
-  <property>
-    <name>yarn.nodemanager.windows-secure-container-executor.impersonate.denied</name>
-    <value>HadoopServices,Administrators</value>
-  </property>
-  
-  <property>
-    <name>yarn.nodemanager.windows-secure-container-executor.allowed</name>
-    <value>nodemanager</value>
-  </property>
-
-  <property>
-    <name>yarn.nodemanager.windows-secure-container-executor.local-dirs</name>
-    <value>nm-local-dir, nm-log-dirs</value>
-  </property>
-
-  <property>
-    <name>yarn.nodemanager.windows-secure-container-executor.job-name</name>
-    <value>nodemanager-job-name</value>
-  </property>  
-+---+
-      
-      <<<yarn.nodemanager.windows-secure-container-executor.allowed>>> should contain the name of the service account running the 
-      nodemanager. This user will be allowed to access the hadoopwinutilsvc functions.
-      
-      <<<yarn.nodemanager.windows-secure-container-executor.impersonate.allowed>>> should contain the users that are allowed to create
-      containers in the cluster. These users will be allowed to be impersonated by hadoopwinutilsvc.
-      
-      <<<yarn.nodemanager.windows-secure-container-executor.impersonate.denied>>> should contain users that are explicitly forbidden from
-      creating containers. hadoopwinutilsvc will refuse to impersonate these users.
-
-      <<<yarn.nodemanager.windows-secure-container-executor.local-dirs>>> should contain the nodemanager local dirs. hadoopwinutilsvc will
-      allow only file operations under these directories. This should contain the same values as <<<${yarn.nodemanager.local-dirs}, ${yarn.nodemanager.log-dirs}>>> 
-      but note that hadoopwinutilsvc XML configuration processing does not do substitutions so the value must be the final value. All paths 
-      must be absolute and no environment variable substitution will be performed. The paths are compared using a LOCAL_INVARIANT case-insensitive string comparison,
-      the file path validated must start with one of the paths listed in local-dirs configuration. Use comma as path separator:<<<,>>>
-
-      <<<yarn.nodemanager.windows-secure-container-executor.job-name>>> should contain a Windows NT job name that all containers should be added to. 
-      This configuration is optional. If not set, the container is not added to a global NodeManager job. Normally this should be set to the job that the NM is assigned to, 
-      so that killing the NM also kills all containers. Hadoopwinutilsvc will not attempt to create this job; the job must exist when the container is launched.
-      If the value is set and the job does not exist, container launch will fail with error 2 <<<The system cannot find the file specified>>>.
-      Note that this global NM job is not related to the container job, which always gets created for each container and is named after the container ID.
-      This setting controls a global job that spans all containers and the parent NM, and as such it requires nested jobs. 
-      Nested jobs are available only post Windows 8 and Windows Server 2012.
-      
-  *** Useful Links
-    
-    * {{{http://msdn.microsoft.com/en-us/magazine/cc188757.aspx}Exploring S4U Kerberos Extensions in Windows Server 2003}}
-    
-    * {{{http://msdn.microsoft.com/en-us/library/windows/desktop/hh448388(v=vs.85).aspx}Nested Jobs}}
-
-    * {{{https://issues.apache.org/jira/browse/YARN-1063}Winutils needs ability to create task as domain user}}
-    
-    * {{{https://issues.apache.org/jira/browse/YARN-1972}Implement secure Windows Container Executor}}
-
-    * {{{https://issues.apache.org/jira/browse/YARN-2198}Remove the need to run NodeManager as privileged account for Windows Secure Container Executor}}
\ No newline at end of file
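
The executor choice in the page above is plain yarn-site.xml configuration; a small sketch of reading it back from code (the property names are from the page; the sanity check itself is illustrative, not part of Hadoop):

    import org.apache.hadoop.conf.Configuration;

    public class ExecutorConfigCheck {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        String executor =
            conf.get("yarn.nodemanager.container-executor.class");
        String group =
            conf.get("yarn.nodemanager.linux-container-executor.group");
        // Warn if the secure Linux executor is not configured.
        if (executor == null || !executor.endsWith("LinuxContainerExecutor")) {
          System.err.println("LinuxContainerExecutor not configured; group="
              + group);
        }
      }
    }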

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/TimelineServer.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/TimelineServer.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/TimelineServer.apt.vm
deleted file mode 100644
index 7bb504e..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/TimelineServer.apt.vm
+++ /dev/null
@@ -1,260 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  YARN Timeline Server
-  ---
-  ---
-  ${maven.build.timestamp}
-
-YARN Timeline Server
-
-%{toc|section=1|fromDepth=0|toDepth=3}
-
-* Overview
-
-  Storage and retrieval of applications' current as well as historic
-  information in a generic fashion is solved in YARN through the Timeline
-  Server (previously also called Generic Application History Server). This
-  serves two responsibilities:
-
-  ** Generic information about completed applications
-  
-    Generic information includes application level data like queue-name, user
-    information etc in the ApplicationSubmissionContext, list of
-    application-attempts that ran for an application, information about each
-    application-attempt, list of containers run under each application-attempt,
-    and information about each container. Generic data is stored by
-    ResourceManager to a history-store (default implementation on a file-system)
-    and used by the web-UI to display information about completed applications.
-
-  ** Per-framework information of running and completed applications
-
-    Per-framework information is completely specific to an application or
-    framework. For example, Hadoop MapReduce framework can include pieces of
-    information like number of map tasks, reduce tasks, counters etc.
-    Application developers can publish the specific information to the Timeline
-    server via TimelineClient from within a client, the ApplicationMaster
-    and/or the application's containers. This information is then queryable via
-    REST APIs for rendering by application/framework specific UIs. 
-
-* Current Status
-
-  Timeline server is a work in progress. The basic storage and retrieval of
-  information, both generic and framework specific, are in place. Timeline
-  server doesn't work in secure mode yet. The generic information and the
-  per-framework information are today collected and presented separately and
-  thus are not integrated well together. Finally, the per-framework information
-  is only available via RESTful APIs, using JSON type content - ability to
-  install framework specific UIs in YARN isn't supported yet.
-
-* Basic Configuration
-
-  Users need to configure the Timeline server before starting it. The simplest
-  configuration you should add in <<<yarn-site.xml>>> is to set the hostname of
-  the Timeline server:
-
-+---+
-<property>
-  <description>The hostname of the Timeline service web application.</description>
-  <name>yarn.timeline-service.hostname</name>
-  <value>0.0.0.0</value>
-</property>
-+---+
-
-* Advanced Configuration
-
-  In addition to the hostname, admins can also configure whether the service is
-  enabled or not, the ports of the RPC and the web interfaces, and the number
-  of RPC handler threads.
-
-+---+
-
-<property>
-  <description>Address for the Timeline server to start the RPC server.</description>
-  <name>yarn.timeline-service.address</name>
-  <value>${yarn.timeline-service.hostname}:10200</value>
-</property>
-
-<property>
-  <description>The http address of the Timeline service web application.</description>
-  <name>yarn.timeline-service.webapp.address</name>
-  <value>${yarn.timeline-service.hostname}:8188</value>
-</property>
-
-<property>
-  <description>The https address of the Timeline service web application.</description>
-  <name>yarn.timeline-service.webapp.https.address</name>
-  <value>${yarn.timeline-service.hostname}:8190</value>
-</property>
-
-<property>
-  <description>Handler thread count to serve the client RPC requests.</description>
-  <name>yarn.timeline-service.handler-thread-count</name>
-  <value>10</value>
-</property>
-
-<property>
-  <description>Enables cross-origin support (CORS) for web services where
-  cross-origin web response headers are needed. For example, javascript making
-  a web services request to the timeline server.</description>
-  <name>yarn.timeline-service.http-cross-origin.enabled</name>
-  <value>false</value>
-</property>
-
-<property>
-  <description>Comma separated list of origins that are allowed for web
-  services needing cross-origin (CORS) support. Wildcards (*) and patterns
-  allowed</description>
-  <name>yarn.timeline-service.http-cross-origin.allowed-origins</name>
-  <value>*</value>
-</property>
-
-<property>
-  <description>Comma separated list of methods that are allowed for web
-  services needing cross-origin (CORS) support.</description>
-  <name>yarn.timeline-service.http-cross-origin.allowed-methods</name>
-  <value>GET,POST,HEAD</value>
-</property>
-
-<property>
-  <description>Comma separated list of headers that are allowed for web
-  services needing cross-origin (CORS) support.</description>
-  <name>yarn.timeline-service.http-cross-origin.allowed-headers</name>
-  <value>X-Requested-With,Content-Type,Accept,Origin</value>
-</property>
-
-<property>
-  <description>The number of seconds a pre-flighted request can be cached
-  for web services needing cross-origin (CORS) support.</description>
-  <name>yarn.timeline-service.http-cross-origin.max-age</name>
-  <value>1800</value>
-</property>
-+---+
-
-* Generic-data related Configuration
-
-  Users can specify whether the generic data collection is enabled or not, and
-  also choose the storage-implementation class for the generic data. There are
-  more configurations related to generic data collection, and users can refer
-  to <<<yarn-default.xml>>> for all of them.
-
-+---+
-<property>
-  <description>Indicate to ResourceManager as well as clients whether
-  history-service is enabled or not. If enabled, ResourceManager starts
-  recording historical data that the Timeline service can consume. Similarly,
-  clients can redirect to the history service when applications
-  finish if this is enabled.</description>
-  <name>yarn.timeline-service.generic-application-history.enabled</name>
-  <value>false</value>
-</property>
-
-<property>
-  <description>Store class name for history store, defaulting to file system
-  store</description>
-  <name>yarn.timeline-service.generic-application-history.store-class</name>
-  <value>org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore</value>
-</property>
-+---+
-
-* Per-framework-data related Configuration
-
-  Users can specify whether per-framework data service is enabled or not,
-  choose the store implementation for the per-framework data, and tune the
-  retention of the per-framework data. There are more configurations related to
-  per-framework data service, and users can refer to <<<yarn-default.xml>>> for
-  all of them.
-
-+---+
-<property>
-  <description>Indicate to clients whether Timeline service is enabled or not.
-  If enabled, the TimelineClient library used by end-users will post entities
-  and events to the Timeline server.</description>
-  <name>yarn.timeline-service.enabled</name>
-  <value>true</value>
-</property>
-
-<property>
-  <description>Store class name for timeline store.</description>
-  <name>yarn.timeline-service.store-class</name>
-  <value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
-</property>
-
-<property>
-  <description>Enable age off of timeline store data.</description>
-  <name>yarn.timeline-service.ttl-enable</name>
-  <value>true</value>
-</property>
-
-<property>
-  <description>Time to live for timeline store data in milliseconds.</description>
-  <name>yarn.timeline-service.ttl-ms</name>
-  <value>604800000</value>
-</property>
-+---+
-
-* Running Timeline server
-
-  Assuming all the aforementioned configurations are set properly, admins can
-  start the Timeline server/history service with the following command:
-
-+---+
-  $ yarn timelineserver
-+---+
-
-  Or users can start the Timeline server / history service as a daemon:
-
-+---+
-  $ yarn --daemon start timelineserver
-+---+
-
-* Accessing generic-data via command-line
-
-  Users can access applications' generic historic data via the command line as
-  below. Note that the same commands are usable to obtain the corresponding
-  information about running applications.
-
-+---+
-  $ yarn application -status <Application ID>
-  $ yarn applicationattempt -list <Application ID>
-  $ yarn applicationattempt -status <Application Attempt ID>
-  $ yarn container -list <Application Attempt ID>
-  $ yarn container -status <Container ID>
-+---+
-
-* Publishing of per-framework data by applications
-
-  Developers can define what information they want to record for their
-  applications by composing <<<TimelineEntity>>> and <<<TimelineEvent>>>
-  objects, and put the entities and events to the Timeline server via
-  <<<TimelineClient>>>. Below is an example:
-
-+---+
-  // Create and start the Timeline client
-  TimelineClient client = TimelineClient.createTimelineClient();
-  client.init(conf);
-  client.start();
-
-  TimelineEntity entity = null;
-  // Compose the entity
-  try {
-    TimelinePutResponse response = client.putEntities(entity);
-  } catch (IOException e) {
-    // Handle the exception
-  } catch (YarnException e) {
-    // Handle the exception
-  }
-
-  // Stop the Timeline client
-  client.stop();
-+---+
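
  The entity above is left null for brevity. A filled-in version might look
  like the following sketch; the entity type, ID, and event name are made-up
  values for illustration only:

+---+
  import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
  import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;

  public class EntityFactory {
    // Builds the entity that the example above leaves null.
    public static TimelineEntity newEntity() {
      TimelineEntity entity = new TimelineEntity();
      entity.setEntityType("MY_APP_METRIC");     // hypothetical type
      entity.setEntityId("run-0001");            // hypothetical id
      entity.setStartTime(System.currentTimeMillis());

      TimelineEvent event = new TimelineEvent();
      event.setEventType("PHASE_COMPLETED");     // hypothetical event
      event.setTimestamp(System.currentTimeMillis());
      entity.addEvent(event);
      return entity;
    }
  }
+---+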

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebApplicationProxy.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebApplicationProxy.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebApplicationProxy.apt.vm
deleted file mode 100644
index 4646235..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebApplicationProxy.apt.vm
+++ /dev/null
@@ -1,49 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  YARN
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Web Application Proxy
-
-  The Web Application Proxy is part of YARN.  By default it will run as part of
-  the Resource Manager (RM), but can be configured to run in standalone mode.
-  The reason for the proxy is to reduce the possibility of web based attacks
-  through YARN.
-
-  In YARN the Application Master (AM) has the responsibility to provide a web UI
-  and to send that link to the RM.  This opens up a number of potential
-  issues.  The RM runs as a trusted user, and people visiting that web
-  address will treat it, and the links it provides to them, as trusted, when in
-  reality the AM is running as a non-trusted user, and the links it gives to
-  the RM could point to anything malicious or otherwise.  The Web Application
-  Proxy mitigates this risk by warning users that do not own the given
-  application that they are connecting to an untrusted site.
-
-  In addition to this the proxy also tries to reduce the impact that a malicious
-  AM could have on a user.  It primarily does this by stripping out cookies from
-  the user, and replacing them with a single cookie providing the user name of
-  the logged-in user.  This is because most web-based authentication systems will
-  identify a user based on a cookie.  By providing this cookie to an
-  untrusted application it opens up the potential for an exploit.  If the cookie
-  is designed properly that potential should be fairly minimal, but this is just
-  to reduce that potential attack vector.  The current proxy implementation does
-  nothing to prevent the AM from providing links to malicious external sites,
-  nor does it do anything to prevent malicious javascript code from running as
-  well.  In fact javascript can be used to get the cookies, so stripping the
-  cookies from the request has minimal benefit at this time.
-
-  In the future we hope to address the attack vectors described above and make
-  attaching to an AM's web UI safer.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebServicesIntro.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebServicesIntro.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebServicesIntro.apt.vm
deleted file mode 100644
index 5300b94..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WebServicesIntro.apt.vm
+++ /dev/null
@@ -1,593 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop YARN - Introduction to the web services REST APIs.
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop YARN - Introduction to the web services REST APIs.
-
-%{toc|section=1|fromDepth=0}
-
-* Overview
-
-  The Hadoop YARN web service REST APIs are a set of URI resources that give access to the cluster, nodes, applications, and application historical information. The URI resources are grouped into APIs based on the type of information returned. Some URI resources return collections while others return singletons. 
-  
-* URI's
-
-  The URIs for the REST-based Web services have the following syntax:
-
-------
-  http://{http address of service}/ws/{version}/{resourcepath}
-------
-
-  The elements in this syntax are as follows:
-
-------
-  {http address of service} - The http address of the service to get information about. 
-                              Currently supported are the ResourceManager, NodeManager, 
-                              MapReduce application master, and history server.
-  {version} - The version of the APIs. In this release, the version is v1.
-  {resourcepath} - A path that defines a singleton resource or a collection of resources. 
-------
-
-* HTTP Requests
-
-  To invoke a REST API, your application calls an HTTP operation on the URI associated with a resource. 
-
-** Summary of HTTP operations
- 
-  Currently only GET is supported. It retrieves information about the resource specified.
-
-** Security
-
-  The web service REST APIs go through the same security as the web UI.  If your cluster administrators have filters enabled, you must authenticate via the mechanism they specified.
-
-** Headers Supported
-
------
-  * Accept 
-  * Accept-Encoding
------
-
-  Currently the only header fields used are Accept and Accept-Encoding.  Accept currently supports XML and JSON as response types. Accept-Encoding currently only supports gzip format and will return gzip-compressed output if this is specified; otherwise output is uncompressed. All other header fields are ignored.
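
  For example, a plain-Java client can set both headers and transparently
  handle a gzip-encoded response. This is a sketch only; the host, port, and
  resource path are placeholders:

+---+
  import java.io.BufferedReader;
  import java.io.InputStream;
  import java.io.InputStreamReader;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.util.zip.GZIPInputStream;

  public class RestGet {
    public static void main(String[] args) throws Exception {
      URL url = new URL("http://rmhost.domain:8088/ws/v1/cluster/info");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setRequestProperty("Accept", "application/json");
      conn.setRequestProperty("Accept-Encoding", "gzip");

      InputStream in = conn.getInputStream();
      if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
        in = new GZIPInputStream(in);  // server honored Accept-Encoding
      }
      BufferedReader reader =
          new BufferedReader(new InputStreamReader(in, "UTF-8"));
      for (String line; (line = reader.readLine()) != null;) {
        System.out.println(line);
      }
      reader.close();
    }
  }
+---+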
-
-* HTTP Responses
-
-  The next few sections describe some of the syntax and other details of the HTTP Responses of the web service REST APIs.
-
-** Compression 
-
-  This release supports gzip compression if you specify gzip in the Accept-Encoding header of the HTTP request (Accept-Encoding: gzip).
-
-** Response Formats
-
-  This release of the web service REST APIs supports responses in JSON and XML formats. JSON is the default. To set the response format, you can specify the format in the Accept header of the HTTP request. 
-
-  As specified in HTTP Response Codes, the response body can contain the data that represents the resource or an error message. In the case of success, the response body is in the selected format, either JSON or XML. In the case of error, the response body is in either JSON or XML, based on the format requested. The Content-Type header of the response contains the format requested. If the application requests an unsupported format, the response status code is 500.
-  Note that the order of the fields within the response body is not specified and might change. Also, additional fields might be added to a response body. Therefore, your applications should use parsing routines that can extract data from a response body in any order.
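
  A tree-based JSON parser satisfies this requirement, since fields are
  looked up by name rather than by position. The sketch below assumes
  Jackson is on the classpath and parses a hard-coded sample body:

+---+
  import com.fasterxml.jackson.databind.JsonNode;
  import com.fasterxml.jackson.databind.ObjectMapper;

  public class ParseApp {
    public static void main(String[] args) throws Exception {
      String body = "{\"app\":{\"state\":\"RUNNING\",\"progress\":82.4}}";
      JsonNode root = new ObjectMapper().readTree(body);
      // Name-based lookups work no matter how the fields are ordered.
      System.out.println("state:    " + root.path("app").path("state").asText());
      System.out.println("progress: " + root.path("app").path("progress").asDouble());
    }
  }
+---+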
-
-** Response Errors
-
-  After making an HTTP request, an application should check the response status code to verify success or detect an error. If the response status code indicates an error, the response body contains an error message. The first field is the exception type; currently only RemoteException is returned. The following table lists the items within the RemoteException error message:
-
-*---------------*--------------*-------------------------------*
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| exception     | String       | Exception type                |
-*---------------+--------------+-------------------------------+
-| javaClassName | String       | Java class name of exception  |
-*---------------+--------------+-------------------------------+
-| message       | String       | Detailed message of exception |
-*---------------+--------------+-------------------------------+
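
  In practice this means branching on the status code before parsing the
  body, and reading the error stream when the call failed. A short sketch,
  with the URL a placeholder for an application that does not exist:

+---+
  import java.io.BufferedReader;
  import java.io.InputStream;
  import java.io.InputStreamReader;
  import java.net.HttpURLConnection;
  import java.net.URL;

  public class CheckedGet {
    public static void main(String[] args) throws Exception {
      URL url = new URL("http://rmhost.domain:8088/ws/v1/cluster/apps/"
          + "application_1324057493980_9999");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setRequestProperty("Accept", "application/json");

      int status = conn.getResponseCode();
      // On failure the body carries the RemoteException fields listed
      // in the table above: exception, javaClassName, and message.
      InputStream in =
          status >= 400 ? conn.getErrorStream() : conn.getInputStream();
      BufferedReader reader =
          new BufferedReader(new InputStreamReader(in, "UTF-8"));
      for (String line; (line = reader.readLine()) != null;) {
        System.out.println(line);
      }
      reader.close();
    }
  }
+---+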
-
-** Response Examples
-
-*** JSON response with single resource
-
-  HTTP Request:
-  GET http://rmhost.domain:8088/ws/v1/cluster/app/application_1324057493980_0001
-
-  Response Status Line:
-  HTTP/1.1 200 OK
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-  app":
-  {
-    "id":"application_1324057493980_0001",
-    "user":"user1",
-    "name":"",
-    "queue":"default",
-    "state":"ACCEPTED",
-    "finalStatus":"UNDEFINED",
-    "progress":0,
-    "trackingUI":"UNASSIGNED",
-    "diagnostics":"",
-    "clusterId":1324057493980,
-    "startedTime":1324057495921,
-    "finishedTime":0,
-    "elapsedTime":2063,
-    "amContainerLogs":"http:\/\/amNM:2\/node\/containerlogs\/container_1324057493980_0001_01_000001",
-    "amHostHttpAddress":"amNM:2"
-  }
-}
-+---+
-
-*** JSON response with Error response
-
-  Here we request information about an application that doesn't exist yet.
-
-  HTTP Request:
-  GET http://rmhost.domain:8088/ws/v1/cluster/app/application_1324057493980_9999
-
-  Response Status Line:
-  HTTP/1.1 404 Not Found
-
-  Response Header:
-
-+---+
-  HTTP/1.1 404 Not Found
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-   "RemoteException" : {
-      "javaClassName" : "org.apache.hadoop.yarn.webapp.NotFoundException",
-      "exception" : "NotFoundException",
-      "message" : "java.lang.Exception: app with id: application_1324057493980_9999 not found"
-   }
-}
-+---+
-
-* Example usage
-
-  You can use any number of ways/languages to use the web services REST APIs. This example uses the curl command-line interface to do the REST GET calls.
-
-  In this example, a user submits a MapReduce application to the ResourceManager using a command like: 
-  
-+---+
-  hadoop jar hadoop-mapreduce-test.jar sleep -Dmapred.job.queue.name=a1 -m 1 -r 1 -rt 1200000 -mt 20
-+---+
-
-  The client prints information about the job submitted along with the application id, similar to:
-
-+---+
-12/01/18 04:25:15 INFO mapred.ResourceMgrDelegate: Submitted application application_1326821518301_0010 to ResourceManager at host.domain.com/10.10.10.10:8032
-12/01/18 04:25:15 INFO mapreduce.Job: Running job: job_1326821518301_0010
-12/01/18 04:25:21 INFO mapred.ClientServiceDelegate: The url to track the job: host.domain.com:8088/proxy/application_1326821518301_0010/
-12/01/18 04:25:22 INFO mapreduce.Job: Job job_1326821518301_0010 running in uber mode : false
-12/01/18 04:25:22 INFO mapreduce.Job:  map 0% reduce 0%
-+---+
-
-  The user then wishes to track the application. The user starts by getting the information about the application from the ResourceManager. Use the --compressed option to request compressed output; curl handles uncompressing on the client side.
-
-+---+
-curl --compressed -H "Accept: application/json" -X GET "http://host.domain.com:8088/ws/v1/cluster/apps/application_1326821518301_0010" 
-+---+
-
-  Output:
-
-+---+
-{
-   "app" : {
-      "finishedTime" : 0,
-      "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326821518301_0010_01_000001",
-      "trackingUI" : "ApplicationMaster",
-      "state" : "RUNNING",
-      "user" : "user1",
-      "id" : "application_1326821518301_0010",
-      "clusterId" : 1326821518301,
-      "finalStatus" : "UNDEFINED",
-      "amHostHttpAddress" : "host.domain.com:8042",
-      "progress" : 82.44703,
-      "name" : "Sleep job",
-      "startedTime" : 1326860715335,
-      "elapsedTime" : 31814,
-      "diagnostics" : "",
-      "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326821518301_0010/",
-      "queue" : "a1"
-   }
-}
-+---+
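
  Since the application is still running, a client would typically poll this
  resource until the state leaves RUNNING/ACCEPTED. A sketch of such a loop,
  again assuming Jackson for parsing; the application ID comes from the
  command line:

+---+
  import java.io.InputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;

  import com.fasterxml.jackson.databind.JsonNode;
  import com.fasterxml.jackson.databind.ObjectMapper;

  public class WaitForApp {
    public static void main(String[] args) throws Exception {
      URL url = new URL("http://host.domain.com:8088/ws/v1/cluster/apps/"
          + args[0]);
      ObjectMapper mapper = new ObjectMapper();
      while (true) {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        InputStream in = conn.getInputStream();
        JsonNode app = mapper.readTree(in).path("app");
        in.close();
        String state = app.path("state").asText();
        System.out.println(state + " " + app.path("progress").asDouble() + "%");
        if (!"RUNNING".equals(state) && !"ACCEPTED".equals(state)) {
          break;  // FINISHED, FAILED, or KILLED
        }
        Thread.sleep(5000);  // poll every five seconds
      }
    }
  }
+---+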
-
-  The user then wishes to get more details about the running application and goes directly to the MapReduce application master for this application. The ResourceManager lists the trackingUrl that can be used for this application: http://host.domain.com:8088/proxy/application_1326821518301_0010. This URL can either be opened in a web browser or used via the web services REST APIs. The user uses the web services REST APIs to get the list of jobs this MapReduce application master is running:
-
-+---+
- curl --compressed -H "Accept: application/json" -X GET "http://host.domain.com:8088/proxy/application_1326821518301_0010/ws/v1/mapreduce/jobs"
-+---+
-
-  Output:
-
-+---+
-{
-   "jobs" : {
-      "job" : [
-         {
-            "runningReduceAttempts" : 1,
-            "reduceProgress" : 72.104515,
-            "failedReduceAttempts" : 0,
-            "newMapAttempts" : 0,
-            "mapsRunning" : 0,
-            "state" : "RUNNING",
-            "successfulReduceAttempts" : 0,
-            "reducesRunning" : 1,
-            "acls" : [
-               {
-                  "value" : " ",
-                  "name" : "mapreduce.job.acl-modify-job"
-               },
-               {
-                  "value" : " ",
-                  "name" : "mapreduce.job.acl-view-job"
-               }
-            ],
-            "reducesPending" : 0,
-            "user" : "user1",
-            "reducesTotal" : 1,
-            "mapsCompleted" : 1,
-            "startTime" : 1326860720902,
-            "id" : "job_1326821518301_10_10",
-            "successfulMapAttempts" : 1,
-            "runningMapAttempts" : 0,
-            "newReduceAttempts" : 0,
-            "name" : "Sleep job",
-            "mapsPending" : 0,
-            "elapsedTime" : 64432,
-            "reducesCompleted" : 0,
-            "mapProgress" : 100,
-            "diagnostics" : "",
-            "failedMapAttempts" : 0,
-            "killedReduceAttempts" : 0,
-            "mapsTotal" : 1,
-            "uberized" : false,
-            "killedMapAttempts" : 0,
-            "finishTime" : 0
-         }
-      ]
-   }
-}
-+---+
-
-  The user then wishes to get the task details about the job with job id job_1326821518301_10_10 that was listed above. 
-
-+---+
- curl --compressed -H "Accept: application/json" -X GET "http://host.domain.com:8088/proxy/application_1326821518301_0010/ws/v1/mapreduce/jobs/job_1326821518301_10_10/tasks" 
-+---+
-
-  Output:
-
-+---+
-{
-   "tasks" : {
-      "task" : [
-         {
-            "progress" : 100,
-            "elapsedTime" : 5059,
-            "state" : "SUCCEEDED",
-            "startTime" : 1326860725014,
-            "id" : "task_1326821518301_10_10_m_0",
-            "type" : "MAP",
-            "successfulAttempt" : "attempt_1326821518301_10_10_m_0_0",
-            "finishTime" : 1326860730073
-         },
-         {
-            "progress" : 72.104515,
-            "elapsedTime" : 0,
-            "state" : "RUNNING",
-            "startTime" : 1326860732984,
-            "id" : "task_1326821518301_10_10_r_0",
-            "type" : "REDUCE",
-            "successfulAttempt" : "",
-            "finishTime" : 0
-         }
-      ]
-   }
-}
-+---+
-
-  The map task has finished but the reduce task is still running. The user wishes to get the task attempt information for the reduce task task_1326821518301_10_10_r_0; note that the Accept header isn't really required here since JSON is the default output format:
-
-+---+
-  curl --compressed -X GET "http://host.domain.com:8088/proxy/application_1326821518301_0010/ws/v1/mapreduce/jobs/job_1326821518301_10_10/tasks/task_1326821518301_10_10_r_0/attempts"
-+---+
-
-  Output:
-
-+---+
-{
-   "taskAttempts" : {
-      "taskAttempt" : [
-         {
-            "elapsedMergeTime" : 158,
-            "shuffleFinishTime" : 1326860735378,
-            "assignedContainerId" : "container_1326821518301_0010_01_000003",
-            "progress" : 72.104515,
-            "elapsedTime" : 0,
-            "state" : "RUNNING",
-            "elapsedShuffleTime" : 2394,
-            "mergeFinishTime" : 1326860735536,
-            "rack" : "/10.10.10.0",
-            "elapsedReduceTime" : 0,
-            "nodeHttpAddress" : "host.domain.com:8042",
-            "type" : "REDUCE",
-            "startTime" : 1326860732984,
-            "id" : "attempt_1326821518301_10_10_r_0_0",
-            "finishTime" : 0
-         }
-      ]
-   }
-}
-+---+
-
-  The reduce attempt is still running and the user wishes to see the current counter values for that attempt:
-
-+---+
- curl --compressed -H "Accept: application/json"  -X GET "http://host.domain.com:8088/proxy/application_1326821518301_0010/ws/v1/mapreduce/jobs/job_1326821518301_10_10/tasks/task_1326821518301_10_10_r_0/attempts/attempt_1326821518301_10_10_r_0_0/counters" 
-+---+
-
-  Output:
-
-+---+
-{
-   "JobTaskAttemptCounters" : {
-      "taskAttemptCounterGroup" : [
-         {
-            "counterGroupName" : "org.apache.hadoop.mapreduce.FileSystemCounter",
-            "counter" : [
-               {
-                  "value" : 4216,
-                  "name" : "FILE_BYTES_READ"
-               }, 
-               {
-                  "value" : 77151,
-                  "name" : "FILE_BYTES_WRITTEN"
-               }, 
-               {
-                  "value" : 0,
-                  "name" : "FILE_READ_OPS"
-               },
-               {
-                  "value" : 0,
-                  "name" : "FILE_LARGE_READ_OPS"
-               },
-               {
-                  "value" : 0,
-                  "name" : "FILE_WRITE_OPS"
-               },
-               {
-                  "value" : 0,
-                  "name" : "HDFS_BYTES_READ"
-               },
-               {
-                  "value" : 0,
-                  "name" : "HDFS_BYTES_WRITTEN"
-               },
-               {
-                  "value" : 0,
-                  "name" : "HDFS_READ_OPS"
-               },
-               {
-                  "value" : 0,
-                  "name" : "HDFS_LARGE_READ_OPS"
-               },
-               {
-                  "value" : 0,
-                  "name" : "HDFS_WRITE_OPS"
-               }
-            ]  
-         }, 
-         {
-            "counterGroupName" : "org.apache.hadoop.mapreduce.TaskCounter",
-            "counter" : [
-               {
-                  "value" : 0,
-                  "name" : "COMBINE_INPUT_RECORDS"
-               }, 
-               {
-                  "value" : 0,
-                  "name" : "COMBINE_OUTPUT_RECORDS"
-               }, 
-               {  
-                  "value" : 1767,
-                  "name" : "REDUCE_INPUT_GROUPS"
-               },
-               {  
-                  "value" : 25104,
-                  "name" : "REDUCE_SHUFFLE_BYTES"
-               },
-               {
-                  "value" : 1767,
-                  "name" : "REDUCE_INPUT_RECORDS"
-               },
-               {
-                  "value" : 0,
-                  "name" : "REDUCE_OUTPUT_RECORDS"
-               },
-               {
-                  "value" : 0,
-                  "name" : "SPILLED_RECORDS"
-               },
-               {
-                  "value" : 1,
-                  "name" : "SHUFFLED_MAPS"
-               },
-               {
-                  "value" : 0,
-                  "name" : "FAILED_SHUFFLE"
-               },
-               {
-                  "value" : 1,
-                  "name" : "MERGED_MAP_OUTPUTS"
-               },
-               {
-                  "value" : 50,
-                  "name" : "GC_TIME_MILLIS"
-               },
-               {
-                  "value" : 1580,
-                  "name" : "CPU_MILLISECONDS"
-               },
-               {
-                  "value" : 141320192,
-                  "name" : "PHYSICAL_MEMORY_BYTES"
-               },
-              {
-                  "value" : 1118552064,
-                  "name" : "VIRTUAL_MEMORY_BYTES"
-               }, 
-               {  
-                  "value" : 73728000,
-                  "name" : "COMMITTED_HEAP_BYTES"
-               }
-            ]
-         },
-         {  
-            "counterGroupName" : "Shuffle Errors",
-            "counter" : [
-               {  
-                  "value" : 0,
-                  "name" : "BAD_ID"
-               },
-               {  
-                  "value" : 0,
-                  "name" : "CONNECTION"
-               },
-               {  
-                  "value" : 0,
-                  "name" : "IO_ERROR"
-               },
-               {  
-                  "value" : 0,
-                  "name" : "WRONG_LENGTH"
-               },
-               {  
-                  "value" : 0,
-                  "name" : "WRONG_MAP"
-               },
-               {  
-                  "value" : 0,
-                  "name" : "WRONG_REDUCE"
-               }
-            ]
-         },
-         {  
-            "counterGroupName" : "org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter",
-            "counter" : [
-              {  
-                  "value" : 0,
-                  "name" : "BYTES_WRITTEN"
-               }
-            ]
-         }
-      ],
-      "id" : "attempt_1326821518301_10_10_r_0_0"
-   }
-}
-+---+
-
-  The job finishes and the user wishes to get the final job information from the history server for this job.  
-
-+---+
-  curl --compressed -X GET "http://host.domain.com:19888/ws/v1/history/mapreduce/jobs/job_1326821518301_10_10" 
-+---+
-
-  Output:
-
-+---+
-{
-   "job" : {
-      "avgReduceTime" : 1250784,
-      "failedReduceAttempts" : 0,
-      "state" : "SUCCEEDED",
-      "successfulReduceAttempts" : 1,
-      "acls" : [
-         {
-            "value" : " ",
-            "name" : "mapreduce.job.acl-modify-job"
-         },
-         {
-            "value" : " ",
-            "name" : "mapreduce.job.acl-view-job"
-         }
-      ],
-      "user" : "user1",
-      "reducesTotal" : 1,
-      "mapsCompleted" : 1,
-      "startTime" : 1326860720902,
-      "id" : "job_1326821518301_10_10",
-      "avgMapTime" : 5059,
-      "successfulMapAttempts" : 1,
-      "name" : "Sleep job",
-      "avgShuffleTime" : 2394,
-      "reducesCompleted" : 1,
-      "diagnostics" : "",
-      "failedMapAttempts" : 0,
-      "avgMergeTime" : 2552,
-      "killedReduceAttempts" : 0,
-      "mapsTotal" : 1,
-      "queue" : "a1",
-      "uberized" : false,
-      "killedMapAttempts" : 0,
-      "finishTime" : 1326861986164
-   }
-}
-+---+
-
-  The user also gets the final application information from the ResourceManager.
-
-+---+
-  curl --compressed -H "Accept: application/json" -X GET "http://host.domain.com:8088/ws/v1/cluster/apps/application_1326821518301_0010" 
-+---+
-
-  Output:
-
-+---+
-{
-   "app" : {
-      "finishedTime" : 1326861991282,
-      "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326821518301_0010_01_000001",
-      "trackingUI" : "History",
-      "state" : "FINISHED",
-      "user" : "user1",
-      "id" : "application_1326821518301_0010",
-      "clusterId" : 1326821518301,
-      "finalStatus" : "SUCCEEDED",
-      "amHostHttpAddress" : "host.domain.com:8042",
-      "progress" : 100,
-      "name" : "Sleep job",
-      "startedTime" : 1326860715335,
-      "elapsedTime" : 1275947,
-      "diagnostics" : "",
-      "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326821518301_0010/jobhistory/job/job_1326821518301_10_10",
-      "queue" : "a1"
-   }
-}
-+---+


[38/43] hadoop git commit: MAPREDUCE-5657. Fix Javadoc errors caused by incorrect or illegal tags in doc comments. Contributed by Akira AJISAKA.

Posted by zj...@apache.org.
MAPREDUCE-5657. Fix Javadoc errors caused by incorrect or illegal tags in doc comments. Contributed by Akira AJISAKA.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/9ae7f9eb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/9ae7f9eb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/9ae7f9eb

Branch: refs/heads/YARN-2928
Commit: 9ae7f9eb7baeb244e1b95aabc93ad8124870b9a9
Parents: 742f9d9
Author: Tsuyoshi Ozawa <oz...@apache.org>
Authored: Tue Mar 3 18:06:26 2015 +0900
Committer: Tsuyoshi Ozawa <oz...@apache.org>
Committed: Tue Mar 3 18:06:26 2015 +0900
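
Most of the edits below follow a few recurring rewrites: self-closing tags
such as <br/> become plain <br>, stray closing tags such as </p> are dropped,
bare special characters such as & and < are reworded or escaped, and the
unknown tag @returns becomes @return. A hypothetical before/after sketch of
the pattern (not code from this commit):

    class JavadocFixExample {
      // BEFORE (rejected by the strict javadoc checks in JDK 8):
      //   * Runs the map & reduce phases.<br/>
      //   * <p>Polls until done.</p></p>
      //   * @returns the exit code

      // AFTER (the pattern applied throughout this commit):
      /**
       * Runs the map and reduce phases.<br>
       * <p>Polls until done.
       * @return the exit code
       */
      int run() { return 0; }
    }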

----------------------------------------------------------------------
 hadoop-mapreduce-project/CHANGES.txt            |  3 ++
 .../hadoop/mapred/TaskAttemptListenerImpl.java  |  4 +-
 .../hadoop/mapreduce/v2/app/JobEndNotifier.java |  1 -
 .../apache/hadoop/mapreduce/v2/util/MRApps.java |  2 +-
 .../hadoop/filecache/DistributedCache.java      |  2 +-
 .../org/apache/hadoop/mapred/ClusterStatus.java |  4 +-
 .../apache/hadoop/mapred/FileOutputFormat.java  |  2 +-
 .../java/org/apache/hadoop/mapred/IFile.java    |  2 +-
 .../apache/hadoop/mapred/JobACLsManager.java    |  1 -
 .../org/apache/hadoop/mapred/JobClient.java     |  8 ++--
 .../java/org/apache/hadoop/mapred/JobConf.java  | 49 +++++++++-----------
 .../java/org/apache/hadoop/mapred/Mapper.java   |  2 +-
 .../org/apache/hadoop/mapred/QueueManager.java  | 30 ++++++------
 .../org/apache/hadoop/mapred/RecordReader.java  |  2 +-
 .../java/org/apache/hadoop/mapred/Reducer.java  | 14 +++---
 .../hadoop/mapred/TaskUmbilicalProtocol.java    |  1 -
 .../apache/hadoop/mapred/lib/ChainMapper.java   | 40 ++++++++--------
 .../apache/hadoop/mapred/lib/ChainReducer.java  | 44 +++++++++---------
 .../hadoop/mapred/lib/MultipleOutputs.java      | 29 +++++-------
 .../hadoop/mapred/lib/TokenCountMapper.java     |  2 +-
 .../lib/aggregate/ValueAggregatorJob.java       |  2 +-
 .../lib/aggregate/ValueAggregatorReducer.java   |  3 +-
 .../hadoop/mapred/lib/db/DBInputFormat.java     |  4 +-
 .../org/apache/hadoop/mapreduce/Cluster.java    |  1 +
 .../apache/hadoop/mapreduce/ClusterMetrics.java |  6 +--
 .../apache/hadoop/mapreduce/CryptoUtils.java    | 10 ++--
 .../java/org/apache/hadoop/mapreduce/Job.java   |  2 +-
 .../org/apache/hadoop/mapreduce/JobContext.java |  2 -
 .../hadoop/mapreduce/JobSubmissionFiles.java    |  2 +-
 .../org/apache/hadoop/mapreduce/Mapper.java     |  9 ++--
 .../org/apache/hadoop/mapreduce/Reducer.java    | 12 ++---
 .../mapreduce/filecache/DistributedCache.java   |  5 +-
 .../lib/aggregate/ValueAggregatorJob.java       |  2 +-
 .../hadoop/mapreduce/lib/chain/Chain.java       |  4 +-
 .../hadoop/mapreduce/lib/chain/ChainMapper.java | 10 ++--
 .../mapreduce/lib/chain/ChainReducer.java       | 14 +++---
 .../hadoop/mapreduce/lib/db/DBInputFormat.java  |  2 +-
 .../hadoop/mapreduce/lib/db/DBWritable.java     |  2 +-
 .../mapreduce/lib/join/TupleWritable.java       |  2 +-
 .../mapreduce/lib/map/MultithreadedMapper.java  |  6 +--
 .../mapreduce/lib/output/FileOutputFormat.java  |  2 +-
 .../mapreduce/lib/output/MultipleOutputs.java   | 11 ++---
 .../lib/partition/BinaryPartitioner.java        |  2 +-
 .../hadoop/mapreduce/task/JobContextImpl.java   |  2 -
 .../hadoop/mapreduce/RandomTextWriter.java      |  4 +-
 .../apache/hadoop/mapreduce/RandomWriter.java   |  5 +-
 .../hadoop/examples/MultiFileWordCount.java     |  2 +-
 .../apache/hadoop/examples/QuasiMonteCarlo.java |  4 +-
 .../hadoop/examples/RandomTextWriter.java       |  4 +-
 .../apache/hadoop/examples/RandomWriter.java    |  5 +-
 .../apache/hadoop/examples/SecondarySort.java   |  2 +-
 .../org/apache/hadoop/examples/pi/DistBbp.java  |  2 +-
 .../apache/hadoop/examples/pi/math/Modular.java |  2 +-
 .../hadoop/examples/terasort/GenSort.java       |  2 +-
 .../org/apache/hadoop/tools/CopyListing.java    | 14 +++---
 .../java/org/apache/hadoop/tools/DistCp.java    |  4 +-
 .../apache/hadoop/tools/DistCpOptionSwitch.java |  2 +-
 .../org/apache/hadoop/tools/OptionsParser.java  |  2 +-
 .../hadoop/tools/mapred/CopyCommitter.java      |  4 +-
 .../apache/hadoop/tools/mapred/CopyMapper.java  |  5 +-
 .../hadoop/tools/mapred/CopyOutputFormat.java   |  4 +-
 .../tools/mapred/RetriableFileCopyCommand.java  |  3 +-
 .../tools/mapred/UniformSizeInputFormat.java    |  4 +-
 .../tools/mapred/lib/DynamicInputFormat.java    |  4 +-
 .../tools/mapred/lib/DynamicRecordReader.java   | 12 ++---
 .../apache/hadoop/tools/util/DistCpUtils.java   |  2 +-
 .../hadoop/tools/util/RetriableCommand.java     |  2 +-
 .../hadoop/tools/util/ThrottledInputStream.java |  8 ++--
 .../java/org/apache/hadoop/tools/Logalyzer.java |  4 +-
 .../ResourceUsageEmulatorPlugin.java            |  2 +-
 .../fs/swift/http/RestClientBindings.java       |  6 +--
 .../hadoop/fs/swift/http/SwiftRestClient.java   |  6 +--
 .../fs/swift/snative/SwiftNativeFileSystem.java |  6 +--
 .../snative/SwiftNativeFileSystemStore.java     |  6 +--
 .../hadoop/fs/swift/util/SwiftTestUtils.java    |  2 +-
 .../apache/hadoop/tools/rumen/InputDemuxer.java |  4 +-
 .../util/MapReduceJobPropertiesParser.java      |  5 +-
 .../apache/hadoop/tools/rumen/package-info.java |  8 ++--
 78 files changed, 249 insertions(+), 261 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt
index 5fd7d30..5524b14 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -181,6 +181,9 @@ Trunk (Unreleased)
     MAPREDUCE-6234. TestHighRamJob fails due to the change in MAPREDUCE-5785. 
     (Masatake Iwasaki via kasha)
 
+    MAPREDUCE-5657. Fix Javadoc errors caused by incorrect or illegal tags in doc
+    comments. (Akira AJISAKA via ozawa)
+
   BREAKDOWN OF MAPREDUCE-2841 (NATIVE TASK) SUBTASKS
 
     MAPREDUCE-5985. native-task: Fix build on macosx. Contributed by

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
index 5f39edd..c8f2427 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapred/TaskAttemptListenerImpl.java
@@ -174,7 +174,7 @@ public class TaskAttemptListenerImpl extends CompositeService
   /**
    * Child checking whether it can commit.
    * 
-   * <br/>
+   * <br>
    * Commit is a two-phased protocol. First the attempt informs the
    * ApplicationMaster that it is
    * {@link #commitPending(TaskAttemptID, TaskStatus)}. Then it repeatedly polls
@@ -208,7 +208,7 @@ public class TaskAttemptListenerImpl extends CompositeService
    * TaskAttempt is reporting that it is in commit_pending and it is waiting for
    * the commit Response
    * 
-   * <br/>
+   * <br>
    * Commit it a two-phased protocol. First the attempt informs the
    * ApplicationMaster that it is
    * {@link #commitPending(TaskAttemptID, TaskStatus)}. Then it repeatedly polls

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/JobEndNotifier.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/JobEndNotifier.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/JobEndNotifier.java
index 981e6ff..05bb40b 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/JobEndNotifier.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/JobEndNotifier.java
@@ -44,7 +44,6 @@ import org.mortbay.log.Log;
  * proxy if needed</li><li>
  * The URL may contain sentinels which will be replaced by jobId and jobStatus 
  * (eg. SUCCEEDED/KILLED/FAILED) </li> </ul>
- * </p>
  */
 public class JobEndNotifier implements Configurable {
   private static final String JOB_ID = "$jobId";

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
index 1520fc8..e4b43b5 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
@@ -345,7 +345,7 @@ public class MRApps extends Apps {
    * {@link MRJobConfig#MAPREDUCE_JOB_CLASSLOADER} is set to true, and
    * the APP_CLASSPATH environment variable is set.
    * @param conf
-   * @returns the created job classloader, or null if the job classloader is not
+   * @return the created job classloader, or null if the job classloader is not
    * enabled or the APP_CLASSPATH environment variable is not set
    * @throws IOException
    */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/DistributedCache.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/DistributedCache.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/DistributedCache.java
index 370d67d..0783eb5 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/DistributedCache.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/filecache/DistributedCache.java
@@ -113,7 +113,7 @@ import org.apache.hadoop.mapreduce.MRJobConfig;
  *       }
  *     }
  *     
- * </pre></blockquote></p>
+ * </pre></blockquote>
  *
  * It is also very common to use the DistributedCache by using
  * {@link org.apache.hadoop.util.GenericOptionsParser}.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ClusterStatus.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ClusterStatus.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ClusterStatus.java
index 8b56787..904897b 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ClusterStatus.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/ClusterStatus.java
@@ -48,7 +48,7 @@ import org.apache.hadoop.util.StringInterner;
  *   Task capacity of the cluster. 
  *   </li>
  *   <li>
- *   The number of currently running map & reduce tasks.
+ *   The number of currently running map and reduce tasks.
  *   </li>
  *   <li>
  *   State of the <code>JobTracker</code>.
@@ -56,7 +56,7 @@ import org.apache.hadoop.util.StringInterner;
  *   <li>
  *   Details regarding black listed trackers.
  *   </li>
- * </ol></p>
+ * </ol>
  * 
  * <p>Clients can query for the latest <code>ClusterStatus</code>, via 
  * {@link JobClient#getClusterStatus()}.</p>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java
index 721c8a8..821c1e8 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/FileOutputFormat.java
@@ -179,7 +179,7 @@ public abstract class FileOutputFormat<K, V> implements OutputFormat<K, V> {
    *  Get the {@link Path} to the task's temporary output directory 
    *  for the map-reduce job
    *  
-   * <h4 id="SideEffectFiles">Tasks' Side-Effect Files</h4>
+   * <b id="SideEffectFiles">Tasks' Side-Effect Files</b>
    * 
    * <p><i>Note:</i> The following is valid only if the {@link OutputCommitter}
    *  is {@link FileOutputCommitter}. If <code>OutputCommitter</code> is not 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
index 30ebd6b..32e07e7 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/IFile.java
@@ -47,7 +47,7 @@ import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 
 /**
- * <code>IFile</code> is the simple <key-len, value-len, key, value> format
+ * <code>IFile</code> is the simple &lt;key-len, value-len, key, value&gt; format
  * for the intermediate map-outputs in Map-Reduce.
  *
  * There is a <code>Writer</code> to write out map-outputs in this format and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java
index 37633ab..0dbbe5a 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobACLsManager.java
@@ -101,7 +101,6 @@ public class JobACLsManager {
    * @param jobOperation
    * @param jobOwner
    * @param jobACL
-   * @throws AccessControlException
    */
   public boolean checkAccess(UserGroupInformation callerUGI,
       JobACL jobOperation, String jobOwner, AccessControlList jobACL) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java
index 89a966e..e91fbfe 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobClient.java
@@ -79,7 +79,7 @@ import org.apache.hadoop.util.ToolRunner;
  *   Submitting the job to the cluster and optionally monitoring
  *   it's status.
  *   </li>
- * </ol></p>
+ * </ol>
  *  
  * Normally the user creates the application, describes various facets of the
  * job via {@link JobConf} and then uses the <code>JobClient</code> to submit 
@@ -101,9 +101,9 @@ import org.apache.hadoop.util.ToolRunner;
  *
  *     // Submit the job, then poll for progress until the job is complete
  *     JobClient.runJob(job);
- * </pre></blockquote></p>
+ * </pre></blockquote>
  * 
- * <h4 id="JobControl">Job Control</h4>
+ * <b id="JobControl">Job Control</b>
  * 
  * <p>At times clients would chain map-reduce jobs to accomplish complex tasks 
  * which cannot be done via a single map-reduce job. This is fairly easy since 
@@ -127,7 +127,7 @@ import org.apache.hadoop.util.ToolRunner;
  *   {@link JobConf#setJobEndNotificationURI(String)} : setup a notification
  *   on job-completion, thus avoiding polling.
  *   </li>
- * </ol></p>
+ * </ol>
  * 
  * @see JobConf
  * @see ClusterStatus

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
index 315c829..c388bda 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobConf.java
@@ -74,7 +74,7 @@ import org.apache.log4j.Level;
  *   more complex for the user to control finely
  *   (e.g. {@link #setNumMapTasks(int)}).
  *   </li>
- * </ol></p>
+ * </ol>
  * 
  * <p><code>JobConf</code> typically specifies the {@link Mapper}, combiner 
  * (if any), {@link Partitioner}, {@link Reducer}, {@link InputFormat} and 
@@ -105,7 +105,7 @@ import org.apache.log4j.Level;
  *     
  *     job.setInputFormat(SequenceFileInputFormat.class);
  *     job.setOutputFormat(SequenceFileOutputFormat.class);
- * </pre></blockquote></p>
+ * </pre></blockquote>
  * 
  * @see JobClient
  * @see ClusterStatus
@@ -486,7 +486,7 @@ public class JobConf extends Configuration {
 
   /** A new map/reduce configuration where the behavior of reading from the
    * default resources can be turned off.
-   * <p/>
+   * <p>
    * If the parameter {@code loadDefaults} is false, the new instance
    * will not load resources from the default files.
    *
@@ -993,19 +993,19 @@ public class JobConf extends Configuration {
   /**
    * Set the user defined {@link RawComparator} comparator for
    * grouping keys in the input to the combiner.
-   * <p/>
+   *
    * <p>This comparator should be provided if the equivalence rules for keys
    * for sorting the intermediates are different from those for grouping keys
    * before each call to
    * {@link Reducer#reduce(Object, java.util.Iterator, OutputCollector, Reporter)}.</p>
-   * <p/>
+   *
    * <p>For key-value pairs (K1,V1) and (K2,V2), the values (V1, V2) are passed
    * in a single call to the reduce function if K1 and K2 compare as equal.</p>
-   * <p/>
+   *
    * <p>Since {@link #setOutputKeyComparatorClass(Class)} can be used to control
    * how keys are sorted, this can be used in conjunction to simulate
    * <i>secondary sort on values</i>.</p>
-   * <p/>
+   *
    * <p><i>Note</i>: This is not a guarantee of the combiner sort being
    * <i>stable</i> in any sense. (In any case, with the order of available
    * map-outputs to the combiner being non-deterministic, it wouldn't make
@@ -1210,7 +1210,7 @@ public class JobConf extends Configuration {
    *   <li> be side-effect free</li>
    *   <li> have the same input and output key types and the same input and 
    *        output value types</li>
-   * </ul></p>
+   * </ul>
    * 
    * <p>Typically the combiner is same as the <code>Reducer</code> for the  
    * job i.e. {@link #setReducerClass(Class)}.</p>
@@ -1309,7 +1309,7 @@ public class JobConf extends Configuration {
    * A custom {@link InputFormat} is typically used to accurately control 
    * the number of map tasks for the job.</p>
    * 
-   * <h4 id="NoOfMaps">How many maps?</h4>
+   * <b id="NoOfMaps">How many maps?</b>
    * 
    * <p>The number of maps is usually driven by the total size of the inputs 
    * i.e. total number of blocks of the input files.</p>
@@ -1350,7 +1350,7 @@ public class JobConf extends Configuration {
   /**
    * Set the requisite number of reduce tasks for this job.
    * 
-   * <h4 id="NoOfReduces">How many reduces?</h4>
+   * <b id="NoOfReduces">How many reduces?</b>
    * 
    * <p>The right number of reduces seems to be <code>0.95</code> or 
    * <code>1.75</code> multiplied by (&lt;<i>no. of nodes</i>&gt; * 
@@ -1370,7 +1370,7 @@ public class JobConf extends Configuration {
    * reserve a few reduce slots in the framework for speculative-tasks, failures
    * etc.</p> 
    *
-   * <h4 id="ReducerNone">Reducer NONE</h4>
+   * <b id="ReducerNone">Reducer NONE</b>
    * 
    * <p>It is legal to set the number of reduce-tasks to <code>zero</code>.</p>
    * 
@@ -1693,9 +1693,9 @@ public class JobConf extends Configuration {
    * given task's stdout, stderr, syslog, jobconf files as arguments.</p>
    * 
    * <p>The debug command, run on the node where the map failed, is:</p>
-   * <p><pre><blockquote> 
+   * <p><blockquote><pre>
    * $script $stdout $stderr $syslog $jobconf.
-   * </blockquote></pre></p>
+   * </pre></blockquote>
    * 
    * <p> The script file is distributed through {@link DistributedCache} 
    * APIs. The script needs to be symlinked. </p>
@@ -1705,7 +1705,7 @@ public class JobConf extends Configuration {
    * job.setMapDebugScript("./myscript");
    * DistributedCache.createSymlink(job);
    * DistributedCache.addCacheFile("/debug/scripts/myscript#myscript");
-   * </pre></blockquote></p>
+   * </pre></blockquote>
    * 
    * @param mDbgScript the script name
    */
@@ -1730,9 +1730,9 @@ public class JobConf extends Configuration {
    * is given task's stdout, stderr, syslog, jobconf files as arguments.</p>
    * 
    * <p>The debug command, run on the node where the map failed, is:</p>
-   * <p><pre><blockquote> 
+   * <p><blockquote><pre>
    * $script $stdout $stderr $syslog $jobconf.
-   * </blockquote></pre></p>
+   * </pre></blockquote>
    * 
    * <p> The script file is distributed through {@link DistributedCache} 
    * APIs. The script file needs to be symlinked </p>
@@ -1742,7 +1742,7 @@ public class JobConf extends Configuration {
    * job.setReduceDebugScript("./myscript");
    * DistributedCache.createSymlink(job);
    * DistributedCache.addCacheFile("/debug/scripts/myscript#myscript");
-   * </pre></blockquote></p>
+   * </pre></blockquote>
    * 
    * @param rDbgScript the script name
    */
@@ -1785,8 +1785,6 @@ public class JobConf extends Configuration {
    * 
    * @param uri the job end notification uri
    * @see JobStatus
-   * @see <a href="{@docRoot}/org/apache/hadoop/mapred/JobClient.html#
-   *       JobCompletionAndChaining">Job Completion and Chaining</a>
    */
   public void setJobEndNotificationURI(String uri) {
     set(JobContext.MR_JOB_END_NOTIFICATION_URL, uri);
@@ -1816,7 +1814,7 @@ public class JobConf extends Configuration {
    * 
    * If a value is specified in the configuration, it is returned.
    * Else, it returns {@link JobContext#DEFAULT_MAP_MEMORY_MB}.
-   * <p/>
+   * <p>
    * For backward compatibility, if the job configuration sets the
    * key {@link #MAPRED_TASK_MAXVMEM_PROPERTY} to a value different
    * from {@link #DISABLED_MEMORY_LIMIT}, that value will be used
@@ -1842,7 +1840,7 @@ public class JobConf extends Configuration {
    * 
    * If a value is specified in the configuration, it is returned.
    * Else, it returns {@link JobContext#DEFAULT_REDUCE_MEMORY_MB}.
-   * <p/>
+   * <p>
    * For backward compatibility, if the job configuration sets the
    * key {@link #MAPRED_TASK_MAXVMEM_PROPERTY} to a value different
    * from {@link #DISABLED_MEMORY_LIMIT}, that value will be used
@@ -1915,7 +1913,6 @@ public class JobConf extends Configuration {
    * 
    * @param my_class the class to find.
    * @return a jar file that contains the class, or null.
-   * @throws IOException
    */
   public static String findContainingJar(Class my_class) {
     return ClassUtil.findContainingJar(my_class);
@@ -1924,10 +1921,10 @@ public class JobConf extends Configuration {
   /**
    * Get the memory required to run a task of this job, in bytes. See
    * {@link #MAPRED_TASK_MAXVMEM_PROPERTY}
-   * <p/>
+   * <p>
    * This method is deprecated. Now, different memory limits can be
    * set for map and reduce tasks of a job, in MB. 
-   * <p/>
+   * <p>
    * For backward compatibility, if the job configuration sets the
    * key {@link #MAPRED_TASK_MAXVMEM_PROPERTY}, that value is returned. 
    * Otherwise, this method will return the larger of the values returned by 
@@ -1953,7 +1950,7 @@ public class JobConf extends Configuration {
   /**
    * Set the maximum amount of memory any task of this job can use. See
    * {@link #MAPRED_TASK_MAXVMEM_PROPERTY}
-   * <p/>
+   * <p>
    * mapred.task.maxvmem is split into
    * mapreduce.map.memory.mb
    * and mapreduce.map.memory.mb,mapred
@@ -2073,7 +2070,7 @@ public class JobConf extends Configuration {
 
   /**
    * Parse the Maximum heap size from the java opts as specified by the -Xmx option
-   * Format: -Xmx<size>[g|G|m|M|k|K]
+   * Format: -Xmx&lt;size&gt;[g|G|m|M|k|K]
    * @param javaOpts String to parse to read maximum heap size
    * @return Maximum heap size in MB or -1 if not specified
    */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Mapper.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Mapper.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Mapper.java
index eaa6c2b..ac2c96d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Mapper.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Mapper.java
@@ -117,7 +117,7 @@ import org.apache.hadoop.io.compress.CompressionCodec;
  *         output.collect(key, val);
  *       }
  *     }
- * </pre></blockquote></p>
+ * </pre></blockquote>
  *
  * <p>Applications may write a custom {@link MapRunnable} to exert greater
  * control on map processing e.g. multi-threaded <code>Mapper</code>s etc.</p>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
index 39fae2a..794c55d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueManager.java
@@ -46,20 +46,20 @@ import java.net.URL;
 /**
  * Class that exposes information about queues maintained by the Hadoop
  * Map/Reduce framework.
- * <p/>
+ * <p>
  * The Map/Reduce framework can be configured with one or more queues,
  * depending on the scheduler it is configured with. While some
  * schedulers work only with one queue, some schedulers support multiple
  * queues. Some schedulers also support the notion of queues within
  * queues - a feature called hierarchical queues.
- * <p/>
+ * <p>
  * Queue names are unique, and used as a key to lookup queues. Hierarchical
  * queues are named by a 'fully qualified name' such as q1:q2:q3, where
  * q2 is a child queue of q1 and q3 is a child queue of q2.
- * <p/>
+ * <p>
  * Leaf level queues are queues that contain no queues within them. Jobs
  * can be submitted only to leaf level queues.
- * <p/>
+ * <p>
  * Queues can be configured with various properties. Some of these
  * properties are common to all schedulers, and those are handled by this
  * class. Schedulers might also associate several custom properties with
@@ -69,11 +69,11 @@ import java.net.URL;
  * provided by the framework, but define their own mechanisms. In such cases,
  * it is likely that the name of the queue will be used to relate the
  * common properties of a queue with scheduler specific properties.
- * <p/>
+ * <p>
  * Information related to a queue, such as its name, properties, scheduling
  * information and children are exposed by this class via a serializable
  * class called {@link JobQueueInfo}.
- * <p/>
+ * <p>
  * Queues are configured in the configuration file mapred-queues.xml.
  * To support backwards compatibility, queues can also be configured
  * in mapred-site.xml. However, when configured in the latter, there is
@@ -102,7 +102,7 @@ public class QueueManager {
   /**
    * Factory method to create an appropriate instance of a queue
    * configuration parser.
-   * <p/>
+   * <p>
    * Returns a parser that can parse either the deprecated property
    * style queue configuration in mapred-site.xml, or one that can
    * parse hierarchical queues in mapred-queues.xml. First preference
@@ -157,7 +157,7 @@ public class QueueManager {
   /**
    * Construct a new QueueManager using configuration specified in the passed
    * in {@link org.apache.hadoop.conf.Configuration} object.
-   * <p/>
+   * <p>
    * This instance supports queue configuration specified in mapred-site.xml,
    * but without support for hierarchical queues. If no queue configuration
    * is found in mapred-site.xml, it will then look for site configuration
@@ -173,7 +173,7 @@ public class QueueManager {
   /**
    * Create an instance that supports hierarchical queues, defined in
    * the passed in configuration file.
-   * <p/>
+   * <p>
   * This is mainly used for testing purposes and should not be called from
    * production code.
    *
@@ -208,7 +208,7 @@ public class QueueManager {
   /**
    * Return the set of leaf level queues configured in the system to
    * which jobs are submitted.
-   * <p/>
+   * <p>
    * The number of queues configured should be dependent on the Scheduler
    * configured. Note that some schedulers work with only one queue, whereas
    * others can support multiple queues.
@@ -222,7 +222,7 @@ public class QueueManager {
   /**
    * Return true if the given user is part of the ACL for the given
    * {@link QueueACL} name for the given queue.
-   * <p/>
+   * <p>
    * An operation is allowed if all users are provided access for this
    * operation, or if either the user or any of the groups specified is
    * provided access.
@@ -283,7 +283,7 @@ public class QueueManager {
   /**
    * Set a generic Object that represents scheduling information relevant
    * to a queue.
-   * <p/>
+   * <p>
    * A string representation of this Object will be used by the framework
    * to display in user facing applications like the JobTracker web UI and
    * the hadoop CLI.
@@ -323,7 +323,7 @@ public class QueueManager {
 
   /**
    * Refresh acls, state and scheduler properties for the configured queues.
-   * <p/>
+   * <p>
    * This method reloads configuration related to queues, but does not
    * support changes to the list of queues or hierarchy. The expected usage
    * is that an administrator can modify the queue configuration file and
@@ -431,7 +431,7 @@ public class QueueManager {
 
   /**
    * JobQueueInfo for all the queues.
-   * <p/>
+   * <p>
   * Contribs can use this data structure either to create a hierarchy or to
   * traverse it.
    * They can also use this to refresh properties in case of refreshQueues
@@ -450,7 +450,7 @@ public class QueueManager {
 
   /**
    * Generates the array of QueueAclsInfo object.
-   * <p/>
+   * <p>
   * The array consists of only those queues for which the user has acls.
    *
    * @return QueueAclsInfo[]

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RecordReader.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RecordReader.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RecordReader.java
index 0c95a14..6e2c89f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RecordReader.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/RecordReader.java
@@ -29,7 +29,7 @@ import org.apache.hadoop.classification.InterfaceStability;
  *   
  * <p><code>RecordReader</code>, typically, converts the byte-oriented view of 
  * the input, provided by the <code>InputSplit</code>, and presents a 
- * record-oriented view for the {@link Mapper} & {@link Reducer} tasks for 
+ * record-oriented view for the {@link Mapper} and {@link Reducer} tasks for
  * processing. It thus assumes the responsibility of processing record 
  * boundaries and presenting the tasks with keys and values.</p>
  * 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Reducer.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Reducer.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Reducer.java
index 3fefa4b..962e195 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Reducer.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Reducer.java
@@ -42,7 +42,7 @@ import org.apache.hadoop.io.Closeable;
  * <ol>
  *   <li>
  *   
- *   <h4 id="Shuffle">Shuffle</h4>
+ *   <b id="Shuffle">Shuffle</b>
  *   
  *   <p><code>Reducer</code> is input the grouped output of a {@link Mapper}.
 *   In this phase the framework, for each <code>Reducer</code>, fetches the
@@ -51,7 +51,7 @@ import org.apache.hadoop.io.Closeable;
  *   </li>
  *   
  *   <li>
- *   <h4 id="Sort">Sort</h4>
+ *   <b id="Sort">Sort</b>
  *   
  *   <p>The framework groups <code>Reducer</code> inputs by <code>key</code>s 
  *   (since different <code>Mapper</code>s may have output the same key) in this
@@ -60,7 +60,7 @@ import org.apache.hadoop.io.Closeable;
  *   <p>The shuffle and sort phases occur simultaneously i.e. while outputs are
  *   being fetched they are merged.</p>
  *      
- *   <h5 id="SecondarySort">SecondarySort</h5>
+ *   <b id="SecondarySort">SecondarySort</b>
  *   
  *   <p>If equivalence rules for keys while grouping the intermediates are 
  *   different from those for grouping keys before reduction, then one may 
@@ -86,11 +86,11 @@ import org.apache.hadoop.io.Closeable;
  *   </li>
  *   
  *   <li>   
- *   <h4 id="Reduce">Reduce</h4>
+ *   <b id="Reduce">Reduce</b>
  *   
  *   <p>In this phase the 
  *   {@link #reduce(Object, Iterator, OutputCollector, Reporter)}
- *   method is called for each <code>&lt;key, (list of values)></code> pair in
+ *   method is called for each <code>&lt;key, (list of values)&gt;</code> pair in
  *   the grouped inputs.</p>
  *   <p>The output of the reduce task is typically written to the 
  *   {@link FileSystem} via 
@@ -156,7 +156,7 @@ import org.apache.hadoop.io.Closeable;
  *         }
  *       }
  *     }
- * </pre></blockquote></p>
+ * </pre></blockquote>
  * 
  * @see Mapper
  * @see Partitioner
@@ -171,7 +171,7 @@ public interface Reducer<K2, V2, K3, V3> extends JobConfigurable, Closeable {
    * <i>Reduces</i> values for a given key.  
    * 
    * <p>The framework calls this method for each 
-   * <code>&lt;key, (list of values)></code> pair in the grouped inputs.
+   * <code>&lt;key, (list of values)&gt;</code> pair in the grouped inputs.
    * Output values must be of the same type as input values.  Input keys must 
    * not be altered. The framework will <b>reuse</b> the key and value objects
    * that are passed into the reduce, therefore the application should clone

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskUmbilicalProtocol.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskUmbilicalProtocol.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskUmbilicalProtocol.java
index 5df02c7..c3678d6 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskUmbilicalProtocol.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskUmbilicalProtocol.java
@@ -178,7 +178,6 @@ public interface TaskUmbilicalProtocol extends VersionedProtocol {
    *
    * @param taskID task's id
    * @return the most recent checkpoint (if any) for this task
-   * @throws IOException
    */
   TaskCheckpointID getCheckpointID(TaskID taskID);
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainMapper.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainMapper.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainMapper.java
index 14f040a..723a234 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainMapper.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainMapper.java
@@ -29,61 +29,61 @@ import java.io.IOException;
 /**
  * The ChainMapper class allows the use of multiple Mapper classes within a
  * single Map task.
- * <p/>
+ * <p>
  * The Mapper classes are invoked in a chained (or piped) fashion, the output of
  * the first becomes the input of the second, and so on until the last Mapper,
  * the output of the last Mapper will be written to the task's output.
- * <p/>
+ * <p>
  * The key functionality of this feature is that the Mappers in the chain do not
  * need to be aware that they are executed in a chain. This enables having
  * reusable specialized Mappers that can be combined to perform composite
  * operations within a single task.
- * <p/>
+ * <p>
  * Special care has to be taken when creating chains that the key/values output
  * by a Mapper are valid for the following Mapper in the chain. It is assumed
  * all Mappers and the Reduce in the chain use matching output and input key and
  * value classes as no conversion is done by the chaining code.
- * <p/>
+ * <p>
  * Using the ChainMapper and the ChainReducer classes it is possible to compose
  * Map/Reduce jobs that look like <code>[MAP+ / REDUCE MAP*]</code>. An
  * immediate benefit of this pattern is a dramatic reduction in disk IO.
- * <p/>
+ * <p>
  * IMPORTANT: There is no need to specify the output key/value classes for the
  * ChainMapper, this is done by the addMapper for the last mapper in the chain.
- * <p/>
+ * <p>
  * ChainMapper usage pattern:
- * <p/>
+ * <p>
  * <pre>
  * ...
  * conf.setJobName("chain");
  * conf.setInputFormat(TextInputFormat.class);
  * conf.setOutputFormat(TextOutputFormat.class);
- * <p/>
+ *
  * JobConf mapAConf = new JobConf(false);
  * ...
  * ChainMapper.addMapper(conf, AMap.class, LongWritable.class, Text.class,
  *   Text.class, Text.class, true, mapAConf);
- * <p/>
+ *
  * JobConf mapBConf = new JobConf(false);
  * ...
  * ChainMapper.addMapper(conf, BMap.class, Text.class, Text.class,
  *   LongWritable.class, Text.class, false, mapBConf);
- * <p/>
+ *
  * JobConf reduceConf = new JobConf(false);
  * ...
  * ChainReducer.setReducer(conf, XReduce.class, LongWritable.class, Text.class,
  *   Text.class, Text.class, true, reduceConf);
- * <p/>
+ *
  * ChainReducer.addMapper(conf, CMap.class, Text.class, Text.class,
  *   LongWritable.class, Text.class, false, null);
- * <p/>
+ *
  * ChainReducer.addMapper(conf, DMap.class, LongWritable.class, Text.class,
  *   LongWritable.class, LongWritable.class, true, null);
- * <p/>
+ *
  * FileInputFormat.setInputPaths(conf, inDir);
  * FileOutputFormat.setOutputPath(conf, outDir);
  * ...
- * <p/>
+ *
  * JobClient jc = new JobClient(conf);
  * RunningJob job = jc.submitJob(conf);
  * ...
@@ -95,21 +95,21 @@ public class ChainMapper implements Mapper {
 
   /**
    * Adds a Mapper class to the chain job's JobConf.
-   * <p/>
+   * <p>
    * It has to be specified how key and values are passed from one element of
    * the chain to the next, by value or by reference. If a Mapper leverages the
    * assumed semantics that the key and values are not modified by the collector
    * 'by value' must be used. If the Mapper does not expect this semantics, as
    * an optimization to avoid serialization and deserialization 'by reference'
    * can be used.
-   * <p/>
+   * <p>
    * For the added Mapper the configuration given for it,
   * <code>mapperConf</code>, has precedence over the job's JobConf. This
    * precedence is in effect when the task is running.
-   * <p/>
+   * <p>
    * IMPORTANT: There is no need to specify the output key/value classes for the
    * ChainMapper, this is done by the addMapper for the last mapper in the chain
-   * <p/>
+   * <p>
    *
    * @param job              job's JobConf to add the Mapper class.
    * @param klass            the Mapper class to add.
@@ -148,7 +148,7 @@ public class ChainMapper implements Mapper {
 
   /**
    * Configures the ChainMapper and all the Mappers in the chain.
-   * <p/>
+   * <p>
   * If this method is overridden, <code>super.configure(...)</code> should be
   * invoked at the beginning of the overriding method.
    */
@@ -171,7 +171,7 @@ public class ChainMapper implements Mapper {
 
   /**
   * Closes the ChainMapper and all the Mappers in the chain.
-   * <p/>
+   * <p>
   * If this method is overridden, <code>super.close()</code> should be
   * invoked at the end of the overriding method.
    */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainReducer.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainReducer.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainReducer.java
index 641d82c..6f5b7cd 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainReducer.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/ChainReducer.java
@@ -27,63 +27,63 @@ import java.util.Iterator;
 /**
  * The ChainReducer class allows chaining multiple Mapper classes after a
  * Reducer within the Reducer task.
- * <p/>
+ * <p>
  * For each record output by the Reducer, the Mapper classes are invoked in a
  * chained (or piped) fashion, the output of the first becomes the input of the
  * second, and so on until the last Mapper, the output of the last Mapper will
  * be written to the task's output.
- * <p/>
+ * <p>
  * The key functionality of this feature is that the Mappers in the chain do not
  * need to be aware that they are executed after the Reducer or in a chain.
  * This enables having reusable specialized Mappers that can be combined to
  * perform composite operations within a single task.
- * <p/>
+ * <p>
  * Special care has to be taken when creating chains that the key/values output
  * by a Mapper are valid for the following Mapper in the chain. It is assumed
  * all Mappers and the Reduce in the chain use matching output and input key and
  * value classes as no conversion is done by the chaining code.
- * <p/>
+ * <p>
  * Using the ChainMapper and the ChainReducer classes it is possible to compose
  * Map/Reduce jobs that look like <code>[MAP+ / REDUCE MAP*]</code>. An
  * immediate benefit of this pattern is a dramatic reduction in disk IO.
- * <p/>
+ * <p>
  * IMPORTANT: There is no need to specify the output key/value classes for the
  * ChainReducer, this is done by the setReducer or the addMapper for the last
  * element in the chain.
- * <p/>
+ * <p>
  * ChainReducer usage pattern:
- * <p/>
+ * <p>
  * <pre>
  * ...
  * conf.setJobName("chain");
  * conf.setInputFormat(TextInputFormat.class);
  * conf.setOutputFormat(TextOutputFormat.class);
- * <p/>
+ *
  * JobConf mapAConf = new JobConf(false);
  * ...
  * ChainMapper.addMapper(conf, AMap.class, LongWritable.class, Text.class,
  *   Text.class, Text.class, true, mapAConf);
- * <p/>
+ *
  * JobConf mapBConf = new JobConf(false);
  * ...
  * ChainMapper.addMapper(conf, BMap.class, Text.class, Text.class,
  *   LongWritable.class, Text.class, false, mapBConf);
- * <p/>
+ *
  * JobConf reduceConf = new JobConf(false);
  * ...
  * ChainReducer.setReducer(conf, XReduce.class, LongWritable.class, Text.class,
  *   Text.class, Text.class, true, reduceConf);
- * <p/>
+ *
  * ChainReducer.addMapper(conf, CMap.class, Text.class, Text.class,
  *   LongWritable.class, Text.class, false, null);
- * <p/>
+ *
  * ChainReducer.addMapper(conf, DMap.class, LongWritable.class, Text.class,
  *   LongWritable.class, LongWritable.class, true, null);
- * <p/>
+ *
  * FileInputFormat.setInputPaths(conf, inDir);
  * FileOutputFormat.setOutputPath(conf, outDir);
  * ...
- * <p/>
+ *
  * JobClient jc = new JobClient(conf);
  * RunningJob job = jc.submitJob(conf);
  * ...
@@ -95,18 +95,18 @@ public class ChainReducer implements Reducer {
 
   /**
    * Sets the Reducer class to the chain job's JobConf.
-   * <p/>
+   * <p>
    * It has to be specified how key and values are passed from one element of
    * the chain to the next, by value or by reference. If a Reducer leverages the
   * assumed semantics that the key and values are not modified by the collector,
   * 'by value' must be used. If the Reducer does not expect these semantics, as
    * an optimization to avoid serialization and deserialization 'by reference'
    * can be used.
-   * <p/>
+   * <p>
    * For the added Reducer the configuration given for it,
   * <code>reducerConf</code>, has precedence over the job's JobConf. This
    * precedence is in effect when the task is running.
-   * <p/>
+   * <p>
    * IMPORTANT: There is no need to specify the output key/value classes for the
    * ChainReducer, this is done by the setReducer or the addMapper for the last
    * element in the chain.
@@ -139,18 +139,18 @@ public class ChainReducer implements Reducer {
 
   /**
    * Adds a Mapper class to the chain job's JobConf.
-   * <p/>
+   * <p>
    * It has to be specified how key and values are passed from one element of
    * the chain to the next, by value or by reference. If a Mapper leverages the
   * assumed semantics that the key and values are not modified by the collector,
   * 'by value' must be used. If the Mapper does not expect these semantics, as
    * an optimization to avoid serialization and deserialization 'by reference'
    * can be used.
-   * <p/>
+   * <p>
    * For the added Mapper the configuration given for it,
   * <code>mapperConf</code>, has precedence over the job's JobConf. This
    * precedence is in effect when the task is running.
-   * <p/>
+   * <p>
    * IMPORTANT: There is no need to specify the output key/value classes for the
    * ChainMapper, this is done by the addMapper for the last mapper in the chain
    * .
@@ -191,7 +191,7 @@ public class ChainReducer implements Reducer {
 
   /**
    * Configures the ChainReducer, the Reducer and all the Mappers in the chain.
-   * <p/>
+   * <p>
   * If this method is overridden, <code>super.configure(...)</code> should be
   * invoked at the beginning of the overriding method.
    */
@@ -215,7 +215,7 @@ public class ChainReducer implements Reducer {
 
   /**
   * Closes the ChainReducer, the Reducer and all the Mappers in the chain.
-   * <p/>
+   * <p>
    * If this method is overriden <code>super.close()</code> should be
    * invoked at the end of the overwriter method.
    */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java
index 39e80f9..f0f3652 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/MultipleOutputs.java
@@ -31,29 +31,29 @@ import java.util.*;
  * than the job default output via the <code>OutputCollector</code> passed to
  * the <code>map()</code> and <code>reduce()</code> methods of the
  * <code>Mapper</code> and <code>Reducer</code> implementations.
- * <p/>
+ * <p>
  * Each additional output, or named output, may be configured with its own
  * <code>OutputFormat</code>, with its own key class and with its own value
  * class.
- * <p/>
+ * <p>
  * A named output can be a single file or a multi file. The latter is referred
  * to as a multi named output.
- * <p/>
+ * <p>
  * A multi named output is an unbounded set of files all sharing the same
  * <code>OutputFormat</code>, key class and value class configuration.
- * <p/>
+ * <p>
  * When named outputs are used within a <code>Mapper</code> implementation,
  * key/values written to a named output are not part of the reduce phase, only
  * key/values written to the job <code>OutputCollector</code> are part of the
  * reduce phase.
- * <p/>
+ * <p>
  * MultipleOutputs supports counters, by default they are disabled. The counters
  * group is the {@link MultipleOutputs} class name.
  * </p>
  * The names of the counters are the same as the named outputs. For multi
  * named outputs the name of the counter is the concatenation of the named
  * output, an underscore '_' and the multiname.
- * <p/>
+ * <p>
  * Job configuration usage pattern is:
  * <pre>
  *
@@ -82,7 +82,7 @@ import java.util.*;
  *
  * ...
  * </pre>
- * <p/>
+ * <p>
  * Job configuration usage pattern is:
  * <pre>
  *
@@ -271,7 +271,6 @@ public class MultipleOutputs {
 
   /**
    * Adds a named output for the job.
-   * <p/>
    *
    * @param conf              job conf to add the named output
    * @param namedOutput       named output name, it has to be a word, letters
@@ -291,7 +290,6 @@ public class MultipleOutputs {
 
   /**
    * Adds a multi named output for the job.
-   * <p/>
    *
    * @param conf              job conf to add the named output
    * @param namedOutput       named output name, it has to be a word, letters
@@ -311,7 +309,6 @@ public class MultipleOutputs {
 
   /**
    * Adds a named output for the job.
-   * <p/>
    *
    * @param conf              job conf to add the named output
    * @param namedOutput       named output name, it has to be a word, letters
@@ -339,9 +336,9 @@ public class MultipleOutputs {
 
   /**
    * Enables or disables counters for the named outputs.
-   * <p/>
+   * <p>
    * By default these counters are disabled.
-   * <p/>
+   * <p>
   * MultipleOutputs supports counters, by default they are disabled.
    * The counters group is the {@link MultipleOutputs} class name.
    * </p>
@@ -358,9 +355,9 @@ public class MultipleOutputs {
 
   /**
    * Returns if the counters for the named outputs are enabled or not.
-   * <p/>
+   * <p>
    * By default these counters are disabled.
-   * <p/>
+   * <p>
   * MultipleOutputs supports counters, by default they are disabled.
    * The counters group is the {@link MultipleOutputs} class name.
    * </p>
@@ -465,7 +462,6 @@ public class MultipleOutputs {
 
   /**
    * Gets the output collector for a named output.
-   * <p/>
    *
    * @param namedOutput the named output name
    * @param reporter    the reporter
@@ -480,7 +476,6 @@ public class MultipleOutputs {
 
   /**
    * Gets the output collector for a multi named output.
-   * <p/>
    *
    * @param namedOutput the named output name
    * @param multiName   the multi name part
@@ -525,7 +520,7 @@ public class MultipleOutputs {
 
   /**
    * Closes all the opened named outputs.
-   * <p/>
+   * <p>
    * If overridden, subclasses must invoke <code>super.close()</code> at the
    * end of their <code>close()</code>
    *

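Since the <pre> samples in this javadoc are elided above, a hedged sketch of the old-API MultipleOutputs flow may help; the output name, key/value types, and method split are illustrative, not the canonical usage:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.hadoop.mapred.TextOutputFormat;
    import org.apache.hadoop.mapred.lib.MultipleOutputs;

    public class NamedOutputsSketch {
      private MultipleOutputs mos;

      /** Job setup: declare a named output and switch its counters on. */
      public static void declareOutputs(JobConf conf) {
        MultipleOutputs.addNamedOutput(conf, "text", TextOutputFormat.class,
            LongWritable.class, Text.class);
        MultipleOutputs.setCountersEnabled(conf, true);
      }

      /** Called from a Mapper/Reducer configure(JobConf). */
      public void configure(JobConf conf) {
        mos = new MultipleOutputs(conf);
      }

      /** Writes to "text" bypass the reduce phase when done from a map task. */
      @SuppressWarnings("unchecked")
      public void emit(LongWritable key, Text value, Reporter reporter)
          throws IOException {
        mos.getCollector("text", reporter).collect(key, value);
      }

      /** Must be called, per the close() javadoc above. */
      public void close() throws IOException {
        mos.close();
      }
    }
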
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/TokenCountMapper.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/TokenCountMapper.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/TokenCountMapper.java
index 8e884ce..75179e1 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/TokenCountMapper.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/TokenCountMapper.java
@@ -32,7 +32,7 @@ import org.apache.hadoop.mapred.Reporter;
 
 
 /** 
- * A {@link Mapper} that maps text values into <token,freq> pairs.  Uses
+ * A {@link Mapper} that maps text values into &lt;token,freq&gt; pairs.  Uses
  * {@link StringTokenizer} to break text into tokens. 
  */
 @InterfaceAudience.Public

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJob.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJob.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJob.java
index 8c20723..6251925 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJob.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorJob.java
@@ -60,7 +60,7 @@ import org.apache.hadoop.util.GenericOptionsParser;
  * The developer using Aggregate will need only to provide a plugin class
  * conforming to the following interface:
  * 
- * public interface ValueAggregatorDescriptor { public ArrayList<Entry>
+ * public interface ValueAggregatorDescriptor { public ArrayList&lt;Entry&gt;
  * generateKeyValPairs(Object key, Object value); public void
  * configure(JobConfjob); }
  * 

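A hedged sketch of such a plugin over the old API, leaning on ValueAggregatorBaseDescriptor; the descriptor name and the whitespace tokenization are illustrative, and the LONG_VALUE_SUM, ONE, and generateEntry helpers are assumed from that base class:

    import java.util.ArrayList;
    import java.util.Map.Entry;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor;

    public class TokenCountDescriptor extends ValueAggregatorBaseDescriptor {
      @Override
      public ArrayList<Entry<Text, Text>> generateKeyValPairs(Object key,
          Object val) {
        ArrayList<Entry<Text, Text>> pairs = new ArrayList<Entry<Text, Text>>();
        // One LongValueSum entry per token; the framework sums the "1"s.
        for (String tok : val.toString().split("\\s+")) {
          pairs.add(generateEntry(LONG_VALUE_SUM, tok, ONE));
        }
        return pairs;
      }
    }
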
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorReducer.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorReducer.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorReducer.java
index a6b3573..2738968 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorReducer.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/aggregate/ValueAggregatorReducer.java
@@ -45,7 +45,8 @@ public class ValueAggregatorReducer<K1 extends WritableComparable,
    *          driven computing is achieved. It is assumed that each aggregator's
    *          getReport method emits appropriate output for the aggregator. This
   *          may be further customized.
-   * @value the values to be aggregated
+   * @param values
+   *          the values to be aggregated
    */
   public void reduce(Text key, Iterator<Text> values,
                      OutputCollector<Text, Text> output, Reporter reporter) throws IOException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBInputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBInputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBInputFormat.java
index 2715705..159919f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBInputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/lib/db/DBInputFormat.java
@@ -195,8 +195,8 @@ public class DBInputFormat<T  extends DBWritable>
    * @param inputClass the class object implementing DBWritable, which is the 
    * Java object holding tuple fields.
    * @param tableName The table to read data from
-   * @param conditions The condition which to select data with, eg. '(updated >
-   * 20070101 AND length > 0)'
+   * @param conditions The condition which to select data with, eg. '(updated &gt;
+   * 20070101 AND length &gt; 0)'
    * @param orderBy the fieldNames in the orderBy clause.
    * @param fieldNames The field names in the table
    * @see #setInput(JobConf, Class, String, String)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Cluster.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Cluster.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Cluster.java
index 60ff715..34353ac 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Cluster.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Cluster.java
@@ -134,6 +134,7 @@ public class Cluster {
   
   /**
    * Close the <code>Cluster</code>.
+   * @throws IOException
    */
   public synchronized void close() throws IOException {
     clientProtocolProvider.close(client);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ClusterMetrics.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ClusterMetrics.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ClusterMetrics.java
index c4c2778..b5e54b5 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ClusterMetrics.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/ClusterMetrics.java
@@ -40,15 +40,15 @@ import org.apache.hadoop.io.Writable;
  *   Slot capacity of the cluster. 
  *   </li>
  *   <li>
- *   The number of currently occupied/reserved map & reduce slots.
+ *   The number of currently occupied/reserved map and reduce slots.
  *   </li>
  *   <li>
- *   The number of currently running map & reduce tasks.
+ *   The number of currently running map and reduce tasks.
  *   </li>
  *   <li>
  *   The number of job submissions.
  *   </li>
- * </ol></p>
+ * </ol>
  * 
  * <p>Clients can query for the latest <code>ClusterMetrics</code>, via 
  * {@link Cluster#getClusterStatus()}.</p>

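A small sketch tying this class to the Cluster.close() javadoc fix above; the printed fields are a subset, and on YARN some slot-based numbers may be derived rather than literal slot counts:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Cluster;
    import org.apache.hadoop.mapreduce.ClusterMetrics;

    public class MetricsSketch {
      public static void main(String[] args) throws Exception {
        Cluster cluster = new Cluster(new Configuration());
        try {
          ClusterMetrics m = cluster.getClusterStatus();
          System.out.println("map slots: " + m.getMapSlotCapacity()
              + ", running maps: " + m.getRunningMaps()
              + ", job submissions: " + m.getTotalJobSubmissions());
        } finally {
          cluster.close(); // the close() that now documents its IOException
        }
      }
    }
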
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
index 184cdf0..ef06176 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/CryptoUtils.java
@@ -123,11 +123,11 @@ public class CryptoUtils {
    * "mapreduce.job.encrypted-intermediate-data.buffer.kb" Job configuration
    * variable.
    * 
-   * If the value of 'length' is > -1, The InputStream is additionally wrapped
-   * in a LimitInputStream. CryptoStreams are late buffering in nature. This
-   * means they will always try to read ahead if they can. The LimitInputStream
-   * will ensure that the CryptoStream does not read past the provided length
-   * from the given Input Stream.
+   * If the value of 'length' is &gt; -1, the InputStream is additionally
+   * wrapped in a LimitInputStream. CryptoStreams are late buffering in nature.
+   * This means they will always try to read ahead if they can. The
+   * LimitInputStream will ensure that the CryptoStream does not read past the
+   * provided length from the given Input Stream.
    * 
    * @param conf
    * @param in

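The guard described here is simply a length-capped wrapper around the raw stream. A standalone sketch with Hadoop's LimitInputStream, which is an internal utility class, so the byte counts and the direct use are illustrative only:

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import org.apache.hadoop.util.LimitInputStream;

    public class LimitSketch {
      public static void main(String[] args) throws Exception {
        InputStream raw = new ByteArrayInputStream(new byte[64]);
        // Cap reads at 16 bytes so a read-ahead consumer, such as a
        // late-buffering CryptoStream, cannot overrun its slice of the
        // underlying stream.
        InputStream limited = new LimitInputStream(raw, 16);
        System.out.println(limited.read(new byte[64])); // prints at most 16
      }
    }
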
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
index 470290c..f404175 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Job.java
@@ -69,7 +69,7 @@ import org.apache.hadoop.yarn.api.records.ReservationId;
  *
  *     // Submit the job, then poll for progress until the job is complete
  *     job.waitForCompletion(true);
- * </pre></blockquote></p>
+ * </pre></blockquote>
  * 
  * 
  */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobContext.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobContext.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobContext.java
index 836f182..6bd2d1f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobContext.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobContext.java
@@ -289,7 +289,6 @@ public interface JobContext extends MRJobConfig {
    * Get the timestamps of the archives.  Used by internal
    * DistributedCache and MapReduce code.
    * @return a string array of timestamps 
-   * @throws IOException
    */
   public String[] getArchiveTimestamps();
 
@@ -297,7 +296,6 @@ public interface JobContext extends MRJobConfig {
    * Get the timestamps of the files.  Used by internal
    * DistributedCache and MapReduce code.
    * @return a string array of timestamps 
-   * @throws IOException
    */
   public String[] getFileTimestamps();
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
index 516e661..7125077 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/JobSubmissionFiles.java
@@ -100,7 +100,7 @@ public class JobSubmissionFiles {
 
   /**
    * Initializes the staging directory and returns the path. It also
-   * keeps track of all necessary ownership & permissions
+   * keeps track of all necessary ownership and permissions
    * @param cluster
    * @param conf
    */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Mapper.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Mapper.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Mapper.java
index 3a6186b..6b4147b 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Mapper.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Mapper.java
@@ -42,9 +42,9 @@ import org.apache.hadoop.mapreduce.task.MapContextImpl;
  * 
  * <p>The framework first calls 
  * {@link #setup(org.apache.hadoop.mapreduce.Mapper.Context)}, followed by
- * {@link #map(Object, Object, Context)} 
+ * {@link #map(Object, Object, org.apache.hadoop.mapreduce.Mapper.Context)}
  * for each key/value pair in the <code>InputSplit</code>. Finally 
- * {@link #cleanup(Context)} is called.</p>
+ * {@link #cleanup(org.apache.hadoop.mapreduce.Mapper.Context)} is called.</p>
  * 
  * <p>All intermediate values associated with a given output key are 
  * subsequently grouped by the framework, and passed to a {@link Reducer} to  
@@ -84,9 +84,10 @@ import org.apache.hadoop.mapreduce.task.MapContextImpl;
  *     }
  *   }
  * }
- * </pre></blockquote></p>
+ * </pre></blockquote>
  *
- * <p>Applications may override the {@link #run(Context)} method to exert 
+ * <p>Applications may override the
+ * {@link #run(org.apache.hadoop.mapreduce.Mapper.Context)} method to exert
  * greater control on map processing e.g. multi-threaded <code>Mapper</code>s 
  * etc.</p>
  * 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Reducer.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Reducer.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Reducer.java
index ddf67e1..ab67ab0 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Reducer.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/Reducer.java
@@ -39,14 +39,14 @@ import java.util.Iterator;
  * <ol>
  *   <li>
  *   
- *   <h4 id="Shuffle">Shuffle</h4>
+ *   <b id="Shuffle">Shuffle</b>
  *   
  *   <p>The <code>Reducer</code> copies the sorted output from each 
  *   {@link Mapper} using HTTP across the network.</p>
  *   </li>
  *   
  *   <li>
- *   <h4 id="Sort">Sort</h4>
+ *   <b id="Sort">Sort</b>
  *   
  *   <p>The framework merge sorts <code>Reducer</code> inputs by 
  *   <code>key</code>s 
@@ -55,7 +55,7 @@ import java.util.Iterator;
  *   <p>The shuffle and sort phases occur simultaneously i.e. while outputs are
  *   being fetched they are merged.</p>
  *      
- *   <h5 id="SecondarySort">SecondarySort</h5>
+ *   <b id="SecondarySort">SecondarySort</b>
  *   
  *   <p>To achieve a secondary sort on the values returned by the value 
  *   iterator, the application should extend the key with the secondary
@@ -83,10 +83,10 @@ import java.util.Iterator;
  *   </li>
  *   
  *   <li>   
- *   <h4 id="Reduce">Reduce</h4>
+ *   <b id="Reduce">Reduce</b>
  *   
  *   <p>In this phase the 
- *   {@link #reduce(Object, Iterable, Context)}
+ *   {@link #reduce(Object, Iterable, org.apache.hadoop.mapreduce.Reducer.Context)}
  *   method is called for each <code>&lt;key, (collection of values)&gt;</code> in
  *   the sorted inputs.</p>
  *   <p>The output of the reduce task is typically written to a 
@@ -113,7 +113,7 @@ import java.util.Iterator;
  *     context.write(key, result);
  *   }
  * }
- * </pre></blockquote></p>
+ * </pre></blockquote>
  * 
  * @see Mapper
  * @see Partitioner

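The SecondarySort note above compresses several steps; below is a hedged sketch of the grouping half only, assuming composite Text keys of the form natural + '\0' + secondary. The key layout and class name are illustrative, not part of the framework:

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableComparable;
    import org.apache.hadoop.io.WritableComparator;

    public class NaturalKeyGroupingComparator extends WritableComparator {
      public NaturalKeyGroupingComparator() {
        super(Text.class, true);
      }

      private static String naturalPart(String k) {
        int i = k.indexOf('\u0000');
        return i < 0 ? k : k.substring(0, i);
      }

      @Override
      public int compare(WritableComparable a, WritableComparable b) {
        // Group on the natural key only; the full composite key still drives
        // the sort, so values arrive secondary-sorted within one reduce() call.
        return naturalPart(a.toString()).compareTo(naturalPart(b.toString()));
      }
    }
    // Wiring: job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class);
    // plus a sort comparator over the full composite key.
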
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
index 06737c9..51fe69a 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
@@ -115,7 +115,7 @@ import java.net.URI;
  *       }
  *     }
  *     
- * </pre></blockquote></p>
+ * </pre></blockquote>
  *
  * It is also very common to use the DistributedCache by using
  * {@link org.apache.hadoop.util.GenericOptionsParser}.
@@ -235,7 +235,6 @@ public class DistributedCache {
    * DistributedCache and MapReduce code.
    * @param conf The configuration which stored the timestamps
    * @return a long array of timestamps
-   * @throws IOException
    * @deprecated Use {@link JobContext#getArchiveTimestamps()} instead
    */
   @Deprecated
@@ -250,7 +249,6 @@ public class DistributedCache {
    * DistributedCache and MapReduce code.
    * @param conf The configuration which stored the timestamps
    * @return a long array of timestamps
-   * @throws IOException
    * @deprecated Use {@link JobContext#getFileTimestamps()} instead
    */
   @Deprecated
@@ -434,7 +432,6 @@ public class DistributedCache {
    * internal DistributedCache and MapReduce code.
    * @param conf The configuration which stored the timestamps
    * @return a string array of booleans 
-   * @throws IOException
    */
   public static boolean[] getFileVisibilities(Configuration conf) {
     return parseBooleans(conf.getStrings(MRJobConfig.CACHE_FILE_VISIBILITIES));

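The usage block in this javadoc is elided above; a short sketch of the Job-level calls that now front DistributedCache (the HDFS path is hypothetical):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class CacheFileSketch {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cache-demo");
        // Ship a side file to every task; "#lookup" fixes the link name the
        // task sees in its working directory.
        job.addCacheFile(new URI("/apps/demo/lookup.dat#lookup"));
        // Inside a task, context.getCacheFiles() returns the registered URIs.
      }
    }
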
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJob.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJob.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJob.java
index d8833da..de25f64 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJob.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/aggregate/ValueAggregatorJob.java
@@ -60,7 +60,7 @@ import org.apache.hadoop.util.GenericOptionsParser;
  * The developer using Aggregate will need only to provide a plugin class
  * conforming to the following interface:
  * 
- * public interface ValueAggregatorDescriptor { public ArrayList<Entry>
+ * public interface ValueAggregatorDescriptor { public ArrayList&lt;Entry&gt;
  * generateKeyValPairs(Object key, Object value); public void
  * configure(Configuration conf); }
  * 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java
index 208616b..1dad13e 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/Chain.java
@@ -600,7 +600,7 @@ public class Chain {
   /**
    * Adds a Mapper class to the chain job.
    * 
-   * <p/>
+   * <p>
    * The configuration properties of the chain job have precedence over the
    * configuration properties of the Mapper.
    * 
@@ -738,7 +738,7 @@ public class Chain {
   /**
    * Sets the Reducer class to the chain job.
    * 
-   * <p/>
+   * <p>
    * The configuration properties of the chain job have precedence over the
    * configuration properties of the Reducer.
    * 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapper.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapper.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapper.java
index c042ff0..c3bf012 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapper.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainMapper.java
@@ -57,24 +57,24 @@ import org.apache.hadoop.mapreduce.lib.chain.Chain.ChainBlockingQueue;
  * ChainMapper, this is done by the addMapper for the last mapper in the chain.
  * </p>
  * ChainMapper usage pattern:
- * <p/>
+ * <p>
  * 
  * <pre>
  * ...
  * Job job = new Job(conf);
- * <p/>
+ *
  * Configuration mapAConf = new Configuration(false);
  * ...
  * ChainMapper.addMapper(job, AMap.class, LongWritable.class, Text.class,
  *   Text.class, Text.class, true, mapAConf);
- * <p/>
+ *
  * Configuration mapBConf = new Configuration(false);
  * ...
  * ChainMapper.addMapper(job, BMap.class, Text.class, Text.class,
  *   LongWritable.class, Text.class, false, mapBConf);
- * <p/>
+ *
  * ...
- * <p/>
+ *
  * job.waitForCompletion(true);
  * ...
  * </pre>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainReducer.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainReducer.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainReducer.java
index dc03d5d..1c37587 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainReducer.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/chain/ChainReducer.java
@@ -50,7 +50,7 @@ import java.io.IOException;
  * all Mappers and the Reduce in the chain use matching output and input key and
  * value classes as no conversion is done by the chaining code.
  * </p>
- * </p> Using the ChainMapper and the ChainReducer classes is possible to
+ * <p> Using the ChainMapper and the ChainReducer classes it is possible to
  * compose Map/Reduce jobs that look like <code>[MAP+ / REDUCE MAP*]</code>. An
  * immediate benefit of this pattern is a dramatic reduction in disk IO. </p>
  * <p>
@@ -59,26 +59,26 @@ import java.io.IOException;
  * element in the chain.
  * </p>
  * ChainReducer usage pattern:
- * <p/>
+ * <p>
  * 
  * <pre>
  * ...
  * Job job = new Job(conf);
  * ....
- * <p/>
+ *
  * Configuration reduceConf = new Configuration(false);
  * ...
  * ChainReducer.setReducer(job, XReduce.class, LongWritable.class, Text.class,
  *   Text.class, Text.class, true, reduceConf);
- * <p/>
+ *
  * ChainReducer.addMapper(job, CMap.class, Text.class, Text.class,
  *   LongWritable.class, Text.class, false, null);
- * <p/>
+ *
  * ChainReducer.addMapper(job, DMap.class, LongWritable.class, Text.class,
  *   LongWritable.class, LongWritable.class, true, null);
- * <p/>
+ *
  * ...
- * <p/>
+ *
  * job.waitForCompletion(true);
  * ...
  * </pre>

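For contrast with the old-API patterns above, a sketch of the same wiring against org.apache.hadoop.mapreduce.lib.chain, which drops the byValue flag; the identity stages are placeholders and input/output setup is omitted:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
    import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

    public class ChainWiringSketch {
      // Identity stages standing in for the AMap/XReduce classes above.
      public static class AMap
          extends Mapper<LongWritable, Text, LongWritable, Text> {}
      public static class XReduce
          extends Reducer<LongWritable, Text, LongWritable, Text> {}
      public static class CMap
          extends Mapper<LongWritable, Text, LongWritable, Text> {}

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "chain");
        ChainMapper.addMapper(job, AMap.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, new Configuration(false));
        ChainReducer.setReducer(job, XReduce.class, LongWritable.class,
            Text.class, LongWritable.class, Text.class,
            new Configuration(false));
        ChainReducer.addMapper(job, CMap.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, new Configuration(false));
        // Input/output formats and paths would go here before submission.
      }
    }
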
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
index a6953b7..78c3a0f 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
@@ -319,7 +319,7 @@ public class DBInputFormat<T extends DBWritable>
    * Java object holding tuple fields.
    * @param tableName The table to read data from
    * @param conditions The condition which to select data with, 
-   * eg. '(updated > 20070101 AND length > 0)'
+   * eg. '(updated &gt; 20070101 AND length &gt; 0)'
    * @param orderBy the fieldNames in the orderBy clause.
    * @param fieldNames The field names in the table
    * @see #setInput(Job, Class, String, String)

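The escaped conditions string above is easiest to read in running code; a hedged sketch of the new-API setup, where the driver, URL, table, columns, and row type are all hypothetical:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;

    public class DbInputSketch {

      /** Hypothetical row type for an "events" table. */
      public static class EventRecord implements Writable, DBWritable {
        long id;
        long length;

        public void readFields(ResultSet rs) throws SQLException {
          id = rs.getLong(1);
          length = rs.getLong(2);
        }
        public void write(PreparedStatement ps) throws SQLException {
          ps.setLong(1, id);
          ps.setLong(2, length);
        }
        public void readFields(DataInput in) throws IOException {
          id = in.readLong();
          length = in.readLong();
        }
        public void write(DataOutput out) throws IOException {
          out.writeLong(id);
          out.writeLong(length);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "db-read");
        DBConfiguration.configureDB(job.getConfiguration(),
            "com.mysql.jdbc.Driver", "jdbc:mysql://localhost/mydb",
            "user", "pw");
        job.setInputFormatClass(DBInputFormat.class);
        // Same shape as the javadoc's conditions example, unescaped here.
        DBInputFormat.setInput(job, EventRecord.class, "events",
            "(updated > 20070101 AND length > 0)", "updated", "id", "length");
      }
    }
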
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBWritable.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBWritable.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBWritable.java
index cc0d30a..5753a3b 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBWritable.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBWritable.java
@@ -73,7 +73,7 @@ import org.apache.hadoop.io.Writable;
  *     timestamp = resultSet.getLong(2);
  *   } 
  * }
- * </pre></p>
+ * </pre>
  */
 @InterfaceAudience.Public
 @InterfaceStability.Stable

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/TupleWritable.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/TupleWritable.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/TupleWritable.java
index af6b3f0..2990ca9 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/TupleWritable.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/join/TupleWritable.java
@@ -144,7 +144,7 @@ public class TupleWritable implements Writable, Iterable<Writable> {
 
   /**
    * Convert Tuple to String as in the following.
-   * <tt>[<child1>,<child2>,...,<childn>]</tt>
+   * <tt>[&lt;child1&gt;,&lt;child2&gt;,...,&lt;childn&gt;]</tt>
    */
   public String toString() {
     StringBuffer buf = new StringBuffer("[");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/MultithreadedMapper.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/MultithreadedMapper.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/MultithreadedMapper.java
index 814e494..733b18c 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/MultithreadedMapper.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/map/MultithreadedMapper.java
@@ -44,15 +44,15 @@ import java.util.List;
  * Multithreaded implementation for {@link org.apache.hadoop.mapreduce.Mapper}.
  * <p>
  * It can be used instead of the default implementation,
- * @link org.apache.hadoop.mapred.MapRunner, when the Map operation is not CPU
+ * {@link org.apache.hadoop.mapred.MapRunner}, when the Map operation is not CPU
  * bound in order to improve throughput.
  * <p>
  * Mapper implementations using this MapRunnable must be thread-safe.
  * <p>
  * The Map-Reduce job has to be configured with the mapper to use via 
- * {@link #setMapperClass(Configuration, Class)} and
+ * {@link #setMapperClass(Job, Class)} and
  * the number of thread the thread-pool can use with the
- * {@link #getNumberOfThreads(Configuration) method. The default
+ * {@link #getNumberOfThreads(JobContext)} method. The default
  * value is 10 threads.
  * <p>
  */


[34/43] hadoop git commit: HADOOP-11602. Fix toUpperCase/toLowerCase to use Locale.ENGLISH. (ozawa)

Posted by zj...@apache.org.
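
The motivation for the change: String.toUpperCase() and String.toLowerCase()
without an explicit Locale use the JVM's default locale, so identifier-like
strings (config keys, enum constant names, URI schemes) convert differently
depending on where the JVM runs. The canonical failure is the Turkish locale,
which maps 'i' to the dotted capital 'İ' and 'I' to the dotless small 'ı'.
A minimal, self-contained demonstration (the Op enum is a hypothetical
stand-in for the enums patched below):

    import java.util.Locale;

    public class LocaleCaseDemo {
      enum Op { MKDIRS }  // hypothetical stand-in for the patched enums

      public static void main(String[] args) {
        Locale tr = new Locale("tr", "TR");
        System.out.println("file".toUpperCase(tr));   // FİLE, not FILE
        System.out.println("WRITE".toLowerCase(tr));  // wrıte, not write
        try {
          Op.valueOf("mkdirs".toUpperCase(tr));       // looks up "MKDİRS"
        } catch (IllegalArgumentException e) {
          System.out.println("enum lookup failed: " + e.getMessage());
        }
      }
    }

Pinning the conversion to Locale.ENGLISH, directly or through the
org.apache.hadoop.util.StringUtils helpers used below, makes all of these
lookups deterministic.
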
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetConf.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetConf.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetConf.java
index 92a16cd..e6cf16c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetConf.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/GetConf.java
@@ -34,6 +34,7 @@ import org.apache.hadoop.hdfs.DFSUtil;
 import org.apache.hadoop.hdfs.HdfsConfiguration;
 import org.apache.hadoop.hdfs.DFSUtil.ConfiguredNNAddress;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 
@@ -79,19 +80,19 @@ public class GetConf extends Configured implements Tool {
     private static final Map<String, CommandHandler> map;
     static  {
       map = new HashMap<String, CommandHandler>();
-      map.put(NAMENODE.getName().toLowerCase(), 
+      map.put(StringUtils.toLowerCase(NAMENODE.getName()),
           new NameNodesCommandHandler());
-      map.put(SECONDARY.getName().toLowerCase(),
+      map.put(StringUtils.toLowerCase(SECONDARY.getName()),
           new SecondaryNameNodesCommandHandler());
-      map.put(BACKUP.getName().toLowerCase(), 
+      map.put(StringUtils.toLowerCase(BACKUP.getName()),
           new BackupNodesCommandHandler());
-      map.put(INCLUDE_FILE.getName().toLowerCase(), 
+      map.put(StringUtils.toLowerCase(INCLUDE_FILE.getName()),
           new CommandHandler(DFSConfigKeys.DFS_HOSTS));
-      map.put(EXCLUDE_FILE.getName().toLowerCase(),
+      map.put(StringUtils.toLowerCase(EXCLUDE_FILE.getName()),
           new CommandHandler(DFSConfigKeys.DFS_HOSTS_EXCLUDE));
-      map.put(NNRPCADDRESSES.getName().toLowerCase(),
+      map.put(StringUtils.toLowerCase(NNRPCADDRESSES.getName()),
           new NNRpcAddressesCommandHandler());
-      map.put(CONFKEY.getName().toLowerCase(),
+      map.put(StringUtils.toLowerCase(CONFKEY.getName()),
           new PrintConfKeyCommandHandler());
     }
     
@@ -116,7 +117,7 @@ public class GetConf extends Configured implements Tool {
     }
     
     public static CommandHandler getHandler(String cmd) {
-      return map.get(cmd.toLowerCase());
+      return map.get(StringUtils.toLowerCase(cmd));
     }
   }
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java
index c4b8424..de3aceb 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineEditsViewer/OfflineEditsVisitorFactory.java
@@ -24,6 +24,7 @@ import java.io.OutputStream;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * EditsVisitorFactory for different implementations of EditsVisitor
@@ -43,7 +44,7 @@ public class OfflineEditsVisitorFactory {
    */
   static public OfflineEditsVisitor getEditsVisitor(String filename,
     String processor, boolean printToScreen) throws IOException {
-    if(processor.toLowerCase().equals("binary")) {
+    if(StringUtils.equalsIgnoreCase("binary", processor)) {
       return new BinaryEditsVisitor(filename);
     }
     OfflineEditsVisitor vis;
@@ -59,9 +60,9 @@ public class OfflineEditsVisitorFactory {
         outs[1] = System.out;
         out = new TeeOutputStream(outs);
       }
-      if(processor.toLowerCase().equals("xml")) {
+      if(StringUtils.equalsIgnoreCase("xml", processor)) {
         vis = new XmlEditsVisitor(out);
-      } else if(processor.toLowerCase().equals("stats")) {
+      } else if(StringUtils.equalsIgnoreCase("stats", processor)) {
         vis = new StatisticsEditsVisitor(out);
       } else {
         throw new IOException("Unknown proccesor " + processor +
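
Note that here the patch goes a step further: processor.toLowerCase().equals("xml")
becomes StringUtils.equalsIgnoreCase("xml", processor). String.equalsIgnoreCase()
compares character by character via Character.toUpperCase()/toLowerCase(), which
are locale-independent, so the comparison needs no temporary lower-cased string
and cannot be skewed by the default locale. A small sketch of the difference
(assumes hadoop-common on the classpath for StringUtils):

    import org.apache.hadoop.util.StringUtils;

    public class EqualsIgnoreCaseDemo {
      public static void main(String[] args) {
        String processor = "BINARY";
        // New form: locale-independent.
        System.out.println(StringUtils.equalsIgnoreCase("binary", processor)); // true
        // Old form: under a Turkish default locale this printed false,
        // because "BINARY".toLowerCase() is "bınary" there.
        System.out.println(processor.toLowerCase().equals("binary"));
      }
    }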

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
index 43fcd69..429b6fc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageHandler.java
@@ -33,6 +33,7 @@ import io.netty.handler.codec.http.QueryStringDecoder;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.hdfs.web.JsonUtil;
+import org.apache.hadoop.util.StringUtils;
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
@@ -51,6 +52,7 @@ import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1;
 import static org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.APPLICATION_JSON_UTF8;
 import static org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.WEBHDFS_PREFIX;
 import static org.apache.hadoop.hdfs.server.datanode.web.webhdfs.WebHdfsHandler.WEBHDFS_PREFIX_LENGTH;
+
 /**
  * Implement the read-only WebHDFS API for fsimage.
  */
@@ -141,7 +143,7 @@ class FSImageHandler extends SimpleChannelInboundHandler<HttpRequest> {
   private static String getOp(QueryStringDecoder decoder) {
     Map<String, List<String>> parameters = decoder.parameters();
     return parameters.containsKey("op")
-            ? parameters.get("op").get(0).toUpperCase() : null;
+        ? StringUtils.toUpperCase(parameters.get("op").get(0)) : null;
   }
 
   private static String getPath(QueryStringDecoder decoder)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
index b6ff4b6..5ad1f24 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/AuthFilter.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authentication.server.AuthenticationFilter;
 import org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler;
 import org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * Subclass of {@link AuthenticationFilter} that
@@ -96,7 +97,7 @@ public class AuthFilter extends AuthenticationFilter {
 
     final Map<String, List<String>> m = new HashMap<String, List<String>>();
     for(Map.Entry<String, String[]> entry : original.entrySet()) {
-      final String key = entry.getKey().toLowerCase();
+      final String key = StringUtils.toLowerCase(entry.getKey());
       List<String> strings = m.get(key);
       if (strings == null) {
         strings = new ArrayList<String>();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ParamFilter.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ParamFilter.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ParamFilter.java
index 2ae3445..febe125 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ParamFilter.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/ParamFilter.java
@@ -28,6 +28,7 @@ import com.sun.jersey.spi.container.ContainerRequest;
 import com.sun.jersey.spi.container.ContainerRequestFilter;
 import com.sun.jersey.spi.container.ContainerResponseFilter;
 import com.sun.jersey.spi.container.ResourceFilter;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * A filter to change parameter names to lower cases
@@ -75,7 +76,7 @@ public class ParamFilter implements ResourceFilter {
       final MultivaluedMap<String, String> parameters) {
     UriBuilder b = UriBuilder.fromUri(uri).replaceQuery("");
     for(Map.Entry<String, List<String>> e : parameters.entrySet()) {
-      final String key = e.getKey().toLowerCase();
+      final String key = StringUtils.toLowerCase(e.getKey());
       for(String v : e.getValue()) {
         b = b.queryParam(key, v);
       }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
index 938f7c7..a907404 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
@@ -80,6 +80,7 @@ import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.security.token.TokenSelector;
 import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSelector;
 import org.apache.hadoop.util.Progressable;
+import org.apache.hadoop.util.StringUtils;
 import org.mortbay.util.ajax.JSON;
 
 import com.google.common.annotations.VisibleForTesting;
@@ -1242,7 +1243,7 @@ public class WebHdfsFileSystem extends FileSystem
     if (query == null) {
       return url;
     }
-    final String lower = query.toLowerCase();
+    final String lower = StringUtils.toLowerCase(query);
     if (!lower.startsWith(OFFSET_PARAM_PREFIX)
         && !lower.contains("&" + OFFSET_PARAM_PREFIX)) {
       return url;
@@ -1253,7 +1254,7 @@ public class WebHdfsFileSystem extends FileSystem
     for(final StringTokenizer st = new StringTokenizer(query, "&");
         st.hasMoreTokens();) {
       final String token = st.nextToken();
-      if (!token.toLowerCase().startsWith(OFFSET_PARAM_PREFIX)) {
+      if (!StringUtils.toLowerCase(token).startsWith(OFFSET_PARAM_PREFIX)) {
         if (b == null) {
           b = new StringBuilder("?").append(token);
         } else {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumParam.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumParam.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumParam.java
index 1703e3b..60d201b 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumParam.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumParam.java
@@ -18,6 +18,7 @@
 package org.apache.hadoop.hdfs.web.resources;
 
 import java.util.Arrays;
+import org.apache.hadoop.util.StringUtils;
 
 abstract class EnumParam<E extends Enum<E>> extends Param<E, EnumParam.Domain<E>> {
   EnumParam(final Domain<E> domain, final E value) {
@@ -40,7 +41,7 @@ abstract class EnumParam<E extends Enum<E>> extends Param<E, EnumParam.Domain<E>
 
     @Override
     final E parse(final String str) {
-      return Enum.valueOf(enumClass, str.toUpperCase());
+      return Enum.valueOf(enumClass, StringUtils.toUpperCase(str));
     }
   }
 }
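
The EnumParam.parse() change above is the archetype for this whole patch:
Enum.valueOf() needs the exact constant name, so the upper-casing feeding it
must be locale-stable. The StringUtils helpers themselves are not shown in
this message; a minimal sketch consistent with how they are called here,
assuming they simply pin Locale.ENGLISH (the real implementation lives in
hadoop-common):

    import java.util.Locale;

    public final class StringUtilsSketch {
      // Assumed shape of org.apache.hadoop.util.StringUtils.toLowerCase/toUpperCase.
      public static String toLowerCase(String str) {
        return str.toLowerCase(Locale.ENGLISH);
      }

      public static String toUpperCase(String str) {
        return str.toUpperCase(Locale.ENGLISH);
      }

      public static void main(String[] args) {
        // Deterministic regardless of the JVM's default locale:
        System.out.println(toUpperCase("append"));  // APPEND
      }
    }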

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
index 5adb5a6..c2dfadf 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/resources/EnumSetParam.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.web.resources;
 import java.util.Arrays;
 import java.util.EnumSet;
 import java.util.Iterator;
+import org.apache.hadoop.util.StringUtils;
 
 abstract class EnumSetParam<E extends Enum<E>> extends Param<EnumSet<E>, EnumSetParam.Domain<E>> {
   /** Convert an EnumSet to a string of comma separated values. */
@@ -82,7 +83,7 @@ abstract class EnumSetParam<E extends Enum<E>> extends Param<EnumSet<E>, EnumSet
           i = j > 0 ? j + 1 : 0;
           j = str.indexOf(',', i);
           final String sub = j >= 0? str.substring(i, j): str.substring(i);
-          set.add(Enum.valueOf(enumClass, sub.trim().toUpperCase()));
+          set.add(Enum.valueOf(enumClass, StringUtils.toUpperCase(sub.trim())));
         }
       }
       return set;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotManager.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotManager.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotManager.java
index ac6acf9..b439a28 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotManager.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotManager.java
@@ -19,7 +19,6 @@
 package org.apache.hadoop.hdfs.server.namenode.snapshot;
 
 import static org.mockito.Matchers.anyObject;
-import static org.mockito.Matchers.anyString;
 import static org.mockito.Mockito.doReturn;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.spy;
@@ -31,6 +30,7 @@ import org.apache.hadoop.hdfs.server.namenode.FSDirectory;
 import org.apache.hadoop.hdfs.server.namenode.INode;
 import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
 import org.apache.hadoop.hdfs.server.namenode.INodesInPath;
+import org.apache.hadoop.util.StringUtils;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -70,7 +70,7 @@ public class TestSnapshotManager {
       Assert.fail("Expected SnapshotException not thrown");
     } catch (SnapshotException se) {
       Assert.assertTrue(
-          se.getMessage().toLowerCase().contains("rollover"));
+          StringUtils.toLowerCase(se.getMessage()).contains("rollover"));
     }
 
     // Delete a snapshot to free up a slot.
@@ -86,7 +86,7 @@ public class TestSnapshotManager {
       Assert.fail("Expected SnapshotException not thrown");
     } catch (SnapshotException se) {
       Assert.assertTrue(
-          se.getMessage().toLowerCase().contains("rollover"));
+          StringUtils.toLowerCase(se.getMessage()).contains("rollover"));
     }
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
index aad63d3..a0e7041 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryEventHandler.java
@@ -59,6 +59,7 @@ import org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils;
 import org.apache.hadoop.mapreduce.v2.jobhistory.JobIndexInfo;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.service.AbstractService;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntity;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEvent;
 import org.apache.hadoop.yarn.client.api.TimelineClient;
@@ -711,7 +712,7 @@ public class JobHistoryEventHandler extends AbstractService
   private void processEventForTimelineServer(HistoryEvent event, JobId jobId,
           long timestamp) {
     TimelineEvent tEvent = new TimelineEvent();
-    tEvent.setEventType(event.getEventType().name().toUpperCase());
+    tEvent.setEventType(StringUtils.toUpperCase(event.getEventType().name()));
     tEvent.setTimestamp(timestamp);
     TimelineEntity tEntity = new TimelineEntity();
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
index 53f21db..0f528e4 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/webapp/AppController.java
@@ -22,7 +22,6 @@ import static org.apache.hadoop.yarn.util.StringHelper.join;
 
 import java.io.IOException;
 import java.net.URLDecoder;
-import java.util.Locale;
 
 import javax.servlet.http.HttpServletResponse;
 
@@ -226,8 +225,9 @@ public class AppController extends Controller implements AMParams {
     if (app.getJob() != null) {
       try {
         String tt = $(TASK_TYPE);
-        tt = tt.isEmpty() ? "All" : StringUtils.capitalize(MRApps.taskType(tt).
-            toString().toLowerCase(Locale.US));
+        tt = tt.isEmpty() ? "All" : StringUtils.capitalize(
+            org.apache.hadoop.util.StringUtils.toLowerCase(
+                MRApps.taskType(tt).toString()));
         setTitle(join(tt, " Tasks for ", $(JOB_ID)));
       } catch (Exception e) {
         LOG.error("Failed to render tasks page with task type : "
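
Here the replacement is written out as org.apache.hadoop.util.StringUtils,
evidently to avoid clashing with the StringUtils already imported in
AppController for capitalize(); the CLI.java change further down resolves a
similar name clash the same way.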

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/TypeConverter.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/TypeConverter.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/TypeConverter.java
index 553ba70..5b8d3a7 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/TypeConverter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/TypeConverter.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.mapreduce.v2.api.records.TaskId;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskState;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskType;
 import org.apache.hadoop.mapreduce.v2.util.MRApps;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ApplicationReport;
 import org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport;
@@ -314,7 +315,7 @@ public class TypeConverter {
       QueueState state) {
     org.apache.hadoop.mapreduce.QueueState qState =
       org.apache.hadoop.mapreduce.QueueState.getState(
-        state.toString().toLowerCase());
+          StringUtils.toLowerCase(state.toString()));
     return qState;
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
index 08b44f8..1520fc8 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRApps.java
@@ -303,7 +303,7 @@ public class MRApps extends Apps {
               remoteFS.getWorkingDirectory()));
           String name = (null == u.getFragment())
               ? p.getName() : u.getFragment();
-          if (!name.toLowerCase().endsWith(".jar")) {
+          if (!StringUtils.toLowerCase(name).endsWith(".jar")) {
             linkLookup.put(p, name);
           }
         }
@@ -317,7 +317,7 @@ public class MRApps extends Apps {
         if (name == null) {
           name = p.getName();
         }
-        if(!name.toLowerCase().endsWith(".jar")) {
+        if(!StringUtils.toLowerCase(name).endsWith(".jar")) {
           MRApps.addToEnvironment(
               environment,
               classpathEnvVar,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/TestTypeConverter.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/TestTypeConverter.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/TestTypeConverter.java
index cc42b9c..e36efec 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/TestTypeConverter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/test/java/org/apache/hadoop/mapreduce/TestTypeConverter.java
@@ -17,6 +17,7 @@
  */
 package org.apache.hadoop.mapreduce;
 
+import org.apache.hadoop.util.StringUtils;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
@@ -151,9 +152,10 @@ public class TestTypeConverter {
         .newRecord(org.apache.hadoop.yarn.api.records.QueueInfo.class);
     queueInfo.setQueueState(org.apache.hadoop.yarn.api.records.QueueState.STOPPED);
     org.apache.hadoop.mapreduce.QueueInfo returned =
-      TypeConverter.fromYarn(queueInfo, new Configuration());
+        TypeConverter.fromYarn(queueInfo, new Configuration());
     Assert.assertEquals("queueInfo translation didn't work.",
-      returned.getState().toString(), queueInfo.getQueueState().toString().toLowerCase());
+        returned.getState().toString(),
+        StringUtils.toLowerCase(queueInfo.getQueueState().toString()));
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
index 5274438..7fa5d02 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/Task.java
@@ -116,7 +116,7 @@ abstract public class Task implements Writable, Configurable {
    * BYTES_READ counter and second one is of the BYTES_WRITTEN counter.
    */
   protected static String[] getFileSystemCounterNames(String uriScheme) {
-    String scheme = uriScheme.toUpperCase();
+    String scheme = StringUtils.toUpperCase(uriScheme);
     return new String[]{scheme+"_BYTES_READ", scheme+"_BYTES_WRITTEN"};
   }
   

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java
index a53b76a..e0e5b79 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java
@@ -25,7 +25,6 @@ import java.util.Arrays;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.ConcurrentSkipListMap;
 import java.util.Iterator;
-import java.util.Locale;
 import java.util.Map;
 
 import com.google.common.base.Joiner;
@@ -42,6 +41,7 @@ import org.apache.hadoop.io.WritableUtils;
 import org.apache.hadoop.mapreduce.Counter;
 import org.apache.hadoop.mapreduce.FileSystemCounter;
 import org.apache.hadoop.mapreduce.util.ResourceBundles;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * An abstract class to provide common implementation of the filesystem
@@ -227,7 +227,7 @@ public abstract class FileSystemCounterGroup<C extends Counter>
   }
 
   private String checkScheme(String scheme) {
-    String fixed = scheme.toUpperCase(Locale.US);
+    String fixed = StringUtils.toUpperCase(scheme);
     String interned = schemes.putIfAbsent(fixed, fixed);
     if (schemes.size() > MAX_NUM_SCHEMES) {
       // mistakes or abuses
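
Task.getFileSystemCounterNames() above used the default locale, so counter
names derived from a URI scheme could silently change; checkScheme() here had
already pinned Locale.US and is only being made uniform. A small reproduction
of the pre-patch behaviour (CounterNameDemo and names() are hypothetical;
names() mirrors getFileSystemCounterNames() with the locale made explicit):

    import java.util.Locale;

    public class CounterNameDemo {
      static String[] names(String uriScheme, Locale locale) {
        String scheme = uriScheme.toUpperCase(locale);
        return new String[]{scheme + "_BYTES_READ", scheme + "_BYTES_WRITTEN"};
      }

      public static void main(String[] args) {
        System.out.println(names("file", Locale.ENGLISH)[0]);          // FILE_BYTES_READ
        System.out.println(names("file", new Locale("tr", "TR"))[0]);  // FİLE_BYTES_READ
      }
    }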

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
index eaa5af8..06737c9 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/filecache/DistributedCache.java
@@ -473,7 +473,7 @@ public class DistributedCache {
         if (fragment == null) {
           return false;
         }
-        String lowerCaseFragment = fragment.toLowerCase();
+        String lowerCaseFragment = StringUtils.toLowerCase(fragment);
         if (fragments.contains(lowerCaseFragment)) {
           return false;
         }
@@ -488,7 +488,7 @@ public class DistributedCache {
         if (fragment == null) {
           return false;
         }
-        String lowerCaseFragment = fragment.toLowerCase();
+        String lowerCaseFragment = StringUtils.toLowerCase(fragment);
         if (fragments.contains(lowerCaseFragment)) {
           return false;
         }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
index 00fbeda..a6953b7 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/db/DBInputFormat.java
@@ -45,6 +45,8 @@ import org.apache.hadoop.mapreduce.JobContext;
 import org.apache.hadoop.mapreduce.MRJobConfig;
 import org.apache.hadoop.mapreduce.RecordReader;
 import org.apache.hadoop.mapreduce.TaskAttemptContext;
+import org.apache.hadoop.util.StringUtils;
+
 /**
  * A InputFormat that reads input data from an SQL table.
  * <p>
@@ -162,7 +164,8 @@ public class DBInputFormat<T extends DBWritable>
       this.connection = createConnection();
 
       DatabaseMetaData dbMeta = connection.getMetaData();
-      this.dbProductName = dbMeta.getDatabaseProductName().toUpperCase();
+      this.dbProductName =
+          StringUtils.toUpperCase(dbMeta.getDatabaseProductName());
     }
     catch (Exception ex) {
       throw new RuntimeException(ex);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
index 37ba5b7..3630c64 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/tools/CLI.java
@@ -222,12 +222,14 @@ public class CLI extends Configured implements Tool {
       taskType = argv[2];
       taskState = argv[3];
       displayTasks = true;
-      if (!taskTypes.contains(taskType.toUpperCase())) {
+      if (!taskTypes.contains(
+          org.apache.hadoop.util.StringUtils.toUpperCase(taskType))) {
         System.out.println("Error: Invalid task-type: " + taskType);
         displayUsage(cmd);
         return exitCode;
       }
-      if (!taskStates.contains(taskState.toLowerCase())) {
+      if (!taskStates.contains(
+          org.apache.hadoop.util.StringUtils.toLowerCase(taskState))) {
         System.out.println("Error: Invalid task-state: " + taskState);
         displayUsage(cmd);
         return exitCode;
@@ -593,7 +595,8 @@ public class CLI extends Configured implements Tool {
   throws IOException, InterruptedException {
 	  
     TaskReport[] reports=null;
-    reports = job.getTaskReports(TaskType.valueOf(type.toUpperCase()));
+    reports = job.getTaskReports(TaskType.valueOf(
+        org.apache.hadoop.util.StringUtils.toUpperCase(type)));
     for (TaskReport report : reports) {
       TIPStatus status = report.getCurrentStatus();
       if ((state.equalsIgnoreCase("pending") && status ==TIPStatus.PENDING) ||

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
index d9cd07b..aff117e 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestDFSIO.java
@@ -154,16 +154,16 @@ public class TestDFSIO implements Tool {
     static ByteMultiple parseString(String sMultiple) {
       if(sMultiple == null || sMultiple.isEmpty()) // MB by default
         return MB;
-      String sMU = sMultiple.toUpperCase();
-      if(B.name().toUpperCase().endsWith(sMU))
+      String sMU = StringUtils.toUpperCase(sMultiple);
+      if(StringUtils.toUpperCase(B.name()).endsWith(sMU))
         return B;
-      if(KB.name().toUpperCase().endsWith(sMU))
+      if(StringUtils.toUpperCase(KB.name()).endsWith(sMU))
         return KB;
-      if(MB.name().toUpperCase().endsWith(sMU))
+      if(StringUtils.toUpperCase(MB.name()).endsWith(sMU))
         return MB;
-      if(GB.name().toUpperCase().endsWith(sMU))
+      if(StringUtils.toUpperCase(GB.name()).endsWith(sMU))
         return GB;
-      if(TB.name().toUpperCase().endsWith(sMU))
+      if(StringUtils.toUpperCase(TB.name()).endsWith(sMU))
         return TB;
       throw new IllegalArgumentException("Unsupported ByteMultiple "+sMultiple);
     }
@@ -736,7 +736,7 @@ public class TestDFSIO implements Tool {
     }
 
     for (int i = 0; i < args.length; i++) { // parse command line
-      if (args[i].toLowerCase().startsWith("-read")) {
+      if (StringUtils.toLowerCase(args[i]).startsWith("-read")) {
         testType = TestType.TEST_TYPE_READ;
       } else if (args[i].equalsIgnoreCase("-write")) {
         testType = TestType.TEST_TYPE_WRITE;
@@ -755,9 +755,9 @@ public class TestDFSIO implements Tool {
         testType = TestType.TEST_TYPE_TRUNCATE;
       } else if (args[i].equalsIgnoreCase("-clean")) {
         testType = TestType.TEST_TYPE_CLEANUP;
-      } else if (args[i].toLowerCase().startsWith("-seq")) {
+      } else if (StringUtils.toLowerCase(args[i]).startsWith("-seq")) {
         isSequential = true;
-      } else if (args[i].toLowerCase().startsWith("-compression")) {
+      } else if (StringUtils.toLowerCase(args[i]).startsWith("-compression")) {
         compressionClass = args[++i];
       } else if (args[i].equalsIgnoreCase("-nrfiles")) {
         nrFiles = Integer.parseInt(args[++i]);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestFileSystem.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestFileSystem.java
index 13e27cd..92441ab 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestFileSystem.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/TestFileSystem.java
@@ -49,6 +49,7 @@ import org.apache.hadoop.io.SequenceFile.CompressionType;
 import org.apache.hadoop.mapred.*;
 import org.apache.hadoop.mapred.lib.LongSumReducer;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.StringUtils;
 
 public class TestFileSystem extends TestCase {
   private static final Log LOG = FileSystem.LOG;
@@ -556,7 +557,8 @@ public class TestFileSystem extends TestCase {
   static void checkPath(MiniDFSCluster cluster, FileSystem fileSys) throws IOException {
     InetSocketAddress add = cluster.getNameNode().getNameNodeAddress();
     // Test upper/lower case
-    fileSys.checkPath(new Path("hdfs://" + add.getHostName().toUpperCase() + ":" + add.getPort()));
+    fileSys.checkPath(new Path("hdfs://"
+        + StringUtils.toUpperCase(add.getHostName()) + ":" + add.getPort()));
   }
 
   public void testFsClose() throws Exception {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/Constants.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/Constants.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/Constants.java
index 0642052..57a7163 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/Constants.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/Constants.java
@@ -18,6 +18,8 @@
 
 package org.apache.hadoop.fs.slive;
 
+import org.apache.hadoop.util.StringUtils;
+
 /**
  * Constants used in various places in slive
  */
@@ -35,7 +37,7 @@ class Constants {
   enum Distribution {
     BEG, END, UNIFORM, MID;
     String lowerName() {
-      return this.name().toLowerCase();
+      return StringUtils.toLowerCase(this.name());
     }
   }
 
@@ -45,7 +47,7 @@ class Constants {
   enum OperationType {
     READ, APPEND, RENAME, LS, MKDIR, DELETE, CREATE, TRUNCATE;
     String lowerName() {
-      return this.name().toLowerCase();
+      return StringUtils.toLowerCase(this.name());
     }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationData.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationData.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationData.java
index b4c98f7..02eca37 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationData.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationData.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.fs.slive;
 
 import org.apache.hadoop.fs.slive.Constants.Distribution;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * This class holds the data representing what an operations distribution and
@@ -52,7 +53,7 @@ class OperationData {
       percent = (Double.parseDouble(pieces[0]) / 100.0d);
     } else if (pieces.length >= 2) {
       percent = (Double.parseDouble(pieces[0]) / 100.0d);
-      distribution = Distribution.valueOf(pieces[1].toUpperCase());
+      distribution = Distribution.valueOf(StringUtils.toUpperCase(pieces[1]));
     }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationOutput.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationOutput.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationOutput.java
index 57ef017..bca5a1c 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationOutput.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/OperationOutput.java
@@ -19,6 +19,7 @@
 package org.apache.hadoop.fs.slive;
 
 import org.apache.hadoop.io.Text;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * An operation output has the following object format whereby simple types are
@@ -67,7 +68,8 @@ class OperationOutput {
           "Invalid key format - no type seperator - " + TYPE_SEP);
     }
     try {
-      dataType = OutputType.valueOf(key.substring(0, place).toUpperCase());
+      dataType = OutputType.valueOf(
+          StringUtils.toUpperCase(key.substring(0, place)));
     } catch (Exception e) {
       throw new IllegalArgumentException(
           "Invalid key format - invalid output type", e);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/SliveTest.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/SliveTest.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/SliveTest.java
index ce1837f..97360d6 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/SliveTest.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/fs/slive/SliveTest.java
@@ -42,6 +42,7 @@ import org.apache.hadoop.mapred.FileOutputFormat;
 import org.apache.hadoop.mapred.JobClient;
 import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.mapred.TextOutputFormat;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 
@@ -157,7 +158,7 @@ public class SliveTest implements Tool {
     if (val == null) {
       return false;
     }
-    String cleanupOpt = val.toLowerCase().trim();
+    String cleanupOpt = StringUtils.toLowerCase(val).trim();
     if (cleanupOpt.equals("true") || cleanupOpt.equals("1")) {
       return true;
     } else {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java
index f155dae..0a9d0e9 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/io/FileBench.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.io.compress.CompressionCodec;
 import org.apache.hadoop.io.compress.GzipCodec;
 import org.apache.hadoop.io.Text;
 import org.apache.hadoop.mapred.*;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 
@@ -214,23 +215,25 @@ public class FileBench extends Configured implements Tool {
           if (!(fmt == Format.txt || cod == CCodec.pln)) {
             for (CType typ : ct) {
               String fn =
-                fmt.name().toUpperCase() + "_" +
-                cod.name().toUpperCase() + "_" +
-                typ.name().toUpperCase();
+                StringUtils.toUpperCase(fmt.name()) + "_" +
+                StringUtils.toUpperCase(cod.name()) + "_" +
+                StringUtils.toUpperCase(typ.name());
               typ.configure(job);
-              System.out.print(rwop.name().toUpperCase() + " " + fn + ": ");
+              System.out.print(
+                  StringUtils.toUpperCase(rwop.name()) + " " + fn + ": ");
               System.out.println(rwop.exec(fn, job) / 1000 +
                   " seconds");
             }
           } else {
             String fn =
-              fmt.name().toUpperCase() + "_" +
-              cod.name().toUpperCase();
+              StringUtils.toUpperCase(fmt.name()) + "_" +
+              StringUtils.toUpperCase(cod.name());
             Path p = new Path(root, fn);
             if (rwop == RW.r && !fs.exists(p)) {
               fn += cod.getExt();
             }
-            System.out.print(rwop.name().toUpperCase() + " " + fn + ": ");
+            System.out.print(
+                StringUtils.toUpperCase(rwop.name()) + " " + fn + ": ");
             System.out.println(rwop.exec(fn, job) / 1000 +
                 " seconds");
           }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java
index 02a083b..d60905e 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMapRed.java
@@ -45,6 +45,7 @@ import org.apache.hadoop.io.SequenceFile.CompressionType;
 import org.apache.hadoop.mapred.lib.IdentityMapper;
 import org.apache.hadoop.mapred.lib.IdentityReducer;
 import org.apache.hadoop.mapreduce.MRConfig;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 import org.junit.After;
@@ -280,7 +281,7 @@ public class TestMapRed extends Configured implements Tool {
     public void map(WritableComparable key, Text value,
                     OutputCollector<Text, Text> output,
                     Reporter reporter) throws IOException {
-      String str = value.toString().toLowerCase();
+      String str = StringUtils.toLowerCase(value.toString());
       output.collect(new Text(str), value);
     }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java
index 270ddc9..8dec39d 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/DBCountPageView.java
@@ -102,7 +102,7 @@ public class DBCountPageView extends Configured implements Tool {
   
   private void createConnection(String driverClassName
       , String url) throws Exception {
-    if(driverClassName.toLowerCase().contains("oracle")) {
+    if(StringUtils.toLowerCase(driverClassName).contains("oracle")) {
       isOracle = true;
     }
     Class.forName(driverClassName);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
----------------------------------------------------------------------
diff --git a/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java b/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
index f342463..b6a45ec 100644
--- a/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
+++ b/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
@@ -13,6 +13,7 @@
  */
 package org.apache.hadoop.maven.plugin.versioninfo;
 
+import java.util.Locale;
 import org.apache.hadoop.maven.plugin.util.Exec;
 import org.apache.hadoop.maven.plugin.util.FileSetUtils;
 import org.apache.maven.model.FileSet;
@@ -329,7 +330,8 @@ public class VersionInfoMojo extends AbstractMojo {
       }
 
       private String normalizePath(File file) {
-        return file.getPath().toUpperCase().replaceAll("\\\\", "/");
+        return file.getPath().toUpperCase(Locale.ENGLISH)
+            .replaceAll("\\\\", "/");
       }
     });
     byte[] md5 = computeMD5(files);
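
In hadoop-maven-plugins the fix pins Locale.ENGLISH inline rather than calling
org.apache.hadoop.util.StringUtils, presumably because the build plugin cannot
depend on hadoop-common; the AzureNativeFileSystemStore change below keeps an
inline toLowerCase(Locale.ENGLISH) to the same effect.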

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
index 6bed8bb..c0c03b3 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
@@ -984,8 +984,8 @@ public class AzureNativeFileSystemStore implements NativeFileSystemStore {
   private String verifyAndConvertToStandardFormat(String rawDir) throws URISyntaxException {
     URI asUri = new URI(rawDir);
     if (asUri.getAuthority() == null 
-        || asUri.getAuthority().toLowerCase(Locale.US).equalsIgnoreCase(
-        		sessionUri.getAuthority().toLowerCase(Locale.US))) {
+        || asUri.getAuthority().toLowerCase(Locale.ENGLISH).equalsIgnoreCase(
+      sessionUri.getAuthority().toLowerCase(Locale.ENGLISH))) {
       // Applies to me.
       return trim(asUri.getPath(), "/");
     } else {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
index 71e84a1..ca7566b 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
@@ -51,6 +51,7 @@ import org.apache.hadoop.tools.DistCpOptions.FileAttribute;
 import org.apache.hadoop.tools.mapred.UniformSizeInputFormat;
 
 import com.google.common.collect.Maps;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * Utility functions used in DistCp.
@@ -121,8 +122,9 @@ public class DistCpUtils {
    */
   public static Class<? extends InputFormat> getStrategy(Configuration conf,
                                                                  DistCpOptions options) {
-    String confLabel = "distcp." +
-        options.getCopyStrategy().toLowerCase(Locale.getDefault()) + ".strategy.impl";
+    String confLabel = "distcp."
+        + StringUtils.toLowerCase(options.getCopyStrategy())
+        + ".strategy" + ".impl";
     return conf.getClass(confLabel, UniformSizeInputFormat.class, InputFormat.class);
   }
 
@@ -221,7 +223,8 @@ public class DistCpUtils {
 
     final boolean preserveXAttrs = attributes.contains(FileAttribute.XATTR);
     if (preserveXAttrs || preserveRawXattrs) {
-      final String rawNS = XAttr.NameSpace.RAW.name().toLowerCase();
+      final String rawNS =
+          StringUtils.toLowerCase(XAttr.NameSpace.RAW.name());
       Map<String, byte[]> srcXAttrs = srcFileStatus.getXAttrs();
       Map<String, byte[]> targetXAttrs = getXAttrs(targetFS, path);
       if (srcXAttrs != null && !srcXAttrs.equals(targetXAttrs)) {
@@ -321,7 +324,8 @@ public class DistCpUtils {
          copyListingFileStatus.setXAttrs(srcXAttrs);
       } else {
         Map<String, byte[]> trgXAttrs = Maps.newHashMap();
-        final String rawNS = XAttr.NameSpace.RAW.name().toLowerCase();
+        final String rawNS =
+            StringUtils.toLowerCase(XAttr.NameSpace.RAW.name());
         for (Map.Entry<String, byte[]> ent : srcXAttrs.entrySet()) {
           final String xattrName = ent.getKey();
           if (xattrName.startsWith(rawNS)) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCpV1.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCpV1.java b/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCpV1.java
index f46c421..8a6819b 100644
--- a/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCpV1.java
+++ b/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/DistCpV1.java
@@ -169,7 +169,9 @@ public class DistCpV1 implements Tool {
 
     final char symbol;
 
-    private FileAttribute() {symbol = toString().toLowerCase().charAt(0);}
+    private FileAttribute() {
+      symbol = StringUtils.toLowerCase(toString()).charAt(0);
+    }
     
     static EnumSet<FileAttribute> parse(String s) {
       if (s == null || s.length() == 0) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/GridmixJobSubmissionPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/GridmixJobSubmissionPolicy.java b/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/GridmixJobSubmissionPolicy.java
index 83eb947..b803538 100644
--- a/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/GridmixJobSubmissionPolicy.java
+++ b/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/GridmixJobSubmissionPolicy.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.mapred.gridmix.Statistics.ClusterStats;
 
 import java.util.concurrent.CountDownLatch;
 import java.io.IOException;
+import org.apache.hadoop.util.StringUtils;
 
 enum GridmixJobSubmissionPolicy {
 
@@ -84,6 +85,6 @@ enum GridmixJobSubmissionPolicy {
   public static GridmixJobSubmissionPolicy getPolicy(
     Configuration conf, GridmixJobSubmissionPolicy defaultPolicy) {
     String policy = conf.get(JOB_SUBMISSION_POLICY, defaultPolicy.name());
-    return valueOf(policy.toUpperCase());
+    return valueOf(StringUtils.toUpperCase(policy));
   }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemExtendedContract.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemExtendedContract.java b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemExtendedContract.java
index 7a35b46..967929b 100644
--- a/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemExtendedContract.java
+++ b/hadoop-tools/hadoop-openstack/src/test/java/org/apache/hadoop/fs/swift/TestSwiftFileSystemExtendedContract.java
@@ -27,12 +27,12 @@ import org.apache.hadoop.fs.swift.http.RestClientBindings;
 import org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem;
 import org.apache.hadoop.fs.swift.util.SwiftTestUtils;
 import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.junit.Test;
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.net.URI;
-import java.util.Locale;
 
 public class TestSwiftFileSystemExtendedContract extends SwiftFileSystemBaseTest {
 
@@ -115,7 +115,7 @@ public class TestSwiftFileSystemExtendedContract extends SwiftFileSystemBaseTest
   public void testFilesystemIsCaseSensitive() throws Exception {
     String mixedCaseFilename = "/test/UPPER.TXT";
     Path upper = path(mixedCaseFilename);
-    Path lower = path(mixedCaseFilename.toLowerCase(Locale.ENGLISH));
+    Path lower = path(StringUtils.toLowerCase(mixedCaseFilename));
     assertFalse("File exists" + upper, fs.exists(upper));
     assertFalse("File exists" + lower, fs.exists(lower));
     FSDataOutputStream out = fs.create(upper);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/HadoopLogsAnalyzer.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/HadoopLogsAnalyzer.java b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/HadoopLogsAnalyzer.java
index 47fdb1a..c53a7c2 100644
--- a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/HadoopLogsAnalyzer.java
+++ b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/HadoopLogsAnalyzer.java
@@ -38,6 +38,7 @@ import java.util.regex.Pattern;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 import org.apache.hadoop.util.LineReader;
@@ -319,42 +320,42 @@ public class HadoopLogsAnalyzer extends Configured implements Tool {
     }
 
     for (int i = 0; i < args.length - (inputFilename == null ? 0 : 1); ++i) {
-      if ("-h".equals(args[i].toLowerCase())
-          || "-help".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-h", args[i])
+          || StringUtils.equalsIgnoreCase("-help", args[i])) {
         usage();
         return 0;
       }
 
-      if ("-c".equals(args[i].toLowerCase())
-          || "-collect-prefixes".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-c", args[i])
+          || StringUtils.equalsIgnoreCase("-collect-prefixes", args[i])) {
         collecting = true;
         continue;
       }
 
       // these control the job digest
-      if ("-write-job-trace".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-write-job-trace", args[i])) {
         ++i;
         jobTraceFilename = new Path(args[i]);
         continue;
       }
 
-      if ("-single-line-job-traces".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-single-line-job-traces", args[i])) {
         prettyprintTrace = false;
         continue;
       }
 
-      if ("-omit-task-details".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-omit-task-details", args[i])) {
         omitTaskDetails = true;
         continue;
       }
 
-      if ("-write-topology".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-write-topology", args[i])) {
         ++i;
         topologyFilename = new Path(args[i]);
         continue;
       }
 
-      if ("-job-digest-spectra".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-job-digest-spectra", args[i])) {
         ArrayList<Integer> values = new ArrayList<Integer>();
 
         ++i;
@@ -384,13 +385,13 @@ public class HadoopLogsAnalyzer extends Configured implements Tool {
         continue;
       }
 
-      if ("-d".equals(args[i].toLowerCase())
-          || "-debug".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-d", args[i])
+          || StringUtils.equalsIgnoreCase("-debug", args[i])) {
         debug = true;
         continue;
       }
 
-      if ("-spreads".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-spreads", args[i])) {
         int min = Integer.parseInt(args[i + 1]);
         int max = Integer.parseInt(args[i + 2]);
 
@@ -404,22 +405,22 @@ public class HadoopLogsAnalyzer extends Configured implements Tool {
       }
 
       // These control log-wide CDF outputs
-      if ("-delays".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-delays", args[i])) {
         delays = true;
         continue;
       }
 
-      if ("-runtimes".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-runtimes", args[i])) {
         runtimes = true;
         continue;
       }
 
-      if ("-tasktimes".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-tasktimes", args[i])) {
         collectTaskTimes = true;
         continue;
       }
 
-      if ("-v1".equals(args[i].toLowerCase())) {
+      if (StringUtils.equalsIgnoreCase("-v1", args[i])) {
         version = 1;
         continue;
       }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobBuilder.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobBuilder.java b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobBuilder.java
index eaa9547..c5ae2fc 100644
--- a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobBuilder.java
+++ b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobBuilder.java
@@ -433,7 +433,7 @@ public class JobBuilder {
       return Values.SUCCESS;
     }
     
-    return Values.valueOf(name.toUpperCase());
+    return Values.valueOf(StringUtils.toUpperCase(name));
   }
 
   private void processTaskUpdatedEvent(TaskUpdatedEvent event) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTask.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTask.java b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTask.java
index 903d5fb..4a23fa6 100644
--- a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTask.java
+++ b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTask.java
@@ -28,6 +28,7 @@ import org.apache.hadoop.mapreduce.jobhistory.JhCounter;
 import org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup;
 import org.apache.hadoop.mapreduce.jobhistory.JhCounters;
 
+import org.apache.hadoop.util.StringUtils;
 import org.codehaus.jackson.annotate.JsonAnySetter;
 
 /**
@@ -243,7 +244,7 @@ public class LoggedTask implements DeepCompare {
   }
 
   private static String canonicalizeCounterName(String nonCanonicalName) {
-    String result = nonCanonicalName.toLowerCase();
+    String result = StringUtils.toLowerCase(nonCanonicalName);
 
     result = result.replace(' ', '|');
     result = result.replace('-', '|');

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
index d1b365e..c21eb39 100644
--- a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
+++ b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
@@ -23,6 +23,7 @@ import java.util.List;
 import java.util.Set;
 import java.util.TreeSet;
 
+import org.apache.hadoop.util.StringUtils;
 import org.codehaus.jackson.annotate.JsonAnySetter;
 
 // HACK ALERT!!!  This "should" have have two subclasses, which might be called
@@ -611,7 +612,7 @@ public class LoggedTaskAttempt implements DeepCompare {
   }
   
   private static String canonicalizeCounterName(String nonCanonicalName) {
-    String result = nonCanonicalName.toLowerCase();
+    String result = StringUtils.toLowerCase(nonCanonicalName);
 
     result = result.replace(' ', '|');
     result = result.replace('-', '|');

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/Environment.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/Environment.java b/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/Environment.java
index 98d8aa03..bc92b71 100644
--- a/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/Environment.java
+++ b/hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/Environment.java
@@ -25,6 +25,7 @@ import java.util.*;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.util.StringUtils;
 
 /**
  * This is a class used to get the current environment
@@ -43,7 +44,7 @@ public class Environment extends Properties {
     // http://lopica.sourceforge.net/os.html
     String command = null;
     String OS = System.getProperty("os.name");
-    String lowerOs = OS.toLowerCase();
+    String lowerOs = StringUtils.toLowerCase(OS);
     if (OS.indexOf("Windows") > -1) {
       command = "cmd /C set";
     } else if (lowerOs.indexOf("ix") > -1 || lowerOs.indexOf("linux") > -1

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
index de8f740..108ad0b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/ApplicationCLI.java
@@ -36,6 +36,7 @@ import org.apache.commons.cli.Option;
 import org.apache.commons.cli.Options;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.ToolRunner;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
@@ -173,7 +174,7 @@ public class ApplicationCLI extends YarnCLI {
           if (types != null) {
             for (String type : types) {
               if (!type.trim().isEmpty()) {
-                appTypes.add(type.toUpperCase().trim());
+                appTypes.add(StringUtils.toUpperCase(type).trim());
               }
             }
           }
@@ -191,8 +192,8 @@ public class ApplicationCLI extends YarnCLI {
                   break;
                 }
                 try {
-                  appStates.add(YarnApplicationState.valueOf(state
-                      .toUpperCase().trim()));
+                  appStates.add(YarnApplicationState.valueOf(
+                      StringUtils.toUpperCase(state).trim()));
                 } catch (IllegalArgumentException ex) {
                   sysout.println("The application state " + state
                       + " is invalid.");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
index d603626..4f0ddfe 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/NodeCLI.java
@@ -111,7 +111,8 @@ public class NodeCLI extends YarnCLI {
         if (types != null) {
           for (String type : types) {
             if (!type.trim().isEmpty()) {
-              nodeStates.add(NodeState.valueOf(type.trim().toUpperCase()));
+              nodeStates.add(NodeState.valueOf(
+                  org.apache.hadoop.util.StringUtils.toUpperCase(type.trim())));
             }
           }
         }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
index a8996f0..ad009d6 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/impl/pb/GetApplicationsRequestPBImpl.java
@@ -26,6 +26,7 @@ import java.util.Set;
 import org.apache.commons.lang.math.LongRange;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.protocolrecords.ApplicationsRequestScope;
 import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
 import org.apache.hadoop.yarn.api.records.YarnApplicationState;
@@ -213,7 +214,7 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
     // Convert applicationTags to lower case and add
     this.applicationTags = new HashSet<String>();
     for (String tag : tags) {
-      this.applicationTags.add(tag.toLowerCase());
+      this.applicationTags.add(StringUtils.toLowerCase(tag));
     }
   }
 
@@ -258,7 +259,8 @@ public class GetApplicationsRequestPBImpl extends GetApplicationsRequest {
   public void setApplicationStates(Set<String> applicationStates) {
     EnumSet<YarnApplicationState> appStates = null;
     for (YarnApplicationState state : YarnApplicationState.values()) {
-      if (applicationStates.contains(state.name().toLowerCase())) {
+      if (applicationStates.contains(
+          StringUtils.toLowerCase(state.name()))) {
         if (appStates == null) {
           appStates = EnumSet.of(state);
         } else {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
index 303b437..67e3a84 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ApplicationSubmissionContextPBImpl.java
@@ -23,6 +23,7 @@ import java.util.Set;
 
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
 import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
@@ -291,7 +292,7 @@ extends ApplicationSubmissionContext {
     // Convert applicationTags to lower case and add
     this.applicationTags = new HashSet<String>();
     for (String tag : tags) {
-      this.applicationTags.add(tag.toLowerCase());
+      this.applicationTags.add(StringUtils.toLowerCase(tag));
     }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
index 870aa95..bd9c907 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
@@ -23,7 +23,6 @@ import java.io.FileNotFoundException;
 import java.io.IOException;
 import java.net.URISyntaxException;
 import java.security.PrivilegedExceptionAction;
-import java.util.Locale;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.Future;
@@ -47,6 +46,7 @@ import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.RunJar;
 import org.apache.hadoop.util.Shell;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.LocalResource;
 import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
 
@@ -272,7 +272,7 @@ public class FSDownload implements Callable<Path> {
   private long unpack(File localrsrc, File dst) throws IOException {
     switch (resource.getType()) {
     case ARCHIVE: {
-      String lowerDst = dst.getName().toLowerCase(Locale.ENGLISH);
+      String lowerDst = StringUtils.toLowerCase(dst.getName());
       if (lowerDst.endsWith(".jar")) {
         RunJar.unJar(localrsrc, dst);
       } else if (lowerDst.endsWith(".zip")) {
@@ -291,7 +291,7 @@ public class FSDownload implements Callable<Path> {
     }
     break;
     case PATTERN: {
-      String lowerDst = dst.getName().toLowerCase(Locale.ENGLISH);
+      String lowerDst = StringUtils.toLowerCase(dst.getName());
       if (lowerDst.endsWith(".jar")) {
         String p = resource.getPattern();
         RunJar.unJar(localrsrc, dst,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletGen.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletGen.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletGen.java
index c848828..5acb3f3 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletGen.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/hamlet/HamletGen.java
@@ -26,7 +26,6 @@ import java.lang.annotation.Annotation;
 import java.lang.reflect.Method;
 import java.lang.reflect.ParameterizedType;
 import java.lang.reflect.Type;
-import java.util.Locale;
 import java.util.Set;
 import java.util.regex.Pattern;
 
@@ -35,6 +34,7 @@ import org.apache.commons.cli.GnuParser;
 import org.apache.commons.cli.HelpFormatter;
 import org.apache.commons.cli.Options;
 import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.webapp.WebAppException;
 
 import org.slf4j.Logger;
@@ -241,7 +241,7 @@ public class HamletGen {
     puts(indent, "\n",
          "private <T extends _> ", retName, "<T> ", methodName,
          "_(T e, boolean inline) {\n",
-         "  return new ", retName, "<T>(\"", retName.toLowerCase(Locale.US),
+         "  return new ", retName, "<T>(\"", StringUtils.toLowerCase(retName),
          "\", e, opt(", !endTagOptional.contains(retName), ", inline, ",
          retName.equals("PRE"), ")); }");
   }
@@ -258,7 +258,7 @@ public class HamletGen {
       puts(0, ") {");
       puts(indent,
            topMode ? "" : "  closeAttrs();\n",
-           "  return ", retName.toLowerCase(Locale.US), "_(this, ",
+           "  return ", StringUtils.toLowerCase(retName), "_" + "(this, ",
            isInline(className, retName), ");\n", "}");
     } else if (params.length == 1) {
       puts(0, "String selector) {");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryUtils.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryUtils.java
index 68dc84e..06a56d8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryUtils.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/client/binding/RegistryUtils.java
@@ -88,7 +88,8 @@ public class RegistryUtils {
    * @return the converted username
    */
   public static String convertUsername(String username) {
-    String converted= username.toLowerCase(Locale.ENGLISH);
+    String converted =
+        org.apache.hadoop.util.StringUtils.toLowerCase(username);
     int atSymbol = converted.indexOf('@');
     if (atSymbol > 0) {
       converted = converted.substring(0, atSymbol);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
index 2040f57..2af4027 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
@@ -31,6 +31,7 @@ import javax.ws.rs.QueryParam;
 import javax.ws.rs.core.Context;
 import javax.ws.rs.core.MediaType;
 
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.YarnApplicationState;
 import org.apache.hadoop.yarn.server.api.ApplicationContext;
 import org.apache.hadoop.yarn.server.webapp.WebServices;
@@ -147,7 +148,8 @@ public class AHSWebServices extends WebServices {
     }
     Set<String> appStates = parseQueries(statesQuery, true);
     for (String appState : appStates) {
-      switch (YarnApplicationState.valueOf(appState.toUpperCase())) {
+      switch (YarnApplicationState.valueOf(
+          StringUtils.toUpperCase(appState))) {
         case FINISHED:
         case FAILED:
         case KILLED:

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
index 0907f2c..915e3f2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/webapp/TimelineWebServices.java
@@ -52,6 +52,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Unstable;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineDomain;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineDomains;
 import org.apache.hadoop.yarn.api.records.timeline.TimelineEntities;
@@ -417,7 +418,7 @@ public class TimelineWebServices {
     String[] strs = str.split(delimiter);
     List<Field> fieldList = new ArrayList<Field>();
     for (String s : strs) {
-      s = s.trim().toUpperCase();
+      s = StringUtils.toUpperCase(s.trim());
       if (s.equals("EVENTS")) {
         fieldList.add(Field.EVENTS);
       } else if (s.equals("LASTEVENTONLY")) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
index 385d10a..6d94737 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/WebServices.java
@@ -31,6 +31,7 @@ import javax.ws.rs.WebApplicationException;
 
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.AuthorizationException;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptReport;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
@@ -162,9 +163,9 @@ public class WebServices {
         break;
       }
 
-      if (checkAppStates
-          && !appStates.contains(appReport.getYarnApplicationState().toString()
-            .toLowerCase())) {
+      if (checkAppStates &&
+          !appStates.contains(StringUtils.toLowerCase(
+              appReport.getYarnApplicationState().toString()))) {
         continue;
       }
       if (finalStatusQuery != null && !finalStatusQuery.isEmpty()) {
@@ -184,9 +185,9 @@ public class WebServices {
           continue;
         }
       }
-      if (checkAppTypes
-          && !appTypes.contains(appReport.getApplicationType().trim()
-            .toLowerCase())) {
+      if (checkAppTypes &&
+          !appTypes.contains(
+              StringUtils.toLowerCase(appReport.getApplicationType().trim()))) {
         continue;
       }
 
@@ -368,7 +369,8 @@ public class WebServices {
               if (isState) {
                 try {
                   // enum string is in the uppercase
-                  YarnApplicationState.valueOf(paramStr.trim().toUpperCase());
+                  YarnApplicationState.valueOf(
+                      StringUtils.toUpperCase(paramStr.trim()));
                 } catch (RuntimeException e) {
                   YarnApplicationState[] stateArray =
                       YarnApplicationState.values();
@@ -378,7 +380,7 @@ public class WebServices {
                       + allAppStates);
                 }
               }
-              params.add(paramStr.trim().toLowerCase());
+              params.add(StringUtils.toLowerCase(paramStr.trim()));
             }
           }
         }


[27/43] hadoop git commit: YARN-3270. Fix node label expression not getting set in ApplicationSubmissionContext (Rohit Agarwal via wangda)

Posted by zj...@apache.org.
YARN-3270. Fix node label expression not getting set in ApplicationSubmissionContext (Rohit Agarwal via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/abac6eb9
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/abac6eb9
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/abac6eb9

Branch: refs/heads/YARN-2928
Commit: abac6eb9d530bb1e6ff58ec3c75b17d840a0ee3f
Parents: c5eac9c
Author: Wangda Tan <wa...@apache.org>
Authored: Mon Mar 2 17:21:19 2015 -0800
Committer: Wangda Tan <wa...@apache.org>
Committed: Mon Mar 2 17:21:19 2015 -0800

----------------------------------------------------------------------
 hadoop-yarn-project/CHANGES.txt                                   | 3 +++
 .../hadoop/yarn/api/records/ApplicationSubmissionContext.java     | 1 +
 2 files changed, 4 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/abac6eb9/hadoop-yarn-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index c7dac60..d07aa26 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -683,6 +683,9 @@ Release 2.7.0 - UNRELEASED
     all Schedulers even when using ParameterizedSchedulerTestBase. 
     (Anubhav Dhoot via devaraj)
 
+    YARN-3270. Fix node label expression not getting set in 
+    ApplicationSubmissionContext (Rohit Agarwal via wangda)
+
 Release 2.6.0 - 2014-11-18
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/abac6eb9/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java
index f1ebbfe..c4014fc 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ApplicationSubmissionContext.java
@@ -155,6 +155,7 @@ public abstract class ApplicationSubmissionContext {
     context.setMaxAppAttempts(maxAppAttempts);
     context.setApplicationType(applicationType);
     context.setKeepContainersAcrossApplicationAttempts(keepContainers);
+    context.setNodeLabelExpression(appLabelExpression);
     context.setAMContainerResourceRequest(resourceRequest);
     return context;
   }


[07/43] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
new file mode 100644
index 0000000..1812a44
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md
@@ -0,0 +1,233 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop: Fair Scheduler
+======================
+
+* [Purpose](#Purpose)
+* [Introduction](#Introduction)
+* [Hierarchical queues with pluggable policies](#Hierarchical_queues_with_pluggable_policies)
+* [Automatically placing applications in queues](#Automatically_placing_applications_in_queues)
+* [Installation](#Installation)
+* [Configuration](#Configuration)
+    * [Properties that can be placed in yarn-site.xml](#Properties_that_can_be_placed_in_yarn-site.xml)
+    * [Allocation file format](#Allocation_file_format)
+    * [Queue Access Control Lists](#Queue_Access_Control_Lists)
+* [Administration](#Administration)
+    * [Modifying configuration at runtime](#Modifying_configuration_at_runtime)
+    * [Monitoring through web UI](#Monitoring_through_web_UI)
+    * [Moving applications between queues](#Moving_applications_between_queues)
+
+##Purpose
+
+This document describes the `FairScheduler`, a pluggable scheduler for Hadoop that allows YARN applications to share resources in large clusters fairly.
+
+##Introduction
+
+Fair scheduling is a method of assigning resources to applications such that all apps get, on average, an equal share of resources over time. Hadoop NextGen is capable of scheduling multiple resource types. By default, the Fair Scheduler bases scheduling fairness decisions only on memory. It can be configured to schedule with both memory and CPU, using the notion of Dominant Resource Fairness developed by Ghodsi et al. When there is a single app running, that app uses the entire cluster. When other apps are submitted, resources that free up are assigned to the new apps, so that each app eventually gets roughly the same amount of resources. Unlike the default Hadoop scheduler, which forms a queue of apps, this lets short apps finish in reasonable time while not starving long-lived apps. It is also a reasonable way to share a cluster between a number of users. Finally, fair sharing can also work with app priorities - the priorities are used as weights to determine the fraction of total resources that each app should get.
+
+The scheduler organizes apps further into "queues", and shares resources fairly between these queues. By default, all users share a single queue, named "default". If an app specifically lists a queue in a container resource request, the request is submitted to that queue. It is also possible to assign queues based on the user name included with the request through configuration. Within each queue, a scheduling policy is used to share resources between the running apps. The default is memory-based fair sharing, but FIFO and multi-resource with Dominant Resource Fairness can also be configured. Queues can be arranged in a hierarchy to divide resources and configured with weights to share the cluster in specific proportions.
+
+In addition to providing fair sharing, the Fair Scheduler allows assigning guaranteed minimum shares to queues, which is useful for ensuring that certain users, groups or production applications always get sufficient resources. When a queue contains apps, it gets at least its minimum share, but when the queue does not need its full guaranteed share, the excess is split between other running apps. This lets the scheduler guarantee capacity for queues while utilizing resources efficiently when these queues don't contain applications.
+
+The Fair Scheduler lets all apps run by default, but it is also possible to limit the number of running apps per user and per queue through the config file. This can be useful when a user must submit hundreds of apps at once, or in general to improve performance if running too many apps at once would cause too much intermediate data to be created or too much context-switching. Limiting the apps does not cause any subsequently submitted apps to fail, only to wait in the scheduler's queue until some of the user's earlier apps finish.
+
+##Hierarchical queues with pluggable policies
+
+The fair scheduler supports hierarchical queues. All queues descend from a queue named "root". Available resources are distributed among the children of the root queue in the typical fair scheduling fashion. Then, the children distribute the resources assigned to them to their children in the same fashion. Applications may only be scheduled on leaf queues. Queues can be specified as children of other queues by placing them as sub-elements of their parents in the fair scheduler allocation file.
+
+A queue's name starts with the names of its parents, with periods as separators. So a queue named "queue1" under the root queue would be referred to as "root.queue1", and a queue named "queue2" under a queue named "parent1" would be referred to as "root.parent1.queue2". When referring to queues, the root part of the name is optional, so queue1 could be referred to as just "queue1", and queue2 as just "parent1.queue2".
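+
+For illustration, a minimal allocation file fragment (queue names here are made up) that defines a leaf queue whose full name is "root.parent1.queue2":
+
+```xml
+<allocations>
+  <queue name="parent1">
+    <queue name="queue2">
+      <maxRunningApps>10</maxRunningApps>
+    </queue>
+  </queue>
+</allocations>
+```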
+
+Additionally, the fair scheduler allows setting a different custom policy for each queue to allow sharing the queue's resources in any which way the user wants. A custom policy can be built by extending `org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.SchedulingPolicy`. FifoPolicy, FairSharePolicy (default), and DominantResourceFairnessPolicy are built-in and can be readily used.
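+
+For example, a sketch of selecting a policy per queue in the allocation file (the queue names and the custom policy class are hypothetical):
+
+```xml
+<queue name="batch">
+  <schedulingPolicy>drf</schedulingPolicy>
+</queue>
+<queue name="experimental">
+  <!-- any class extending SchedulingPolicy may be named here -->
+  <schedulingPolicy>com.example.MyCustomPolicy</schedulingPolicy>
+</queue>
+```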
+
+Certain add-ons that existed in the original (MR1) Fair Scheduler are not yet supported. Among them is the use of custom policies governing priority "boosting" of certain apps.
+
+##Automatically placing applications in queues
+
+The Fair Scheduler allows administrators to configure policies that automatically place submitted applications into appropriate queues. Placement can depend on the user and groups of the submitter and the requested queue passed by the application. A policy consists of a set of rules that are applied sequentially to classify an incoming application. Each rule either places the app into a queue, rejects it, or continues on to the next rule. Refer to the allocation file format below for how to configure these policies.
+
+##Installation
+
+To use the Fair Scheduler first assign the appropriate scheduler class in yarn-site.xml:
+
+    <property>
+      <name>yarn.resourcemanager.scheduler.class</name>
+      <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
+    </property>
+
+##Configuration
+
+Customizing the Fair Scheduler typically involves altering two files. First, scheduler-wide options can be set by adding configuration properties in the yarn-site.xml file in your existing configuration directory. Second, in most cases users will want to create an allocation file listing which queues exist and their respective weights and capacities. The allocation file is reloaded every 10 seconds, allowing changes to be made on the fly.
+
+###Properties that can be placed in yarn-site.xml
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.scheduler.fair.allocation.file` | Path to allocation file. An allocation file is an XML manifest describing queues and their properties, in addition to certain policy defaults. This file must be in the XML format described in the next section. If a relative path is given, the file is searched for on the classpath (which typically includes the Hadoop conf directory). Defaults to fair-scheduler.xml. |
+| `yarn.scheduler.fair.user-as-default-queue` | Whether to use the username associated with the allocation as the default queue name, in the event that a queue name is not specified. If this is set to "false" or unset, all jobs have a shared default queue, named "default". Defaults to true. If a queue placement policy is given in the allocations file, this property is ignored. |
+| `yarn.scheduler.fair.preemption` | Whether to use preemption. Defaults to false. |
+| `yarn.scheduler.fair.preemption.cluster-utilization-threshold` | The utilization threshold after which preemption kicks in. The utilization is computed as the maximum ratio of usage to capacity among all resources. Defaults to 0.8f. |
+| `yarn.scheduler.fair.sizebasedweight` | Whether to assign shares to individual apps based on their size, rather than providing an equal share to all apps regardless of size. When set to true, apps are weighted by the natural logarithm of one plus the app's total requested memory, divided by the natural logarithm of 2. Defaults to false. |
+| `yarn.scheduler.fair.assignmultiple` | Whether to allow multiple container assignments in one heartbeat. Defaults to false. |
+| `yarn.scheduler.fair.max.assign` | If assignmultiple is true, the maximum amount of containers that can be assigned in one heartbeat. Defaults to -1, which sets no limit. |
+| `yarn.scheduler.fair.locality.threshold.node` | For applications that request containers on particular nodes, the number of scheduling opportunities since the last container assignment to wait before accepting a placement on another node. Expressed as a float between 0 and 1, which, as a fraction of the cluster size, is the number of scheduling opportunities to pass up. The default value of -1.0 means don't pass up any scheduling opportunities. |
+| `yarn.scheduler.fair.locality.threshold.rack` | For applications that request containers on particular racks, the number of scheduling opportunities since the last container assignment to wait before accepting a placement on another rack. Expressed as a float between 0 and 1, which, as a fraction of the cluster size, is the number of scheduling opportunities to pass up. The default value of -1.0 means don't pass up any scheduling opportunities. |
+| `yarn.scheduler.fair.allow-undeclared-pools` | If this is true, new queues can be created at application submission time, whether because they are specified as the application's queue by the submitter or because they are placed there by the user-as-default-queue property. If this is false, any time an app would be placed in a queue that is not specified in the allocations file, it is placed in the "default" queue instead. Defaults to true. If a queue placement policy is given in the allocations file, this property is ignored. |
+| `yarn.scheduler.fair.update-interval-ms` | The interval at which to lock the scheduler and recalculate fair shares, recalculate demand, and check whether anything is due for preemption. Defaults to 500 ms. |
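+
+As a quick illustration, a yarn-site.xml fragment (the allocation file path here is hypothetical) that points the scheduler at an allocation file and turns on preemption:
+
+    <property>
+      <name>yarn.scheduler.fair.allocation.file</name>
+      <value>/etc/hadoop/conf/fair-scheduler.xml</value>
+    </property>
+
+    <property>
+      <name>yarn.scheduler.fair.preemption</name>
+      <value>true</value>
+    </property>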
+
+###Allocation file format
+
+The allocation file must be in XML format. The format contains five types of elements:
+
+* **Queue elements**: which represent queues. Queue elements can take an optional attribute 'type', which when set to 'parent' makes it a parent queue. This is useful when we want to create a parent queue without configuring any leaf queues. Each queue element may contain the following properties:
+
+    * minResources: minimum resources the queue is entitled to, in the form "X mb, Y vcores". For the single-resource fairness policy, the vcores value is ignored. If a queue's minimum share is not satisfied, it will be offered available resources before any other queue under the same parent. Under the single-resource fairness policy, a queue is considered unsatisfied if its memory usage is below its minimum memory share. Under dominant resource fairness, a queue is considered unsatisfied if its usage for its dominant resource with respect to the cluster capacity is below its minimum share for that resource. If multiple queues are unsatisfied in this situation, resources go to the queue with the smallest ratio between relevant resource usage and minimum. Note that it is possible that a queue that is below its minimum may not immediately get up to its minimum when it submits an application, because already-running jobs may be using those resources.
+
+    * maxResources: maximum resources a queue is allowed, in the form "X mb, Y vcores". For the single-resource fairness policy, the vcores value is ignored. A queue will never be assigned a container that would put its aggregate usage over this limit.
+
+    * maxRunningApps: limit the number of apps from the queue to run at once
+
+    * maxAMShare: limit the fraction of the queue's fair share that can be used to run application masters. This property can only be used for leaf queues. For example, if set to 1.0f, then AMs in the leaf queue can take up to 100% of both the memory and CPU fair share. The value of -1.0f will disable this feature and the amShare will not be checked. The default value is 0.5f.
+
+    * weight: to share the cluster non-proportionally with other queues. Weights default to 1, and a queue with weight 2 should receive approximately twice as many resources as a queue with the default weight.
+
+    * schedulingPolicy: to set the scheduling policy of any queue. The allowed values are "fifo"/"fair"/"drf" or any class that extends `org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.SchedulingPolicy`. Defaults to "fair". If "fifo", apps with earlier submit times are given preference for containers, but apps submitted later may run concurrently if there is leftover space on the cluster after satisfying the earlier app's requests.
+
+    * aclSubmitApps: a list of users and/or groups that can submit apps to the queue. Refer to the ACLs section below for more info on the format of this list and how queue ACLs work.
+
+    * aclAdministerApps: a list of users and/or groups that can administer a queue. Currently the only administrative action is killing an application. Refer to the ACLs section below for more info on the format of this list and how queue ACLs work.
+
+    * minSharePreemptionTimeout: number of seconds the queue is under its minimum share before it will try to preempt containers to take resources from other queues. If not set, the queue will inherit the value from its parent queue.
+
+    * fairSharePreemptionTimeout: number of seconds the queue is under its fair share threshold before it will try to preempt containers to take resources from other queues. If not set, the queue will inherit the value from its parent queue.
+
+    * fairSharePreemptionThreshold: the fair share preemption threshold for the queue. If the queue waits fairSharePreemptionTimeout without receiving fairSharePreemptionThreshold\*fairShare resources, it is allowed to preempt containers to take resources from other queues. If not set, the queue will inherit the value from its parent queue.
+
+* **User elements**: which represent settings governing the behavior of individual users. They can contain a single property: maxRunningApps, a limit on the number of running apps for a particular user.
+
+* **A userMaxAppsDefault element**: which sets the default running app limit for any users whose limit is not otherwise specified.
+
+* **A defaultFairSharePreemptionTimeout element**: which sets the fair share preemption timeout for the root queue; overridden by fairSharePreemptionTimeout element in root queue.
+
+* **A defaultMinSharePreemptionTimeout element**: which sets the min share preemption timeout for the root queue; overridden by minSharePreemptionTimeout element in root queue.
+
+* **A defaultFairSharePreemptionThreshold element**: which sets the fair share preemption threshold for the root queue; overridden by fairSharePreemptionThreshold element in root queue.
+
+* **A queueMaxAppsDefault element**: which sets the default running app limit for queues; overridden by the maxRunningApps element in each queue.
+
+* **A queueMaxAMShareDefault element**: which sets the default AM resource limit for queues; overridden by the maxAMShare element in each queue.
+
+* **A defaultQueueSchedulingPolicy element**: which sets the default scheduling policy for queues; overridden by the schedulingPolicy element in each queue if specified. Defaults to "fair".
+
+* **A queuePlacementPolicy element**: which contains a list of rule elements that tell the scheduler how to place incoming apps into queues. Rules are applied in the order that they are listed. Rules may take arguments. All rules accept the "create" argument, which indicates whether the rule can create a new queue. "Create" defaults to true; if set to false and the rule would place the app in a queue that is not configured in the allocations file, we continue on to the next rule. The last rule must be one that can never issue a continue. Valid rules are:
+
+    * specified: the app is placed into the queue it requested. If the app requested no queue, i.e. it specified "default", we continue. Queue names starting or ending with a period, e.g. ".q1" or "q1.", are rejected.
+
+    * user: the app is placed into a queue with the name of the user who submitted it. Periods in the username will be replaced with "\_dot\_", i.e. the queue name for user "first.last" is "first\_dot\_last".
+
+    * primaryGroup: the app is placed into a queue with the name of the primary group of the user who submitted it. Periods in the group name will be replaced with "\_dot\_", i.e. the queue name for group "one.two" is "one\_dot\_two".
+
+    * secondaryGroupExistingQueue: the app is placed into a queue with a name that matches a secondary group of the user who submitted it. The first secondary group that matches a configured queue will be selected. Periods in group names will be replaced with "\_dot\_", i.e. a user with "one.two" as one of their secondary groups would be placed into the "one\_dot\_two" queue, if such a queue exists.
+
+    * nestedUserQueue: the app is placed into a queue with the name of the user under the queue suggested by the nested rule. This is similar to the 'user' rule, the difference being that with the 'nestedUserQueue' rule user queues can be created under any parent queue, while the 'user' rule creates user queues only under the root queue. Note that the nestedUserQueue rule is applied only if the nested rule returns a parent queue. One can configure a parent queue either by setting the 'type' attribute of the queue to 'parent' or by configuring at least one leaf queue under it, which makes it a parent. See the example allocation file for a sample use case.
+
+    * default: the app is placed into the queue specified in the 'queue' attribute of the default rule. If the 'queue' attribute is not specified, the app is placed into the 'root.default' queue.
+
+    * reject: the app is rejected.
+
+    An example allocation file is given here:
+
+```xml
+<?xml version="1.0"?>
+<allocations>
+  <queue name="sample_queue">
+    <minResources>10000 mb,0vcores</minResources>
+    <maxResources>90000 mb,0vcores</maxResources>
+    <maxRunningApps>50</maxRunningApps>
+    <maxAMShare>0.1</maxAMShare>
+    <weight>2.0</weight>
+    <schedulingPolicy>fair</schedulingPolicy>
+    <queue name="sample_sub_queue">
+      <aclSubmitApps>charlie</aclSubmitApps>
+      <minResources>5000 mb,0vcores</minResources>
+    </queue>
+  </queue>
+
+  <queueMaxAMShareDefault>0.5</queueMaxAMShareDefault>
+
+  <!-- Queue 'secondary_group_queue' is a parent queue and may have
+       user queues under it -->
+  <queue name="secondary_group_queue" type="parent">
+    <weight>3.0</weight>
+  </queue>
+  
+  <user name="sample_user">
+    <maxRunningApps>30</maxRunningApps>
+  </user>
+  <userMaxAppsDefault>5</userMaxAppsDefault>
+  
+  <queuePlacementPolicy>
+    <rule name="specified" />
+    <rule name="primaryGroup" create="false" />
+    <rule name="nestedUserQueue">
+        <rule name="secondaryGroupExistingQueue" create="false" />
+    </rule>
+    <rule name="default" queue="sample_queue"/>
+  </queuePlacementPolicy>
+</allocations>
+```
+
+  Note that for backwards compatibility with the original FairScheduler, "queue" elements can instead be named as "pool" elements.
+
+###Queue Access Control Lists
+
+Queue Access Control Lists (ACLs) allow administrators to control who may take actions on particular queues. They are configured with the aclSubmitApps and aclAdministerApps properties, which can be set per queue. Currently the only supported administrative action is killing an application. Anybody who may administer a queue may also submit applications to it. These properties take values in a format like "user1,user2 group1,group2" or " group1,group2". An action on a queue will be permitted if its user or group is in the ACL of that queue or in the ACL of any of that queue's ancestors. So if queue2 is inside queue1, and user1 is in queue1's ACL, and user2 is in queue2's ACL, then both users may submit to queue2.
+
+**Note:** The delimiter is a space character. To specify only ACL groups, begin the value with a space character.
+
+The root queue's ACLs are "\*" by default which, because ACLs are passed down, means that everybody may submit to and kill applications from every queue. To start restricting access, change the root queue's ACLs to something other than "\*".
+
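+As an illustrative sketch (the queue, user, and group names here are hypothetical), ACLs are set per queue in the allocation file:
+
+```xml
+<?xml version="1.0"?>
+<allocations>
+  <queue name="queue1">
+    <!-- user1 plus anyone in group1 may submit; only members of admins
+         may administer (and, implicitly, also submit) -->
+    <aclSubmitApps>user1 group1</aclSubmitApps>
+    <aclAdministerApps>admins</aclAdministerApps>
+    <queue name="queue2">
+      <!-- user2 may submit here; user1 may also submit because ACLs
+           are inherited from the ancestor queue1 -->
+      <aclSubmitApps>user2</aclSubmitApps>
+    </queue>
+  </queue>
+</allocations>
+```
+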
+##Administration
+
+The fair scheduler provides support for administration at runtime through a few mechanisms:
+
+###Modifying configuration at runtime
+
+It is possible to modify minimum shares, limits, weights, preemption timeouts and queue scheduling policies at runtime by editing the allocation file. The scheduler will reload this file 10-15 seconds after it sees that it was modified.
+
+###Monitoring through web UI
+
+Current applications, queues, and fair shares can be examined through the ResourceManager's web interface, at `http://*ResourceManager URL*/cluster/scheduler`.
+
+The following fields can be seen for each queue on the web interface:
+
+* Used Resources - The sum of resources allocated to containers within the queue.
+
+* Num Active Applications - The number of applications in the queue that have received at least one container.
+
+* Num Pending Applications - The number of applications in the queue that have not yet received any containers.
+
+* Min Resources - The configured minimum resources that are guaranteed to the queue.
+
+* Max Resources - The configured maximum resources that are allowed to the queue.
+
+* Instantaneous Fair Share - The queue's instantaneous fair share of resources. These shares consider only active queues (those with running applications), and are used for scheduling decisions. Queues may be allocated resources beyond their shares when other queues aren't using them. A queue whose resource consumption lies at or below its instantaneous fair share will never have its containers preempted.
+
+* Steady Fair Share - The queue's steady fair share of resources. These shares consider all the queues irrespective of whether they are active (have running applications) or not. These are computed less frequently and change only when the configuration or capacity changes. They are meant to provide visibility into resources the user can expect, and hence are displayed in the Web UI.
+
+###Moving applications between queues
+
+The Fair Scheduler supports moving a running application to a different queue. This can be useful for moving an important application to a higher priority queue, or for moving an unimportant application to a lower priority queue. Apps can be moved by running `yarn application -movetoqueue appID -queue targetQueueName`.
+
+When an application is moved to a queue, its existing allocations are counted toward the new queue's allocations instead of the old queue's for purposes of determining fairness. An attempt to move an application to a queue will fail if adding the app's resources to that queue would violate its maxRunningApps or maxResources constraints.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md
new file mode 100644
index 0000000..6341c60
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManager.md
@@ -0,0 +1,57 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+NodeManager Overview
+=====================
+
+* [Overview](#Overview)
+* [Health Checker Service](#Health_checker_service)
+    * [Disk Checker](#Disk_Checker)
+    * [External Health Script](#External_Health_Script)
+
+Overview
+--------
+
+The NodeManager is responsible for launching and managing containers on a node. Containers execute tasks as specified by the AppMaster.
+
+Health Checker Service
+----------------------
+
+The NodeManager runs services to determine the health of the node it is executing on. The services perform checks on the disk as well as any user-specified tests. If any health check fails, the NodeManager marks the node as unhealthy and communicates this to the ResourceManager, which then stops assigning containers to the node. Communication of the node status is done as part of the heartbeat between the NodeManager and the ResourceManager. The intervals at which the disk checker and health monitor (described below) run don't affect the heartbeat intervals. When the heartbeat takes place, the status of both checks is used to determine the health of the node.
+
+###Disk Checker
+
+  The disk checker checks the state of the disks that the NodeManager is configured to use (local-dirs and log-dirs, configured using yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs respectively). The checks include permissions and free disk space. It also checks that the filesystem isn't in a read-only state. The checks are run at 2 minute intervals by default but can be configured to run as often as the user desires. If a disk fails the check, the NodeManager stops using that particular disk but still reports the node status as healthy. However, if a number of disks fail the check (the number can be configured, as explained below), then the node is reported as unhealthy to the ResourceManager and new containers will not be assigned to the node. In addition, once a disk is marked as unhealthy, the NodeManager stops checking it to see if it has recovered (e.g. the disk became full and was then cleaned up). The only way for the NodeManager to use that disk again is to restart the software on the node. The following configuration parameters can be used to modify the disk checks:
+
+| Configuration Name | Allowed Values | Description |
+|:---- |:---- |:---- |
+| `yarn.nodemanager.disk-health-checker.enable` | true, false | Enable or disable the disk health checker service |
+| `yarn.nodemanager.disk-health-checker.interval-ms` | Positive integer | The interval, in milliseconds, at which the disk checker should run; the default value is 2 minutes |
+| `yarn.nodemanager.disk-health-checker.min-healthy-disks` | Float between 0-1 | The minimum fraction of disks that must pass the check for the NodeManager to mark the node as healthy; the default is 0.25 |
+| `yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage` | Float between 0-100 | The maximum percentage of disk space that may be utilized before a disk is marked as unhealthy by the disk checker service. This check is run for every disk used by the NodeManager. The default value is 100 i.e. the entire disk can be used. |
+| `yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb` | Integer | The minimum amount of free space that must be available on the disk for the disk checker service to mark the disk as healthy. This check is run for every disk used by the NodeManager. The default value is 0 i.e. the entire disk can be used. |
+
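+For example, a yarn-site.xml snippet that tightens the disk checks might look like the following (the values shown are illustrative, not recommendations):
+
+```xml
+<property>
+  <name>yarn.nodemanager.disk-health-checker.enable</name>
+  <value>true</value>
+</property>
+<property>
+  <!-- Run the disk checker every minute instead of the default 2 minutes -->
+  <name>yarn.nodemanager.disk-health-checker.interval-ms</name>
+  <value>60000</value>
+</property>
+<property>
+  <!-- Mark the node unhealthy if fewer than half of the disks pass the check -->
+  <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
+  <value>0.5</value>
+</property>
+<property>
+  <!-- Consider a disk unhealthy once it is 90% full -->
+  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
+  <value>90.0</value>
+</property>
+```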
+
+###External Health Script
+
+  Users may specify their own health checker script that will be invoked by the health checker service. Users may specify a timeout as well as options to be passed to the script. If the script exits with a non-zero exit code, times out or results in an exception being thrown, the node is marked as unhealthy. Please note that if the script cannot be executed due to permissions, an incorrect path, etc., then it counts as a failure and the node will be reported as unhealthy. Please note that specifying a health check script is not mandatory. If no script is specified, only the disk checker status will be used to determine the health of the node. The following configuration parameters can be used to set the health script:
+
+| Configuration Name | Allowed Values | Description |
+|:---- |:---- |:---- |
+| `yarn.nodemanager.health-checker.interval-ms` | Positive integer | The interval, in milliseconds, at which the health checker service runs; the default value is 10 minutes. |
+| `yarn.nodemanager.health-checker.script.timeout-ms` | Positive integer | The timeout for the health script that's executed; the default value is 20 minutes. |
+| `yarn.nodemanager.health-checker.script.path` | String | Absolute path to the health check script to be run. |
+| `yarn.nodemanager.health-checker.script.opts` | String | Arguments to be passed to the script when the script is executed. |
+
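+A sketch of a yarn-site.xml snippet wiring up a custom script (the script path, argument, and intervals below are hypothetical):
+
+```xml
+<property>
+  <!-- Hypothetical script location; must be an absolute path -->
+  <name>yarn.nodemanager.health-checker.script.path</name>
+  <value>/usr/local/bin/nm-health-check.sh</value>
+</property>
+<property>
+  <name>yarn.nodemanager.health-checker.script.opts</name>
+  <value>-v</value>
+</property>
+<property>
+  <!-- Run the script every 5 minutes with a 2 minute timeout -->
+  <name>yarn.nodemanager.health-checker.interval-ms</name>
+  <value>300000</value>
+</property>
+<property>
+  <name>yarn.nodemanager.health-checker.script.timeout-ms</name>
+  <value>120000</value>
+</property>
+```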
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
new file mode 100644
index 0000000..79a428d
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerCgroups.md
@@ -0,0 +1,57 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Using CGroups with YARN
+=======================
+
+* [CGroups Configuration](#CGroups_configuration)
+* [CGroups and Security](#CGroups_and_security)
+
+CGroups is a mechanism for aggregating/partitioning sets of tasks, and all their future children, into hierarchical groups with specialized behaviour. CGroups is a Linux kernel feature and was merged into kernel version 2.6.24. From a YARN perspective, this allows containers to be limited in their resource usage. A good example of this is CPU usage. Without CGroups, it becomes hard to limit container CPU usage. Currently, CGroups is only used for limiting CPU usage.
+
+CGroups Configuration
+---------------------
+
+This section describes the configuration variables for using CGroups.
+
+The following settings are related to setting up CGroups. These need to be set in *yarn-site.xml*.
+
+|Configuration Name | Description |
+|:---- |:---- |
+| `yarn.nodemanager.container-executor.class` | This should be set to "org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor". CGroups is a Linux kernel feature and is exposed via the LinuxContainerExecutor. |
+| `yarn.nodemanager.linux-container-executor.resources-handler.class` | This should be set to "org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler". Using the LinuxContainerExecutor doesn't force you to use CGroups. If you wish to use CGroups, the resource-handler-class must be set to CGroupsLCEResourceHandler. |
+| `yarn.nodemanager.linux-container-executor.cgroups.hierarchy` | The cgroups hierarchy under which to place YARN processes (cannot contain commas). If yarn.nodemanager.linux-container-executor.cgroups.mount is false (that is, if cgroups have been pre-configured), then this cgroups hierarchy must already exist. |
+| `yarn.nodemanager.linux-container-executor.cgroups.mount` | Whether the LCE should attempt to mount cgroups if not found - can be true or false. |
+| `yarn.nodemanager.linux-container-executor.cgroups.mount-path` | Where the LCE should attempt to mount cgroups if not found. Common locations include /sys/fs/cgroup and /cgroup; the default location can vary depending on the Linux distribution in use. This path must exist before the NodeManager is launched. Only used when the LCE resources handler is set to the CgroupsLCEResourcesHandler, and yarn.nodemanager.linux-container-executor.cgroups.mount is true. A point to note here is that the container-executor binary will try to mount the path specified + "/" + the subsystem. In our case, since we are trying to limit CPU the binary tries to mount the path specified + "/cpu" and that's the path it expects to exist. |
+| `yarn.nodemanager.linux-container-executor.group` | The Unix group of the NodeManager. It should match the setting in "container-executor.cfg". This configuration is required for validating the secure access of the container-executor binary. |
+
+The following settings are related to limiting resource usage of YARN containers:
+
+|Configuration Name | Description |
+|:---- |:---- |
+| `yarn.nodemanager.resource.percentage-physical-cpu-limit` | This setting lets you limit the cpu usage of all YARN containers. It sets a hard upper limit on the cumulative CPU usage of the containers. For example, if set to 60, the combined CPU usage of all YARN containers will not exceed 60%. |
+| `yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage` | CGroups allows cpu usage limits to be hard or soft. When this setting is true, containers cannot use more CPU usage than allocated even if spare CPU is available. This ensures that containers can only use CPU that they were allocated. When set to false, containers can use spare CPU if available. It should be noted that irrespective of whether set to true or false, at no time can the combined CPU usage of all containers exceed the value specified in "yarn.nodemanager.resource.percentage-physical-cpu-limit". |
+
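+Putting the settings above together, a minimal yarn-site.xml sketch for enabling CGroups might look like this (the hierarchy, group name, and CPU limit are illustrative):
+
+```xml
+<property>
+  <name>yarn.nodemanager.container-executor.class</name>
+  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
+</property>
+<property>
+  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
+  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
+</property>
+<property>
+  <!-- Illustrative hierarchy; must already exist if cgroups.mount is false -->
+  <name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
+  <value>/hadoop-yarn</value>
+</property>
+<property>
+  <!-- Must match the group configured in container-executor.cfg -->
+  <name>yarn.nodemanager.linux-container-executor.group</name>
+  <value>hadoop</value>
+</property>
+<property>
+  <!-- Cap the combined CPU usage of all YARN containers at 80% -->
+  <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
+  <value>80</value>
+</property>
+```
+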
+CGroups and security
+--------------------
+
+CGroups itself has no requirements related to security. However, the LinuxContainerExecutor does have some requirements. If running in non-secure mode, by default, the LCE runs all jobs as user "nobody". This user can be changed by setting "yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user" to the desired user. However, it can also be configured to run jobs as the user submitting the job. In that case "yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users" should be set to false.
+
+| yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user | yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users | User running jobs |
+|:---- |:---- |:---- |
+| (default) | (default) | nobody |
+| yarn | (default) | yarn |
+| yarn | false | (User submitting the job) |
+
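+For example, to run jobs as the submitting user in non-secure mode (the last row of the table above), a yarn-site.xml snippet might look like this (a sketch, not a security recommendation):
+
+```xml
+<property>
+  <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user</name>
+  <value>yarn</value>
+</property>
+<property>
+  <!-- When false, containers run as the user who submitted the job -->
+  <name>yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users</name>
+  <value>false</value>
+</property>
+```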
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRest.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRest.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRest.md
new file mode 100644
index 0000000..acafd28
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRest.md
@@ -0,0 +1,543 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+NodeManager REST API's
+=======================
+
+* [Overview](#Overview)
+* [NodeManager Information API](#NodeManager_Information_API)
+* [Applications API](#Applications_API)
+* [Application API](#Application_API)
+* [Containers API](#Containers_API)
+* [Container API](#Container_API)
+
+Overview
+--------
+
+The NodeManager REST API's allow the user to get status on the node and information about applications and containers running on that node.
+
+NodeManager Information API
+---------------------------
+
+The node information resource provides overall information about that particular node.
+
+### URI
+
+Both of the following URIs give you the node information.
+
+      * http://<nm http address:port>/ws/v1/node
+      * http://<nm http address:port>/ws/v1/node/info
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the *nodeInfo* object
+
+| Properties | Data Type | Description |
+|:---- |:---- |:---- |
+| id | long | The NodeManager id |
+| nodeHostName | string | The host name of the NodeManager |
+| totalPmemAllocatedContainersMB | long | The amount of physical memory allocated for use by containers in MB |
+| totalVmemAllocatedContainersMB | long | The amount of virtual memory allocated for use by containers in MB |
+| totalVCoresAllocatedContainers | long | The number of virtual cores allocated for use by containers |
+| lastNodeUpdateTime | long | The last timestamp at which the health report was received (in ms since epoch) |
+| healthReport | string | The diagnostic health report of the node |
+| nodeHealthy | boolean | true/false indicator of if the node is healthy |
+| nodeManagerVersion | string | Version of the NodeManager |
+| nodeManagerBuildVersion | string | NodeManager build string with build version, user, and checksum |
+| nodeManagerVersionBuiltOn | string | Timestamp when NodeManager was built(in ms since epoch) |
+| hadoopVersion | string | Version of hadoop common |
+| hadoopBuildVersion | string | Hadoop common build string with build version, user, and checksum |
+| hadoopVersionBuiltOn | string | Timestamp when hadoop common was built(in ms since epoch) |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<nm http address:port>/ws/v1/node/info
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+   "nodeInfo" : {
+      "hadoopVersionBuiltOn" : "Mon Jan  9 14:58:42 UTC 2012",
+      "nodeManagerBuildVersion" : "0.23.1-SNAPSHOT from 1228355 by user1 source checksum 20647f76c36430e888cc7204826a445c",
+      "lastNodeUpdateTime" : 1326222266126,
+      "totalVmemAllocatedContainersMB" : 17203,
+      "totalVCoresAllocatedContainers" : 8,
+      "nodeHealthy" : true,
+      "healthReport" : "",
+      "totalPmemAllocatedContainersMB" : 8192,
+      "nodeManagerVersionBuiltOn" : "Mon Jan  9 15:01:59 UTC 2012",
+      "nodeManagerVersion" : "0.23.1-SNAPSHOT",
+      "id" : "host.domain.com:8041",
+      "hadoopBuildVersion" : "0.23.1-SNAPSHOT from 1228292 by user1 source checksum 3eba233f2248a089e9b28841a784dd00",
+      "nodeHostName" : "host.domain.com",
+      "hadoopVersion" : "0.23.1-SNAPSHOT"
+   }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      Accept: application/xml
+      GET http://<nm http address:port>/ws/v1/node/info
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 983
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<nodeInfo>
+  <healthReport/>
+  <totalVmemAllocatedContainersMB>17203</totalVmemAllocatedContainersMB>
+  <totalPmemAllocatedContainersMB>8192</totalPmemAllocatedContainersMB>
+  <totalVCoresAllocatedContainers>8</totalVCoresAllocatedContainers>
+  <lastNodeUpdateTime>1326222386134</lastNodeUpdateTime>
+  <nodeHealthy>true</nodeHealthy>
+  <nodeManagerVersion>0.23.1-SNAPSHOT</nodeManagerVersion>
+  <nodeManagerBuildVersion>0.23.1-SNAPSHOT from 1228355 by user1 source checksum 20647f76c36430e888cc7204826a445c</nodeManagerBuildVersion>
+  <nodeManagerVersionBuiltOn>Mon Jan  9 15:01:59 UTC 2012</nodeManagerVersionBuiltOn>
+  <hadoopVersion>0.23.1-SNAPSHOT</hadoopVersion>
+  <hadoopBuildVersion>0.23.1-SNAPSHOT from 1228292 by user1 source checksum 3eba233f2248a089e9b28841a784dd00</hadoopBuildVersion>
+  <hadoopVersionBuiltOn>Mon Jan  9 14:58:42 UTC 2012</hadoopVersionBuiltOn>
+  <id>host.domain.com:8041</id>
+  <nodeHostName>host.domain.com</nodeHostName>
+</nodeInfo>
+```
+
+Applications API
+----------------
+
+With the Applications API, you can obtain a collection of resources, each of which represents an application. When you run a GET operation on this resource, you obtain a collection of Application Objects. See also [Application API](#Application_API) for syntax of the application object.
+
+### URI
+
+      * http://<nm http address:port>/ws/v1/node/apps
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+Multiple parameters can be specified.
+
+      * state - application state 
+      * user - user name
+
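+For example, the following request would return only the RUNNING applications submitted by user1 (the host, port, and user name are placeholders):
+
+      GET http://<nm http address:port>/ws/v1/node/apps?state=RUNNING&user=user1
+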
+### Elements of the *apps* (Applications) object
+
+When you make a request for the list of applications, the information will be returned as a collection of app objects. See also [Application API](#Application_API) for syntax of the app object.
+
+| Properties | Data Type | Description |
+|:---- |:---- |:---- |
+| app | array of app objects(JSON)/zero or more app objects(XML) | A collection of application objects |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<nm http address:port>/ws/v1/node/apps
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+   "apps" : {
+      "app" : [
+         {
+            "containerids" : [
+               "container_1326121700862_0003_01_000001",
+               "container_1326121700862_0003_01_000002"
+            ],
+            "user" : "user1",
+            "id" : "application_1326121700862_0003",
+            "state" : "RUNNING"
+         },
+         {
+            "user" : "user1",
+            "id" : "application_1326121700862_0002",
+            "state" : "FINISHED"
+         }
+      ]
+   }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<nm http address:port>/ws/v1/node/apps
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 400
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<apps>
+  <app>
+    <id>application_1326121700862_0002</id>
+    <state>FINISHED</state>
+    <user>user1</user>
+  </app>
+  <app>
+    <id>application_1326121700862_0003</id>
+    <state>RUNNING</state>
+    <user>user1</user>
+    <containerids>container_1326121700862_0003_01_000002</containerids>
+    <containerids>container_1326121700862_0003_01_000001</containerids>
+  </app>
+</apps>
+```
+
+Application API
+---------------
+
+An application resource contains information about a particular application that was run or is running on this NodeManager.
+
+### URI
+
+Use the following URI to obtain an app Object, for an application identified by the appid value.
+
+      * http://<nm http address:port>/ws/v1/node/apps/{appid}
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the *app* (Application) object
+
+| Properties | Data Type | Description |
+|:---- |:---- |:---- |
+| id | string | The application id |
+| user | string | The user who started the application |
+| state | string | The state of the application - valid states are: NEW, INITING, RUNNING, FINISHING\_CONTAINERS\_WAIT, APPLICATION\_RESOURCES\_CLEANINGUP, FINISHED |
+| containerids | array of containerids(JSON)/zero or more containerids(XML) | The list of containerids currently being used by the application on this node. If not present then no containers are currently running for this application. |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<nm http address:port>/ws/v1/node/apps/application_1326121700862_0005
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+   "app" : {
+      "containerids" : [
+         "container_1326121700862_0005_01_000003",
+         "container_1326121700862_0005_01_000001"
+      ],
+      "user" : "user1",
+      "id" : "application_1326121700862_0005",
+      "state" : "RUNNING"
+   }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<nm http address:port>/ws/v1/node/apps/application_1326121700862_0005
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 281 
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<app>
+  <id>application_1326121700862_0005</id>
+  <state>RUNNING</state>
+  <user>user1</user>
+  <containerids>container_1326121700862_0005_01_000003</containerids>
+  <containerids>container_1326121700862_0005_01_000001</containerids>
+</app>
+```
+
+Containers API
+--------------
+
+With the containers API, you can obtain a collection of resources, each of which represents a container. When you run a GET operation on this resource, you obtain a collection of Container Objects. See also [Container API](#Container_API) for syntax of the container object.
+
+### URI
+
+      * http://<nm http address:port>/ws/v1/node/containers
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the *containers* object
+
+When you make a request for the list of containers, the information will be returned as collection of container objects. See also [Container API](#Container_API) for syntax of the container object.
+
+| Properties | Data Type | Description |
+|:---- |:---- |:---- |
+| containers | array of container objects(JSON)/zero or more container objects(XML) | A collection of container objects |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<nm http address:port>/ws/v1/node/containers
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+   "containers" : {
+      "container" : [
+         {
+            "nodeId" : "host.domain.com:8041",
+            "totalMemoryNeededMB" : 2048,
+            "totalVCoresNeeded" : 1,
+            "state" : "RUNNING",
+            "diagnostics" : "",
+            "containerLogsLink" : "http://host.domain.com:8042/node/containerlogs/container_1326121700862_0006_01_000001/user1",
+            "user" : "user1",
+            "id" : "container_1326121700862_0006_01_000001",
+            "exitCode" : -1000
+         },
+         {
+            "nodeId" : "host.domain.com:8041",
+            "totalMemoryNeededMB" : 2048,
+            "totalVCoresNeeded" : 2,
+            "state" : "RUNNING",
+            "diagnostics" : "",
+            "containerLogsLink" : "http://host.domain.com:8042/node/containerlogs/container_1326121700862_0006_01_000003/user1",
+            "user" : "user1",
+            "id" : "container_1326121700862_0006_01_000003",
+            "exitCode" : -1000
+         }
+      ]
+   }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<nm http address:port>/ws/v1/node/containers
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 988
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<containers>
+  <container>
+    <id>container_1326121700862_0006_01_000001</id>
+    <state>RUNNING</state>
+    <exitCode>-1000</exitCode>
+    <diagnostics/>
+    <user>user1</user>
+    <totalMemoryNeededMB>2048</totalMemoryNeededMB>
+    <totalVCoresNeeded>1</totalVCoresNeeded>
+    <containerLogsLink>http://host.domain.com:8042/node/containerlogs/container_1326121700862_0006_01_000001/user1</containerLogsLink>
+    <nodeId>host.domain.com:8041</nodeId>
+  </container>
+  <container>
+    <id>container_1326121700862_0006_01_000003</id>
+    <state>DONE</state>
+    <exitCode>0</exitCode>
+    <diagnostics>Container killed by the ApplicationMaster.</diagnostics>
+    <user>user1</user>
+    <totalMemoryNeededMB>2048</totalMemoryNeededMB>
+    <totalVCoresNeeded>2</totalVCoresNeeded>
+    <containerLogsLink>http://host.domain.com:8042/node/containerlogs/container_1326121700862_0006_01_000003/user1</containerLogsLink>
+    <nodeId>host.domain.com:8041</nodeId>
+  </container>
+</containers>
+```
+
+Container API
+-------------
+
+A container resource contains information about a particular container that is running on this NodeManager.
+
+### URI
+
+Use the following URI to obtain a Container Object, from a container identified by the containerid value.
+
+      * http://<nm http address:port>/ws/v1/node/containers/{containerid}
+
+### HTTP Operations Supported
+
+      * GET
+
+### Query Parameters Supported
+
+      None
+
+### Elements of the *container* object
+
+| Properties | Data Type | Description |
+|:---- |:---- |:---- |
+| id | string | The container id |
+| state | string | State of the container - valid states are: NEW, LOCALIZING, LOCALIZATION\_FAILED, LOCALIZED, RUNNING, EXITED\_WITH\_SUCCESS, EXITED\_WITH\_FAILURE, KILLING, CONTAINER\_CLEANEDUP\_AFTER\_KILL, CONTAINER\_RESOURCES\_CLEANINGUP, DONE |
+| nodeId | string | The id of the node the container is on |
+| containerLogsLink | string | The http link to the container logs |
+| user | string | The user name of the user which started the container |
+| exitCode | int | Exit code of the container |
+| diagnostics | string | A diagnostic message for failed containers |
+| totalMemoryNeededMB | long | Total amount of memory needed by the container (in MB) |
+| totalVCoresNeeded | long | Total number of virtual cores needed by the container |
+
+### Response Examples
+
+**JSON response**
+
+HTTP Request:
+
+      GET http://<nm http address:port>/ws/v1/node/containers/container_1326121700862_0007_01_000001
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+   "container" : {
+      "nodeId" : "host.domain.com:8041",
+      "totalMemoryNeededMB" : 2048,
+      "totalVCoresNeeded" : 1,
+      "state" : "RUNNING",
+      "diagnostics" : "",
+      "containerLogsLink" : "http://host.domain.com:8042/node/containerlogs/container_1326121700862_0007_01_000001/user1",
+      "user" : "user1",
+      "id" : "container_1326121700862_0007_01_000001",
+      "exitCode" : -1000
+   }
+}
+```
+
+**XML response**
+
+HTTP Request:
+
+      GET http://<nm http address:port>/ws/v1/node/containers/container_1326121700862_0007_01_000001
+      Accept: application/xml
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/xml
+      Content-Length: 491 
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```xml
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<container>
+  <id>container_1326121700862_0007_01_000001</id>
+  <state>RUNNING</state>
+  <exitCode>-1000</exitCode>
+  <diagnostics/>
+  <user>user1</user>
+  <totalMemoryNeededMB>2048</totalMemoryNeededMB>
+  <totalVCoresNeeded>1</totalVCoresNeeded>
+  <containerLogsLink>http://host.domain.com:8042/node/containerlogs/container_1326121700862_0007_01_000001/user1</containerLogsLink>
+  <nodeId>host.domain.com:8041</nodeId>
+</container>
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRestart.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRestart.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRestart.md
new file mode 100644
index 0000000..be7d75b
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeManagerRestart.md
@@ -0,0 +1,53 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+NodeManager Restart
+===================
+
+* [Introduction](#Introduction)
+* [Enabling NM Restart](#Enabling_NM_Restart)
+
+Introduction
+------------
+
+This document gives an overview of NodeManager (NM) restart, a feature that enables the NodeManager to be restarted without losing the active containers running on the node. At a high level, the NM stores any necessary state to a local state-store as it processes container-management requests. When the NM restarts, it recovers by first loading state for various subsystems and then letting those subsystems perform recovery using the loaded state.
+
+Enabling NM Restart
+-------------------
+
+Step 1. To enable NM Restart functionality, set the following property in **conf/yarn-site.xml** to *true*.
+
+| Property | Value |
+|:---- |:---- |
+| `yarn.nodemanager.recovery.enabled` | `true` (the default value is `false`) |
+
+Step 2.  Configure a path to the local file-system directory where the NodeManager can save its run state.
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.nodemanager.recovery.dir` | The local filesystem directory in which the node manager will store state when recovery is enabled. The default value is set to `$hadoop.tmp.dir/yarn-nm-recovery`. |
+
+Step 3.  Configure a valid RPC address for the NodeManager.
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.nodemanager.address` | Ephemeral ports (port 0, which is the default) cannot be used for the NodeManager's RPC server specified via yarn.nodemanager.address, as that can make the NM use different ports before and after a restart. This will break any previously running clients that were communicating with the NM before restart. Explicitly setting yarn.nodemanager.address to an address with a specific port number (e.g. 0.0.0.0:45454) is a precondition for enabling NM restart. |
+
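+Putting steps 1-3 together, a yarn-site.xml sketch for enabling NM restart might look like the following (the recovery directory and port are illustrative):
+
+```xml
+<property>
+  <name>yarn.nodemanager.recovery.enabled</name>
+  <value>true</value>
+</property>
+<property>
+  <!-- Illustrative local filesystem directory for the NM state-store -->
+  <name>yarn.nodemanager.recovery.dir</name>
+  <value>/var/lib/hadoop-yarn/nm-recovery</value>
+</property>
+<property>
+  <!-- A fixed, non-ephemeral port is required for NM restart -->
+  <name>yarn.nodemanager.address</name>
+  <value>0.0.0.0:45454</value>
+</property>
+```
+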
+Step 4.  Auxiliary services.
+
+  * NodeManagers in a YARN cluster can be configured to run auxiliary services. For a completely functional NM restart, YARN relies on any auxiliary service configured to also support recovery. This usually includes (1) avoiding usage of ephemeral ports so that previously running clients (in this case, usually containers) are not disrupted after restart and (2) having the auxiliary service itself support recoverability by reloading any previous state when NodeManager restarts and reinitializes the auxiliary service.
+
+  * A simple example for the above is the auxiliary service 'ShuffleHandler' for MapReduce (MR). ShuffleHandler already meets the above two requirements, so users/admins don't have to do anything for it to support NM restart: (1) The configuration property **mapreduce.shuffle.port** controls which port the ShuffleHandler on a NodeManager host binds to, and it defaults to a non-ephemeral port. (2) The ShuffleHandler service also already supports recovery of previous state after NM restarts.
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerHA.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerHA.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerHA.md
new file mode 100644
index 0000000..491b885
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerHA.md
@@ -0,0 +1,140 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+ResourceManager High Availability
+=================================
+
+* [Introduction](#Introduction)
+* [Architecture](#Architecture)
+    * [RM Failover](#RM_Failover)
+    * [Recovering previous active-RM's state](#Recovering_previous_active-RMs_state)
+* [Deployment](#Deployment)
+    * [Configurations](#Configurations)
+    * [Admin commands](#Admin_commands)
+    * [ResourceManager Web UI services](#ResourceManager_Web_UI_services)
+    * [Web Services](#Web_Services)
+
+Introduction
+------------
+
+This guide provides an overview of High Availability of YARN's ResourceManager, and details how to configure and use this feature. The ResourceManager (RM) is responsible for tracking the resources in a cluster, and scheduling applications (e.g., MapReduce jobs). Prior to Hadoop 2.4, the ResourceManager is the single point of failure in a YARN cluster. The High Availability feature adds redundancy in the form of an Active/Standby ResourceManager pair to remove this otherwise single point of failure.
+
+Architecture
+------------
+
+![Overview of ResourceManager High Availability](images/rm-ha-overview.png)
+
+### RM Failover
+
+ResourceManager HA is realized through an Active/Standby architecture - at any point in time, one of the RMs is Active, and one or more RMs are in Standby mode waiting to take over should anything happen to the Active. The trigger to transition-to-active comes either from the admin (through the CLI) or from the integrated failover-controller when automatic failover is enabled.
+
+#### Manual transitions and failover
+
+When automatic failover is not enabled, admins have to manually transition one of the RMs to Active. To failover from one RM to the other, they are expected to first transition the Active-RM to Standby and transition a Standby-RM to Active. All this can be done using the "`yarn rmadmin`" CLI.
+
+#### Automatic failover
+
+The RMs have an option to embed the ZooKeeper-based ActiveStandbyElector to decide which RM should be the Active. When the Active goes down or becomes unresponsive, another RM is automatically elected to be the Active, which then takes over. Note that there is no need to run a separate ZKFC daemon as is the case for HDFS, because the ActiveStandbyElector embedded in the RMs acts as a failure detector and a leader elector instead of a separate ZKFC daemon.
+
+#### Client, ApplicationMaster and NodeManager on RM failover
+
+When there are multiple RMs, the configuration (yarn-site.xml) used by clients and nodes is expected to list all the RMs. Clients, ApplicationMasters (AMs) and NodeManagers (NMs) try connecting to the RMs in a round-robin fashion until they hit the Active RM. If the Active goes down, they resume the round-robin polling until they hit the "new" Active. This default retry logic is implemented as `org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider`. You can override the logic by implementing `org.apache.hadoop.yarn.client.RMFailoverProxyProvider` and setting the value of `yarn.client.failover-proxy-provider` to the class name.
+
+### Recovering previous active-RM's state
+
+With [ResourceManager Restart](./ResourceManagerRestart.html) enabled, the RM being promoted to an active state loads the RM internal state and continues to operate from where the previous active left off as much as possible, depending on the RM restart feature. A new attempt is spawned for each managed application previously submitted to the RM. Applications can checkpoint periodically to avoid losing any work. The state-store must be visible to both the Active and Standby RMs. Currently, there are two RMStateStore implementations for persistence - FileSystemRMStateStore and ZKRMStateStore. The `ZKRMStateStore` implicitly allows write access to a single RM at any point in time, and hence is the recommended store to use in an HA cluster. When using the ZKRMStateStore, there is no need for a separate fencing mechanism to address a potential split-brain situation where multiple RMs can potentially assume the Active role.
+
+Deployment
+----------
+
+### Configurations
+
+Most of the failover functionality is tunable using various configuration properties. The following is a list of required/important ones. yarn-default.xml carries a full list of knobs. See [yarn-default.xml](../hadoop-yarn-common/yarn-default.xml) for more information, including default values. See the document for [ResourceManager Restart](./ResourceManagerRestart.html) as well for instructions on setting up the state-store.
+
+| Configuration Properties | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.zk-address` | Address of the ZK-quorum. Used both for the state-store and embedded leader-election. |
+| `yarn.resourcemanager.ha.enabled` | Enable RM HA. |
+| `yarn.resourcemanager.ha.rm-ids` | List of logical IDs for the RMs. e.g., "rm1,rm2". |
+| `yarn.resourcemanager.hostname.*rm-id*` | For each *rm-id*, specify the hostname the RM corresponds to. Alternately, one could set each of the RM's service addresses. |
+| `yarn.resourcemanager.ha.id` | Identifies the RM in the ensemble. This is optional; however, if set, admins have to ensure that all the RMs have their own IDs in the config. |
+| `yarn.resourcemanager.ha.automatic-failover.enabled` | Enable automatic failover; By default, it is enabled only when HA is enabled. |
+| `yarn.resourcemanager.ha.automatic-failover.embedded` | Use embedded leader-elector to pick the Active RM, when automatic failover is enabled. By default, it is enabled only when HA is enabled. |
+| `yarn.resourcemanager.cluster-id` | Identifies the cluster. Used by the elector to ensure an RM doesn't take over as Active for another cluster. |
+| `yarn.client.failover-proxy-provider` | The class to be used by Clients, AMs and NMs to failover to the Active RM. |
+| `yarn.client.failover-max-attempts` | The max number of times FailoverProxyProvider should attempt failover. |
+| `yarn.client.failover-sleep-base-ms` | The sleep base (in milliseconds) to be used for calculating the exponential delay between failovers. |
+| `yarn.client.failover-sleep-max-ms` | The maximum sleep time (in milliseconds) between failovers. |
+| `yarn.client.failover-retries` | The number of retries per attempt to connect to a ResourceManager. |
+| `yarn.client.failover-retries-on-socket-timeouts` | The number of retries per attempt to connect to a ResourceManager on socket timeouts. |
+
+#### Sample configurations
+
+Here is a sample of a minimal setup for RM failover.
+
+```xml
+<property>
+  <name>yarn.resourcemanager.ha.enabled</name>
+  <value>true</value>
+</property>
+<property>
+  <name>yarn.resourcemanager.cluster-id</name>
+  <value>cluster1</value>
+</property>
+<property>
+  <name>yarn.resourcemanager.ha.rm-ids</name>
+  <value>rm1,rm2</value>
+</property>
+<property>
+  <name>yarn.resourcemanager.hostname.rm1</name>
+  <value>master1</value>
+</property>
+<property>
+  <name>yarn.resourcemanager.hostname.rm2</name>
+  <value>master2</value>
+</property>
+<property>
+  <name>yarn.resourcemanager.zk-address</name>
+  <value>zk1:2181,zk2:2181,zk3:2181</value>
+</property>
+```
+
+### Admin commands
+
+`yarn rmadmin` has a few HA-specific command options to check the health/state of an RM, and to transition to Active/Standby. Commands for HA take the service id of an RM, set by `yarn.resourcemanager.ha.rm-ids`, as an argument.
+
+     $ yarn rmadmin -getServiceState rm1
+     active
+     
+     $ yarn rmadmin -getServiceState rm2
+     standby
+
+If automatic failover is enabled, you cannot use the manual transition commands. You can override this with the --forcemanual flag, but use it with caution.
+
+     $ yarn rmadmin -transitionToStandby rm1
+     Automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@1d8299fd
+     Refusing to manually manage HA state, since it may cause
+     a split-brain scenario or other incorrect state.
+     If you are very sure you know what you are doing, please
+     specify the forcemanual flag.
+
+See [YarnCommands](./YarnCommands.html) for more details.
+
+### ResourceManager Web UI services
+
+Assuming a standby RM is up and running, the Standby automatically redirects all web requests to the Active, except for the "About" page.
+
+### Web Services
+
+Assuming a standby RM is up and running, RM web-services described at [ResourceManager REST APIs](./ResourceManagerRest.html) when invoked on a standby RM are automatically redirected to the Active RM.


[03/43] hadoop git commit: YARN-3262. Surface application outstanding resource requests table in RM web UI. (Jian He via wangda)

Posted by zj...@apache.org.
YARN-3262. Surface application outstanding resource requests table in RM web UI. (Jian He via wangda)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/edcecedc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/edcecedc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/edcecedc

Branch: refs/heads/YARN-2928
Commit: edcecedc1c39d54db0f86a1325b4db26c38d2d4d
Parents: cf51ff2
Author: Wangda Tan <wa...@apache.org>
Authored: Fri Feb 27 16:13:32 2015 -0800
Committer: Wangda Tan <wa...@apache.org>
Committed: Fri Feb 27 16:13:32 2015 -0800

----------------------------------------------------------------------
 hadoop-yarn-project/CHANGES.txt                 |  3 ++
 .../records/impl/pb/ResourceRequestPBImpl.java  |  4 +-
 .../scheduler/AbstractYarnScheduler.java        |  9 ++++
 .../scheduler/AppSchedulingInfo.java            | 33 +++++++-------
 .../scheduler/SchedulerApplicationAttempt.java  |  6 ++-
 .../server/resourcemanager/webapp/AppBlock.java | 46 +++++++++++++++++++-
 .../server/resourcemanager/webapp/AppPage.java  |  4 ++
 .../resourcemanager/webapp/AppsBlock.java       |  5 ++-
 .../webapp/FairSchedulerAppsBlock.java          |  5 ++-
 .../resourcemanager/webapp/RMWebServices.java   |  6 +--
 .../resourcemanager/webapp/dao/AppInfo.java     | 17 +++++++-
 .../webapp/TestRMWebAppFairScheduler.java       | 10 ++++-
 .../webapp/TestRMWebServicesApps.java           |  3 +-
 13 files changed, 118 insertions(+), 33 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index 38dd9fa..e7af84b 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -336,6 +336,9 @@ Release 2.7.0 - UNRELEASED
     YARN-2820. Retry in FileSystemRMStateStore when FS's operations fail 
     due to IOException. (Zhihai Xu via ozawa)
 
+    YARN-3262. Surface application outstanding resource requests table 
+    in RM web UI. (Jian He via wangda)
+
   OPTIMIZATIONS
 
     YARN-2990. FairScheduler's delay-scheduling always waits for node-local and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
index 0c8491f..27fb5ae 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ResourceRequestPBImpl.java
@@ -140,13 +140,13 @@ public class ResourceRequestPBImpl extends  ResourceRequest {
     this.capability = capability;
   }
   @Override
-  public int getNumContainers() {
+  public synchronized int getNumContainers() {
     ResourceRequestProtoOrBuilder p = viaProto ? proto : builder;
     return (p.getNumContainers());
   }
 
   @Override
-  public void setNumContainers(int numContainers) {
+  public synchronized void setNumContainers(int numContainers) {
     maybeInitBuilder();
     builder.setNumContainers((numContainers));
   }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
index 04b3452..968a767 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AbstractYarnScheduler.java
@@ -658,4 +658,13 @@ public abstract class AbstractYarnScheduler
       maxAllocWriteLock.unlock();
     }
   }
+
+  public List<ResourceRequest> getPendingResourceRequestsForAttempt(
+      ApplicationAttemptId attemptId) {
+    SchedulerApplicationAttempt attempt = getApplicationAttempt(attemptId);
+    if (attempt != null) {
+      return attempt.getAppSchedulingInfo().getAllResourceRequests();
+    }
+    return null;
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
index a9a459f..97dc231 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/AppSchedulingInfo.java
@@ -20,12 +20,14 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler;
 
 import java.util.ArrayList;
 import java.util.Collection;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
 import java.util.Map;
 import java.util.Set;
 import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.atomic.AtomicLong;
 
 import org.apache.commons.logging.Log;
@@ -64,7 +66,7 @@ public class AppSchedulingInfo {
   final Set<Priority> priorities = new TreeSet<Priority>(
       new org.apache.hadoop.yarn.server.resourcemanager.resource.Priority.Comparator());
   final Map<Priority, Map<String, ResourceRequest>> requests =
-    new HashMap<Priority, Map<String, ResourceRequest>>();
+    new ConcurrentHashMap<Priority, Map<String, ResourceRequest>>();
   private Set<String> blacklist = new HashSet<String>();
 
   //private final ApplicationStore store;
@@ -159,7 +161,7 @@ public class AppSchedulingInfo {
       Map<String, ResourceRequest> asks = this.requests.get(priority);
 
       if (asks == null) {
-        asks = new HashMap<String, ResourceRequest>();
+        asks = new ConcurrentHashMap<String, ResourceRequest>();
         this.requests.put(priority, asks);
         this.priorities.add(priority);
       }
@@ -221,7 +223,7 @@ public class AppSchedulingInfo {
     return requests.get(priority);
   }
 
-  synchronized public List<ResourceRequest> getAllResourceRequests() {
+  public List<ResourceRequest> getAllResourceRequests() {
     List<ResourceRequest> ret = new ArrayList<ResourceRequest>();
     for (Map<String, ResourceRequest> r : requests.values()) {
       ret.addAll(r.values());
@@ -300,17 +302,11 @@ public class AppSchedulingInfo {
       Priority priority, ResourceRequest nodeLocalRequest, Container container,
       List<ResourceRequest> resourceRequests) {
     // Update future requirements
-    nodeLocalRequest.setNumContainers(nodeLocalRequest.getNumContainers() - 1);
-    if (nodeLocalRequest.getNumContainers() == 0) {
-      this.requests.get(priority).remove(node.getNodeName());
-    }
+    decResourceRequest(node.getNodeName(), priority, nodeLocalRequest);
 
     ResourceRequest rackLocalRequest = requests.get(priority).get(
         node.getRackName());
-    rackLocalRequest.setNumContainers(rackLocalRequest.getNumContainers() - 1);
-    if (rackLocalRequest.getNumContainers() == 0) {
-      this.requests.get(priority).remove(node.getRackName());
-    }
+    decResourceRequest(node.getRackName(), priority, rackLocalRequest);
 
     ResourceRequest offRackRequest = requests.get(priority).get(
         ResourceRequest.ANY);
@@ -322,6 +318,14 @@ public class AppSchedulingInfo {
     resourceRequests.add(cloneResourceRequest(offRackRequest));
   }
 
+  private void decResourceRequest(String resourceName, Priority priority,
+      ResourceRequest request) {
+    request.setNumContainers(request.getNumContainers() - 1);
+    if (request.getNumContainers() == 0) {
+      requests.get(priority).remove(resourceName);
+    }
+  }
+
   /**
    * The {@link ResourceScheduler} is allocating data-local resources to the
    * application.
@@ -333,11 +337,8 @@ public class AppSchedulingInfo {
       Priority priority, ResourceRequest rackLocalRequest, Container container,
       List<ResourceRequest> resourceRequests) {
     // Update future requirements
-    rackLocalRequest.setNumContainers(rackLocalRequest.getNumContainers() - 1);
-    if (rackLocalRequest.getNumContainers() == 0) {
-      this.requests.get(priority).remove(node.getRackName());
-    }
-
+    decResourceRequest(node.getRackName(), priority, rackLocalRequest);
+    
     ResourceRequest offRackRequest = requests.get(priority).get(
         ResourceRequest.ANY);
     decrementOutstanding(offRackRequest);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
index d5b6ce6..532df05 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerApplicationAttempt.java
@@ -153,7 +153,11 @@ public class SchedulerApplicationAttempt {
   public synchronized Collection<RMContainer> getLiveContainers() {
     return new ArrayList<RMContainer>(liveContainers.values());
   }
-  
+
+  public AppSchedulingInfo getAppSchedulingInfo() {
+    return this.appSchedulingInfo;
+  }
+
   /**
    * Is this application pending?
    * @return true if it is else false.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java
index c2b376e..62ad8df 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppBlock.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
 import org.apache.hadoop.yarn.api.records.QueueACL;
 import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.ResourceRequest;
 import org.apache.hadoop.yarn.api.records.YarnApplicationState;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
@@ -50,6 +51,7 @@ import org.apache.hadoop.yarn.util.resource.Resources;
 import org.apache.hadoop.yarn.webapp.hamlet.Hamlet;
 import org.apache.hadoop.yarn.webapp.hamlet.Hamlet.DIV;
 import org.apache.hadoop.yarn.webapp.hamlet.Hamlet.TABLE;
+import org.apache.hadoop.yarn.webapp.hamlet.Hamlet.TBODY;
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
 import org.apache.hadoop.yarn.webapp.view.HtmlBlock;
 import org.apache.hadoop.yarn.webapp.view.InfoBlock;
@@ -90,7 +92,8 @@ public class AppBlock extends HtmlBlock {
       puts("Application not found: "+ aid);
       return;
     }
-    AppInfo app = new AppInfo(rmApp, true, WebAppUtils.getHttpSchemePrefix(conf));
+    AppInfo app =
+        new AppInfo(rm, rmApp, true, WebAppUtils.getHttpSchemePrefix(conf));
 
     // Check for the authorization.
     String remoteUser = request().getRemoteUser();
@@ -134,7 +137,7 @@ public class AppBlock extends HtmlBlock {
         ._("Application Type:", app.getApplicationType())
         ._("Application Tags:", app.getApplicationTags())
         ._("YarnApplicationState:", clarifyAppState(app.getState()))
-        ._("FinalStatus reported by AM:",
+        ._("FinalStatus Reported by AM:",
           clairfyAppFinalStatus(app.getFinalStatus()))
         ._("Started:", Times.format(app.getStartTime()))
         ._("Elapsed:",
@@ -200,6 +203,45 @@ public class AppBlock extends HtmlBlock {
 
     table._();
     div._();
+
+    createResourceRequestsTable(html, app);
+  }
+
+  private void createResourceRequestsTable(Block html, AppInfo app) {
+    TBODY<TABLE<Hamlet>> tbody =
+        html.table("#ResourceRequests").thead().tr()
+          .th(".priority", "Priority")
+          .th(".resourceName", "ResourceName")
+          .th(".totalResource", "Capability")
+          .th(".numContainers", "NumContainers")
+          .th(".relaxLocality", "RelaxLocality")
+          .th(".nodeLabelExpression", "NodeLabelExpression")._()._().tbody();
+
+    Resource totalResource = Resource.newInstance(0, 0);
+    if (app.getResourceRequests() != null) {
+      for (ResourceRequest request : app.getResourceRequests()) {
+        if (request.getNumContainers() == 0) {
+          continue;
+        }
+
+        tbody.tr()
+          .td(String.valueOf(request.getPriority()))
+          .td(request.getResourceName())
+          .td(String.valueOf(request.getCapability()))
+          .td(String.valueOf(request.getNumContainers()))
+          .td(String.valueOf(request.getRelaxLocality()))
+          .td(request.getNodeLabelExpression() == null ? "N/A" : request
+              .getNodeLabelExpression())._();
+        if (request.getResourceName().equals(ResourceRequest.ANY)) {
+          Resources.addTo(totalResource,
+            Resources.multiply(request.getCapability(),
+              request.getNumContainers()));
+        }
+      }
+    }
+    html.div().$class("totalResourceRequests")
+      .h3("Total Outstanding Resource Requests: " + totalResource)._();
+    tbody._()._();
   }
 
   private String clarifyAppState(YarnApplicationState state) {
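
Note that createResourceRequestsTable only folds ResourceRequest.ANY rows
into the "Total Outstanding Resource Requests" figure. In YARN a logical ask
is normally expressed three times, at node-local, rack-local and off-switch
(ANY) granularity, all describing the same containers, so summing every row
would multiply-count the demand. A standalone sketch of that aggregation
(Req is a hypothetical stand-in; the real code uses Resources.addTo and
Resources.multiply):

import java.util.Arrays;
import java.util.List;

public class OutstandingTotal {
  static class Req {
    final String name; final int memMb, vcores, num;
    Req(String name, int memMb, int vcores, int num) {
      this.name = name; this.memMb = memMb;
      this.vcores = vcores; this.num = num;
    }
  }

  public static void main(String[] args) {
    List<Req> reqs = Arrays.asList(
        new Req("*", 1024, 1, 3),       // off-switch row (ResourceRequest.ANY)
        new Req("rack1", 1024, 1, 3),   // same containers, rack preference
        new Req("host1", 1024, 1, 3));  // same containers, node preference
    int mem = 0, cores = 0;
    for (Req r : reqs) {
      if ("*".equals(r.name)) {         // only the ANY row is totalled
        mem += r.memMb * r.num;
        cores += r.vcores * r.num;
      }
    }
    System.out.println("<memory:" + mem + ", vCores:" + cores + ">");
    // -> <memory:3072, vCores:3>, not triple that
  }
}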

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppPage.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppPage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppPage.java
index a55c62f..8993324 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppPage.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppPage.java
@@ -18,12 +18,16 @@
 
 package org.apache.hadoop.yarn.server.resourcemanager.webapp;
 
+import static org.apache.hadoop.yarn.webapp.view.JQueryUI.DATATABLES_ID;
+
 import org.apache.hadoop.yarn.webapp.SubView;
 
 public class AppPage extends RmView {
 
   @Override protected void preHead(Page.HTML<_> html) {
     commonPreHead(html);
+    set(DATATABLES_ID, "ResourceRequests");
+    setTableStyles(html, "ResourceRequests");
   }
 
   @Override protected Class<? extends SubView> content() {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppsBlock.java
index 054a1a7..935be61 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/AppsBlock.java
@@ -46,12 +46,13 @@ import com.google.inject.Inject;
 class AppsBlock extends HtmlBlock {
   final ConcurrentMap<ApplicationId, RMApp> apps;
   private final Configuration conf;
-
+  final ResourceManager rm;
   @Inject
   AppsBlock(ResourceManager rm, ViewContext ctx, Configuration conf) {
     super(ctx);
     apps = rm.getRMContext().getRMApps();
     this.conf = conf;
+    this.rm = rm;
   }
 
   @Override public void render(Block html) {
@@ -85,7 +86,7 @@ class AppsBlock extends HtmlBlock {
       if (reqAppStates != null && !reqAppStates.contains(app.createApplicationState())) {
         continue;
       }
-      AppInfo appInfo = new AppInfo(app, true, WebAppUtils.getHttpSchemePrefix(conf));
+      AppInfo appInfo = new AppInfo(rm, app, true, WebAppUtils.getHttpSchemePrefix(conf));
       String percent = String.format("%.1f", appInfo.getProgress());
       //AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js
       appsTableData.append("[\"<a href='")
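
The AppsBlock change, like the analogous ones below, is dependency threading:
the block keeps the injected ResourceManager and hands it to every AppInfo it
constructs, rather than AppInfo reaching for global state. In miniature,
with hypothetical stand-in types and Guice's @Inject omitted:

public class DependencyThreading {
  static class Rm { }

  static class Model {
    Model(Rm rm, String appId) {
      // rm gives the model a path to live scheduler state for appId
    }
  }

  static class View {
    private final Rm rm;            // captured once at construction

    View(Rm rm) { this.rm = rm; }

    Model render(String appId) {
      return new Model(rm, appId);  // threaded through at each call site
    }
  }

  public static void main(String[] args) {
    System.out.println(new View(new Rm()).render("app_1") != null);  // true
  }
}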

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
index 42ee53c..8cfd246 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/FairSchedulerAppsBlock.java
@@ -56,7 +56,7 @@ public class FairSchedulerAppsBlock extends HtmlBlock {
   final ConcurrentMap<ApplicationId, RMApp> apps;
   final FairSchedulerInfo fsinfo;
   final Configuration conf;
-  
+  final ResourceManager rm;
   @Inject
   public FairSchedulerAppsBlock(ResourceManager rm, ViewContext ctx,
       Configuration conf) {
@@ -73,6 +73,7 @@ public class FairSchedulerAppsBlock extends HtmlBlock {
       }
     }
     this.conf = conf;
+    this.rm = rm;
   }
   
   @Override public void render(Block html) {
@@ -107,7 +108,7 @@ public class FairSchedulerAppsBlock extends HtmlBlock {
       if (reqAppStates != null && !reqAppStates.contains(app.createApplicationState())) {
         continue;
       }
-      AppInfo appInfo = new AppInfo(app, true, WebAppUtils.getHttpSchemePrefix(conf));
+      AppInfo appInfo = new AppInfo(rm, app, true, WebAppUtils.getHttpSchemePrefix(conf));
       String percent = String.format("%.1f", appInfo.getProgress());
       ApplicationAttemptId attemptId = app.getCurrentAppAttempt().getAppAttemptId();
       int fairShare = fsinfo.getAppFairShare(attemptId);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
index 1834b6a..f8836d5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
@@ -476,8 +476,8 @@ public class RMWebServices {
         }
       }
 
-      AppInfo app = new AppInfo(rmapp, hasAccess(rmapp, hsr),
-          WebAppUtils.getHttpSchemePrefix(conf));
+      AppInfo app = new AppInfo(rm, rmapp,
+          hasAccess(rmapp, hsr), WebAppUtils.getHttpSchemePrefix(conf));
       allApps.add(app);
     }
     return allApps;
@@ -617,7 +617,7 @@ public class RMWebServices {
     if (app == null) {
       throw new NotFoundException("app with id: " + appId + " not found");
     }
-    return new AppInfo(app, hasAccess(app, hsr), hsr.getScheme() + "://");
+    return new AppInfo(rm, app, hasAccess(app, hsr), hsr.getScheme() + "://");
   }
 
   @GET

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java
index 66940cb..79b2248 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.yarn.server.resourcemanager.webapp.dao;
 
+import java.util.List;
+
 import javax.xml.bind.annotation.XmlAccessType;
 import javax.xml.bind.annotation.XmlAccessorType;
 import javax.xml.bind.annotation.XmlRootElement;
@@ -27,11 +29,13 @@ import org.apache.hadoop.yarn.api.records.ApplicationResourceUsageReport;
 import org.apache.hadoop.yarn.api.records.Container;
 import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
 import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.api.records.ResourceRequest;
 import org.apache.hadoop.yarn.api.records.YarnApplicationState;
 import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttempt;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler;
 import org.apache.hadoop.yarn.util.ConverterUtils;
 import org.apache.hadoop.yarn.util.Times;
 import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
@@ -88,10 +92,14 @@ public class AppInfo {
   protected int numNonAMContainerPreempted;
   protected int numAMContainerPreempted;
 
+  protected List<ResourceRequest> resourceRequests;
+
   public AppInfo() {
   } // JAXB needs this
 
-  public AppInfo(RMApp app, Boolean hasAccess, String schemePrefix) {
+  @SuppressWarnings({ "rawtypes", "unchecked" })
+  public AppInfo(ResourceManager rm, RMApp app, Boolean hasAccess,
+      String schemePrefix) {
     this.schemePrefix = schemePrefix;
     if (app != null) {
       String trackingUrl = app.getTrackingUrl();
@@ -154,6 +162,9 @@ public class AppInfo {
             allocatedVCores = usedResources.getVirtualCores();
             runningContainers = resourceReport.getNumUsedContainers();
           }
+          resourceRequests =
+              ((AbstractYarnScheduler) rm.getRMContext().getScheduler())
+                .getPendingResourceRequestsForAttempt(attempt.getAppAttemptId());
         }
       }
 
@@ -299,4 +310,8 @@ public class AppInfo {
   public long getVcoreSeconds() {
     return vcoreSeconds;
   }
+
+  public List<ResourceRequest> getResourceRequests() {
+    return this.resourceRequests;
+  }
 }
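
The new AppInfo constructor is where the rm parameter pays off: it walks
rm.getRMContext().getScheduler(), casts to AbstractYarnScheduler (hence the
@SuppressWarnings; the cast is unchecked and assumes the configured scheduler
extends AbstractYarnScheduler, as the built-in Fifo, Capacity and Fair
schedulers do), and pulls the attempt's pending asks for the web views to
render. A null-safe sketch of that lookup shape, with SchedulerLike as a
hypothetical stand-in for the scheduler type:

import java.util.Collections;
import java.util.List;

public class PendingAsksLookup {
  interface SchedulerLike {
    List<String> getPendingResourceRequestsForAttempt(String attemptId);
  }

  // Guarded version of what AppInfo does for the current attempt; the
  // caller then renders whatever list comes back, possibly empty.
  static List<String> pendingAsks(SchedulerLike scheduler, String attemptId) {
    if (scheduler == null || attemptId == null) {
      return Collections.emptyList();
    }
    return scheduler.getPendingResourceRequestsForAttempt(attemptId);
  }

  public static void main(String[] args) {
    System.out.println(pendingAsks(null, "attempt_1"));  // []
  }
}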

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebAppFairScheduler.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebAppFairScheduler.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebAppFairScheduler.java
index f07cb8d..b850a5e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebAppFairScheduler.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebAppFairScheduler.java
@@ -22,6 +22,7 @@ import com.google.common.collect.Maps;
 import com.google.inject.Binder;
 import com.google.inject.Injector;
 import com.google.inject.Module;
+
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.api.records.ApplicationId;
@@ -35,8 +36,8 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppMetrics;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppState;
 import org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttempt;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceScheduler;
-
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSAppAttempt;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairSchedulerConfiguration;
@@ -149,13 +150,18 @@ public class TestRMWebAppFairScheduler {
       i++;
     }
 
-    return new RMContextImpl(null, null, null, null,
+    RMContextImpl rmContext =  new RMContextImpl(null, null, null, null,
         null, null, null, null, null, null) {
       @Override
       public ConcurrentMap<ApplicationId, RMApp> getRMApps() {
         return applicationsMaps;
       }
+      @Override
+      public ResourceScheduler getScheduler() {
+        return mock(AbstractYarnScheduler.class);
+      }
     };
+    return rmContext;
   }
 
   private static ResourceManager mockRm(RMContext rmContext) throws
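
The test fix mirrors the production change: because AppInfo now dereferences
rmContext.getScheduler(), the mocked RMContext must return something
non-null. The anonymous-subclass-plus-stub pattern used here, reduced to its
essentials (Scheduler and Context are hypothetical stand-ins for the YARN
types, and the inner stub stands in for Mockito's
mock(AbstractYarnScheduler.class)):

public class OverrideOneGetter {
  interface Scheduler { }

  static class Context {
    Scheduler getScheduler() { return null; }
  }

  public static void main(String[] args) {
    // Anonymous subclass: every other method keeps the base behaviour;
    // only the scheduler accessor is redirected to a stub.
    Context ctx = new Context() {
      @Override
      Scheduler getScheduler() {
        return new Scheduler() { };
      }
    };
    System.out.println(ctx.getScheduler() != null);  // true
  }
}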

http://git-wip-us.apache.org/repos/asf/hadoop/blob/edcecedc/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
index 705fd31..c60a584 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
@@ -1314,8 +1314,7 @@ public class TestRMWebServicesApps extends JerseyTestBase {
   public void verifyAppInfo(JSONObject info, RMApp app) throws JSONException,
       Exception {
 
-    // 28 because trackingUrl not assigned yet
-    assertEquals("incorrect number of elements", 26, info.length());
+    assertEquals("incorrect number of elements", 27, info.length());
 
     verifyAppInfoGeneric(app, info.getString("id"), info.getString("user"),
         info.getString("name"), info.getString("applicationType"),


[25/43] hadoop git commit: YARN-3281. Added RMStateStore to StateMachine visualization list. Contributed by Chengbing Liu

Posted by zj...@apache.org.
YARN-3281. Added RMStateStore to StateMachine visualization list. Contributed by Chengbing Liu


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5d0bae55
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5d0bae55
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5d0bae55

Branch: refs/heads/YARN-2928
Commit: 5d0bae550f5b9a6005aa1d373cfe1ec80513dbd9
Parents: ca1c00b
Author: Jian He <ji...@apache.org>
Authored: Mon Mar 2 14:39:49 2015 -0800
Committer: Jian He <ji...@apache.org>
Committed: Mon Mar 2 14:39:49 2015 -0800

----------------------------------------------------------------------
 hadoop-yarn-project/CHANGES.txt                               | 3 +++
 .../hadoop-yarn-server-resourcemanager/pom.xml                | 7 ++++---
 2 files changed, 7 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5d0bae55/hadoop-yarn-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index cef1758..c7dac60 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -345,6 +345,9 @@ Release 2.7.0 - UNRELEASED
     YARN-3262. Surface application outstanding resource requests table 
     in RM web UI. (Jian He via wangda)
 
+    YARN-3281. Added RMStateStore to StateMachine visualization list.
+    (Chengbing Liu via jianhe)
+
   OPTIMIZATIONS
 
     YARN-2990. FairScheduler's delay-scheduling always waits for node-local and 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/5d0bae55/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
index ff429cc..aaa0de5 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/pom.xml
@@ -278,7 +278,7 @@
               <source>
                 <directory>${basedir}/src/main/proto</directory>
                 <includes>
-		          <include>yarn_server_resourcemanager_recovery.proto</include>
+                  <include>yarn_server_resourcemanager_recovery.proto</include>
                 </includes>
               </source>
               <output>${project.build.directory}/generated-sources/java</output>
@@ -331,10 +331,11 @@
                 </goals>
                 <configuration>
                   <mainClass>org.apache.hadoop.yarn.state.VisualizeStateMachine</mainClass>
-		  <classpathScope>compile</classpathScope>
+                  <classpathScope>compile</classpathScope>
                   <arguments>
                     <argument>ResourceManager</argument>
-                    <argument>org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl,
+                    <argument>org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore,
+                      org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl,
                       org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl,
                       org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl,
                       org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl</argument>


[20/43] hadoop git commit: HADOOP-11657. Align the output of `hadoop fs -du` to be more Unix-like. (aajisaka)

Posted by zj...@apache.org.
HADOOP-11657. Align the output of `hadoop fs -du` to be more Unix-like. (aajisaka)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/30e73ebc
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/30e73ebc
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/30e73ebc

Branch: refs/heads/YARN-2928
Commit: 30e73ebc77654ff941bcae5b6fb11d52c6d74d2e
Parents: e9ac88a
Author: Akira Ajisaka <aa...@apache.org>
Authored: Sun Mar 1 21:09:15 2015 -0800
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Sun Mar 1 21:09:15 2015 -0800

----------------------------------------------------------------------
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 .../org/apache/hadoop/fs/shell/FsUsage.java     | 12 ++++++--
 .../org/apache/hadoop/hdfs/TestDFSShell.java    | 29 ++++++++++++++++++++
 3 files changed, 42 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/30e73ebc/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index f1d48bc..b1a7a7d 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -13,6 +13,9 @@ Trunk (Unreleased)
 
     HADOOP-10950. rework heap management vars (John Smith via aw)
 
+    HADOOP-11657. Align the output of `hadoop fs -du` to be more Unix-like.
+    (aajisaka)
+
   NEW FEATURES
 
     HADOOP-6590. Add a username check for hadoop sub-commands (John Smith via aw)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/30e73ebc/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
index 5c1dbf0..765b181 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/FsUsage.java
@@ -132,15 +132,23 @@ class FsUsage extends FsCommand {
     }
 
     @Override
-    protected void processPathArgument(PathData item) throws IOException {
+    protected void processArguments(LinkedList<PathData> args)
+        throws IOException {
       usagesTable = new TableBuilder(3);
+      super.processArguments(args);
+      if (!usagesTable.isEmpty()) {
+        usagesTable.printToStream(out);
+      }
+    }
+
+    @Override
+    protected void processPathArgument(PathData item) throws IOException {
       // go one level deep on dirs from cmdline unless in summary mode
       if (!summary && item.stat.isDirectory()) {
         recursePath(item);
       } else {
         super.processPathArgument(item);
       }
-      usagesTable.printToStream(out);
     }
 
     @Override
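
The shape of the fix: table construction and printing move from
processPathArgument, which ran once per path and so reset and reprinted the
table for every argument, up to processArguments, so a single table is built
across all arguments and printed once with column widths computed over every
row. A standalone sketch of that restructuring, under hypothetical names:

import java.util.ArrayList;
import java.util.List;

public class OneTablePerInvocation {
  static final List<String[]> rows = new ArrayList<>();

  static void processPathArgument(String path) {
    // Per-path work only contributes rows; nothing is printed here.
    rows.add(new String[] { "1", "2", path });
  }

  static void processArguments(String[] paths) {
    rows.clear();
    for (String p : paths) {
      processPathArgument(p);
    }
    // A single print at the end lets alignment account for all rows at once.
    for (String[] r : rows) {
      System.out.printf("%-3s %-3s %s%n", r[0], r[1], r[2]);
    }
  }

  public static void main(String[] args) {
    processArguments(new String[] { "/test/dir/file3", "/test/dir/file2" });
  }
}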

http://git-wip-us.apache.org/repos/asf/hadoop/blob/30e73ebc/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
index ee04076..0a88208 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSShell.java
@@ -95,6 +95,14 @@ public class TestDFSShell {
     return f;
   }
 
+  static Path writeByte(FileSystem fs, Path f) throws IOException {
+    DataOutputStream out = fs.create(f);
+    out.writeByte(1);
+    out.close();
+    assertTrue(fs.exists(f));
+    return f;
+  }
+
   static Path mkdir(FileSystem fs, Path p) throws IOException {
     assertTrue(fs.mkdirs(p));
     assertTrue(fs.exists(p));
@@ -272,6 +280,27 @@ public class TestDFSShell {
       Long combinedDiskUsed = myFileDiskUsed + myFile2DiskUsed;
       assertThat(returnString, containsString(combinedLength.toString()));
       assertThat(returnString, containsString(combinedDiskUsed.toString()));
+
+      // Check if output is rendered properly with multiple input paths
+      Path myFile3 = new Path("/test/dir/file3");
+      writeByte(fs, myFile3);
+      assertTrue(fs.exists(myFile3));
+      args = new String[3];
+      args[0] = "-du";
+      args[1] = "/test/dir/file3";
+      args[2] = "/test/dir/file2";
+      val = -1;
+      try {
+        val = shell.run(args);
+      } catch (Exception e) {
+        System.err.println("Exception raised from DFSShell.run " +
+            e.getLocalizedMessage());
+      }
+      assertEquals("Return code should be 0.", 0, val);
+      returnString = out.toString();
+      out.reset();
+      assertTrue(returnString.contains("1   2   /test/dir/file3"));
+      assertTrue(returnString.contains("23  46  /test/dir/file2"));
     } finally {
       System.setOut(psBackup);
       cluster.shutdown();
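
The expected strings encode the new Unix-like alignment: length, space
consumed, path, where the second column is length times the replication
factor (2 in this mini-cluster, judging by 1 -> 2 and 23 -> 46) and each
numeric column is padded to its widest value plus a two-space gap. A toy
formatter that reproduces exactly the two lines the test asserts; the real
TableBuilder derives the widths from the data rather than hardcoding 3:

public class DuLine {
  static String duLine(long length, int replication, String path) {
    long consumed = length * replication;
    return String.format("%-3d %-3d %s", length, consumed, path);
  }

  public static void main(String[] args) {
    System.out.println(duLine(1, 2, "/test/dir/file3"));   // 1   2   /test/dir/file3
    System.out.println(duLine(23, 2, "/test/dir/file2"));  // 23  46  /test/dir/file2
  }
}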


[37/43] hadoop git commit: MAPREDUCE-5657. Fix Javadoc errors caused by incorrect or illegal tags in doc comments. Contributed by Akira AJISAKA.

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java
index fa3708e..2c69542 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/FileOutputFormat.java
@@ -181,7 +181,7 @@ public static final String OUTDIR = "mapreduce.output.fileoutputformat.outputdir
    *  Get the {@link Path} to the task's temporary output directory 
    *  for the map-reduce job
    *  
-   * <h4 id="SideEffectFiles">Tasks' Side-Effect Files</h4>
+   * <b id="SideEffectFiles">Tasks' Side-Effect Files</b>
    * 
    * <p>Some applications need to create/write-to side-files, which differ from
    * the actual job-outputs.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java
index 24baa59..c31cab7 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/MultipleOutputs.java
@@ -81,7 +81,7 @@ import java.util.*;
  * <p>
  * Usage in Reducer:
  * <pre>
- * <K, V> String generateFileName(K k, V v) {
+ * &lt;K, V&gt; String generateFileName(K k, V v) {
  *   return k.toString() + "_" + v.toString();
  * }
  * 
@@ -124,16 +124,16 @@ import java.util.*;
  * </p>
  * 
  * <pre>
- * private MultipleOutputs<Text, Text> out;
+ * private MultipleOutputs&lt;Text, Text&gt; out;
  * 
  * public void setup(Context context) {
- *   out = new MultipleOutputs<Text, Text>(context);
+ *   out = new MultipleOutputs&lt;Text, Text&gt;(context);
  *   ...
  * }
  * 
- * public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
+ * public void reduce(Text key, Iterable&lt;Text&gt; values, Context context) throws IOException, InterruptedException {
  * for (Text t : values) {
- *   out.write(key, t, generateFileName(<<i>parameter list...</i>>));
+ *   out.write(key, t, generateFileName(&lt;<i>parameter list...</i>&gt;));
  *   }
  * }
  * 
@@ -294,7 +294,6 @@ public class MultipleOutputs<KEYOUT, VALUEOUT> {
 
   /**
    * Adds a named output for the job.
-   * <p/>
    *
    * @param job               job to add the named output
    * @param namedOutput       named output name, it has to be a word, letters

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/BinaryPartitioner.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/BinaryPartitioner.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/BinaryPartitioner.java
index 4a40840..2a89908 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/BinaryPartitioner.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/partition/BinaryPartitioner.java
@@ -64,7 +64,7 @@ import org.apache.hadoop.mapreduce.Partitioner;
  *   <li>{@link #setOffsets}</li>
  *   <li>{@link #setLeftOffset}</li>
  *   <li>{@link #setRightOffset}</li>
- * </ul></p>
+ * </ul>
  */
 @InterfaceAudience.Public
 @InterfaceStability.Evolving

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/JobContextImpl.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/JobContextImpl.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/JobContextImpl.java
index 247c2f2..b9014ef 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/JobContextImpl.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/task/JobContextImpl.java
@@ -374,7 +374,6 @@ public class JobContextImpl implements JobContext {
    * Get the timestamps of the archives.  Used by internal
    * DistributedCache and MapReduce code.
    * @return a string array of timestamps 
-   * @throws IOException
    */
   public String[] getArchiveTimestamps() {
     return toTimestampStrs(DistributedCache.getArchiveTimestamps(conf));
@@ -384,7 +383,6 @@ public class JobContextImpl implements JobContext {
    * Get the timestamps of the files.  Used by internal
    * DistributedCache and MapReduce code.
    * @return a string array of timestamps 
-   * @throws IOException
    */
   public String[] getFileTimestamps() {
     return toTimestampStrs(DistributedCache.getFileTimestamps(conf));

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomTextWriter.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomTextWriter.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomTextWriter.java
index 40e101a..6cb3211 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomTextWriter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomTextWriter.java
@@ -42,7 +42,7 @@ import org.apache.hadoop.util.ToolRunner;
  * random sequence of words.
  * In order for this program to generate data for terasort with a 5-10 words
  * per key and 20-100 words per value, have the following config:
- * <xmp>
+ * <pre>{@code
  * <?xml version="1.0"?>
  * <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  * <configuration>
@@ -66,7 +66,7 @@ import org.apache.hadoop.util.ToolRunner;
  *     <name>mapreduce.randomtextwriter.totalbytes</name>
  *     <value>1099511627776</value>
  *   </property>
- * </configuration></xmp>
+ * </configuration>}</pre>
  * 
  * Equivalently, {@link RandomTextWriter} also supports all the above options
  * and ones supported by {@link Tool} via the command-line.
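
For multi-line XML samples the patch swaps <xmp>, an element long obsolete in
HTML and rejected by doclint, for <pre>{@code ...}</pre>, which keeps the
line breaks and leaves the angle brackets literal without manual escaping.
The resulting idiom, in isolation:

public class XmpReplacement {
  /**
   * Example configuration:
   * <pre>{@code
   * <property>
   *   <name>mapreduce.randomtextwriter.totalbytes</name>
   *   <value>1099511627776</value>
   * </property>
   * }</pre>
   */
  public void configure() { }
}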

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomWriter.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomWriter.java b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomWriter.java
index a326c8c..67c9ca8 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomWriter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/RandomWriter.java
@@ -47,7 +47,7 @@ import org.apache.hadoop.util.ToolRunner;
  * random binary sequence file of BytesWritable.
  * In order for this program to generate data for terasort with 10-byte keys
  * and 90-byte values, have the following config:
- * <xmp>
+ * <pre>{@code
  * <?xml version="1.0"?>
  * <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  * <configuration>
@@ -71,8 +71,7 @@ import org.apache.hadoop.util.ToolRunner;
  *     <name>mapreduce.randomwriter.totalbytes</name>
  *     <value>1099511627776</value>
  *   </property>
- * </configuration></xmp>
- * 
+ * </configuration>}</pre>
  * Equivalently, {@link RandomWriter} also supports all the above options
  * and ones supported by {@link GenericOptionsParser} via the command-line.
  */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/MultiFileWordCount.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/MultiFileWordCount.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/MultiFileWordCount.java
index d3df4b3..b51946e 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/MultiFileWordCount.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/MultiFileWordCount.java
@@ -199,7 +199,7 @@ public class MultiFileWordCount extends Configured implements Tool {
   }
 
   /**
-   * This Mapper is similar to the one in {@link WordCount.MapClass}.
+   * This Mapper is similar to the one in {@link WordCount.TokenizerMapper}.
    */
   public static class MapClass extends 
       Mapper<WordOffset, Text, Text, IntWritable> {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/QuasiMonteCarlo.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/QuasiMonteCarlo.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/QuasiMonteCarlo.java
index d565098..25dee6b 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/QuasiMonteCarlo.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/QuasiMonteCarlo.java
@@ -50,7 +50,7 @@ import org.apache.hadoop.util.ToolRunner;
  * where $S=[0,1)^2$ is a unit square,
  * $x=(x_1,x_2)$ is a 2-dimensional point,
  * and $f$ is a function describing the inscribed circle of the square $S$,
- * $f(x)=1$ if $(2x_1-1)^2+(2x_2-1)^2 <= 1$ and $f(x)=0$, otherwise.
+ * $f(x)=1$ if $(2x_1-1)^2+(2x_2-1)^2 &lt;= 1$ and $f(x)=0$, otherwise.
  * It is easy to see that Pi is equal to $4I$.
  * So an approximation of Pi is obtained once $I$ is evaluated numerically.
  * 
@@ -155,7 +155,7 @@ public class QuasiMonteCarlo extends Configured implements Tool {
     /** Map method.
      * @param offset samples starting from the (offset+1)th sample.
      * @param size the number of samples for this map
-     * @param context output {ture->numInside, false->numOutside}
+     * @param context output {ture-&gt;numInside, false-&gt;numOutside}
      */
     public void map(LongWritable offset,
                     LongWritable size,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomTextWriter.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomTextWriter.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomTextWriter.java
index 4d555c6..6309ee6 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomTextWriter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomTextWriter.java
@@ -42,7 +42,7 @@ import org.apache.hadoop.util.ToolRunner;
  * random sequence of words.
  * In order for this program to generate data for terasort with a 5-10 words
  * per key and 20-100 words per value, have the following config:
- * <xmp>
+ * <pre>{@code
  * <?xml version="1.0"?>
  * <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  * <configuration>
@@ -66,7 +66,7 @@ import org.apache.hadoop.util.ToolRunner;
  *     <name>mapreduce.randomtextwriter.totalbytes</name>
  *     <value>1099511627776</value>
  *   </property>
- * </configuration></xmp>
+ * </configuration>}</pre>
  * 
  * Equivalently, {@link RandomTextWriter} also supports all the above options
  * and ones supported by {@link Tool} via the command-line.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomWriter.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomWriter.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomWriter.java
index e1c13ec..8f322b1 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomWriter.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/RandomWriter.java
@@ -47,7 +47,7 @@ import org.apache.hadoop.util.ToolRunner;
  * random binary sequence file of BytesWritable.
  * In order for this program to generate data for terasort with 10-byte keys
  * and 90-byte values, have the following config:
- * <xmp>
+ * <pre>{@code
  * <?xml version="1.0"?>
  * <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  * <configuration>
@@ -71,8 +71,7 @@ import org.apache.hadoop.util.ToolRunner;
  *     <name>mapreduce.randomwriter.totalbytes</name>
  *     <value>1099511627776</value>
  *   </property>
- * </configuration></xmp>
- * 
+ * </configuration>}</pre>
  * Equivalently, {@link RandomWriter} also supports all the above options
  * and ones supported by {@link GenericOptionsParser} via the command-line.
  */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/SecondarySort.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/SecondarySort.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/SecondarySort.java
index d536ec9..8841fdc 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/SecondarySort.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/SecondarySort.java
@@ -74,7 +74,7 @@ public class SecondarySort {
     }
     /**
      * Read the two integers. 
-     * Encoded as: MIN_VALUE -> 0, 0 -> -MIN_VALUE, MAX_VALUE-> -1
+     * Encoded as: MIN_VALUE -&gt; 0, 0 -&gt; -MIN_VALUE, MAX_VALUE-&gt; -1
      */
     @Override
     public void readFields(DataInput in) throws IOException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistBbp.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistBbp.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistBbp.java
index 4484d20..268066c 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistBbp.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/DistBbp.java
@@ -35,7 +35,7 @@ import org.apache.hadoop.util.ToolRunner;
  * A map/reduce program that uses a BBP-type method to compute exact 
  * binary digits of Pi.
  * This program is designed for computing the n th bit of Pi,
- * for large n, say n >= 10^8.
+ * for large n, say n &gt;= 10^8.
  * For computing lower bits of Pi, consider using bbp.
  *
  * The actually computation is done by DistSum jobs.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/math/Modular.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/math/Modular.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/math/Modular.java
index 58f859d..1c039a2 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/math/Modular.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/pi/math/Modular.java
@@ -78,7 +78,7 @@ public class Modular {
     return x >= 1? x - 1: x < 0? x + 1: x;
   }
 
-  /** Given 0 < x < y,
+  /** Given 0 &lt; x &lt; y,
    * return x^(-1) mod y.
    */
   public static long modInverse(final long x, final long y) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/GenSort.java
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/GenSort.java b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/GenSort.java
index 94f9baa..beb0743 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/GenSort.java
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/GenSort.java
@@ -28,7 +28,7 @@ import org.apache.hadoop.util.PureJavaCrc32;
 
 /** 
  * A single process data generator for the terasort data. Based on gensort.c 
- * version 1.1 (3 Mar 2009) from Chris Nyberg <ch...@ordinal.com>.
+ * version 1.1 (3 Mar 2009) from Chris Nyberg &lt;chris.nyberg@ordinal.com&gt;.
  */
 public class GenSort {
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
index ab5b802..a7b68a9 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java
@@ -38,10 +38,10 @@ import com.google.common.collect.Sets;
 /**
  * The CopyListing abstraction is responsible for how the list of
  * sources and targets is constructed, for DistCp's copy function.
- * The copy-listing should be a SequenceFile<Text, CopyListingFileStatus>,
- * located at the path specified to buildListing(),
- * each entry being a pair of (Source relative path, source file status),
- * all the paths being fully qualified.
+ * The copy-listing should be a
+ * SequenceFile&lt;Text, CopyListingFileStatus&gt;, located at the path
+ * specified to buildListing(), each entry being a pair of (Source relative
+ * path, source file status), all the paths being fully qualified.
  */
 public abstract class CopyListing extends Configured {
 
@@ -95,8 +95,8 @@ public abstract class CopyListing extends Configured {
    * Validate input and output paths
    *
    * @param options - Input options
-   * @throws InvalidInputException: If inputs are invalid
-   * @throws IOException: any Exception with FS 
+   * @throws InvalidInputException If inputs are invalid
+   * @throws IOException any Exception with FS
    */
   protected abstract void validatePaths(DistCpOptions options)
       throws IOException, InvalidInputException;
@@ -105,7 +105,7 @@ public abstract class CopyListing extends Configured {
    * The interface to be implemented by sub-classes, to create the source/target file listing.
    * @param pathToListFile Path on HDFS where the listing file is written.
    * @param options Input Options for DistCp (indicating source/target paths.)
-   * @throws IOException: Thrown on failure to create the listing file.
+   * @throws IOException Thrown on failure to create the listing file.
    */
   protected abstract void doBuildListing(Path pathToListFile,
                                          DistCpOptions options) throws IOException;
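
The @throws edits here fix a subtler tag problem: javadoc takes the first
token after @throws verbatim as the exception type, so
"@throws InvalidInputException: If inputs are invalid" tries to link a class
literally named "InvalidInputException:" and fails to resolve. Dropping the
colon restores the link; the well-formed shape is:

public class ThrowsTagForm {
  /**
   * @throws IllegalArgumentException if the input cannot be parsed
   */
  public static void parse(String s) {
    if (s == null) {
      throw new IllegalArgumentException("null input");
    }
  }
}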

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
index d202f0a..28535a7 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java
@@ -76,7 +76,7 @@ public class DistCp extends Configured implements Tool {
    * (E.g. source-paths, target-location, etc.)
    * @param inputOptions Options (indicating source-paths, target-location.)
    * @param configuration The Hadoop configuration against which the Copy-mapper must run.
-   * @throws Exception, on failure.
+   * @throws Exception
    */
   public DistCp(Configuration configuration, DistCpOptions inputOptions) throws Exception {
     Configuration config = new Configuration(configuration);
@@ -142,7 +142,7 @@ public class DistCp extends Configured implements Tool {
    * Implements the core-execution. Creates the file-list for copy,
    * and launches the Hadoop-job, to do the copy.
    * @return Job handle
-   * @throws Exception, on failure.
+   * @throws Exception
    */
   public Job execute() throws Exception {
     assert inputOptions != null;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
index d263f82..159d5ca 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java
@@ -105,7 +105,7 @@ public enum DistCpOptionSwitch {
    * Copy all the source files and commit them atomically to the target
    * This is typically useful in cases where there is a process
    * polling for availability of a file/dir. This option is incompatible
-   * with SYNC_FOLDERS & DELETE_MISSING
+   * with SYNC_FOLDERS and DELETE_MISSING
    */
   ATOMIC_COMMIT(DistCpConstants.CONF_LABEL_ATOMIC_COPY,
       new Option("atomic", false, "Commit all changes or none")),

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java
index 4bbc30d..525136c 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java
@@ -63,7 +63,7 @@ public class OptionsParser {
    * @param args Command-line arguments (excluding the options consumed
    *              by the GenericOptionsParser).
    * @return The Options object, corresponding to the specified command-line.
-   * @throws IllegalArgumentException: Thrown if the parse fails.
+   * @throws IllegalArgumentException Thrown if the parse fails.
    */
   public static DistCpOptions parse(String args[]) throws IllegalArgumentException {
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
index 197edd9..d5fdd7f 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
@@ -70,7 +70,7 @@ public class CopyCommitter extends FileOutputCommitter {
     this.taskAttemptContext = context;
   }
 
-  /** @inheritDoc */
+  /** {@inheritDoc} */
   @Override
   public void commitJob(JobContext jobContext) throws IOException {
     Configuration conf = jobContext.getConfiguration();
@@ -102,7 +102,7 @@ public class CopyCommitter extends FileOutputCommitter {
     }
   }
 
-  /** @inheritDoc */
+  /** {@inheritDoc} */
   @Override
   public void abortJob(JobContext jobContext,
                        JobStatus.State state) throws IOException {

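@inheritDoc only takes effect as an inline tag wrapped in braces; written as a bare block tag, as on the removed lines, the doclet treats it as an unknown tag and warns instead of copying the parent's description. A hedged sketch (both classes are illustrative, not from this patch):

  /** Illustrative parent -- not part of this patch. */
  abstract class Base {
    /** Commits the job, flushing any pending output. */
    abstract void commitJob();
  }

  class Child extends Base {
    /** {@inheritDoc} */ // inline form in braces: copies Base's description
    @Override
    void commitJob() { }
  }
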
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
index ab57127..cca36df 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java
@@ -45,7 +45,7 @@ import org.apache.hadoop.util.StringUtils;
 
 /**
  * Mapper class that executes the DistCp copy operation.
- * Implements the o.a.h.mapreduce.Mapper<> interface.
+ * Implements the o.a.h.mapreduce.Mapper interface.
  */
 public class CopyMapper extends Mapper<Text, CopyListingFileStatus, Text, Text> {
 
@@ -182,10 +182,11 @@ public class CopyMapper extends Mapper<Text, CopyListingFileStatus, Text, Text>
   }
 
   /**
-   * Implementation of the Mapper<>::map(). Does the copy.
+   * Implementation of the Mapper::map(). Does the copy.
    * @param relPath The target path.
    * @param sourceFileStatus The source path.
    * @throws IOException
+   * @throws InterruptedException
    */
   @Override
   public void map(Text relPath, CopyListingFileStatus sourceFileStatus,

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyOutputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyOutputFormat.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyOutputFormat.java
index eb43aa3..a5bd605 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyOutputFormat.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyOutputFormat.java
@@ -97,13 +97,13 @@ public class CopyOutputFormat<K, V> extends TextOutputFormat<K, V> {
     }
   }
 
-  /** @inheritDoc */
+  /** {@inheritDoc} */
   @Override
   public OutputCommitter getOutputCommitter(TaskAttemptContext context) throws IOException {
     return new CopyCommitter(getOutputPath(context), context);
   }
 
-  /** @inheritDoc */
+  /** {@inheritDoc} */
   @Override
   public void checkOutputSpecs(JobContext context) throws IOException {
     Configuration conf = context.getConfiguration();

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
index 1d61156..65d644b 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/RetriableFileCopyCommand.java
@@ -84,8 +84,7 @@ public class RetriableFileCopyCommand extends RetriableCommand {
    * This is the actual copy-implementation.
    * @param arguments Argument-list to the command.
    * @return Number of bytes copied.
-   * @throws Exception: CopyReadException, if there are read-failures. All other
-   *         failures are IOExceptions.
+   * @throws Exception
    */
   @SuppressWarnings("unchecked")
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/UniformSizeInputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/UniformSizeInputFormat.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/UniformSizeInputFormat.java
index 4add0bb..8dc7a65 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/UniformSizeInputFormat.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/UniformSizeInputFormat.java
@@ -38,7 +38,7 @@ import java.util.List;
 import java.util.ArrayList;
 
 /**
- * UniformSizeInputFormat extends the InputFormat<> class, to produce
+ * UniformSizeInputFormat extends the InputFormat class, to produce
  * input-splits for DistCp.
  * It looks at the copy-listing and groups the contents into input-splits such
  * that the total-number of bytes to be copied for each input split is
@@ -55,7 +55,7 @@ public class UniformSizeInputFormat
    * approximately equal.
    * @param context JobContext for the job.
    * @return The list of uniformly-distributed input-splits.
-   * @throws IOException: On failure.
+   * @throws IOException
    * @throws InterruptedException
    */
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicInputFormat.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicInputFormat.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicInputFormat.java
index f5303d5..38269c7 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicInputFormat.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicInputFormat.java
@@ -64,7 +64,7 @@ public class DynamicInputFormat<K, V> extends InputFormat<K, V> {
    * tasks.
    * @param jobContext JobContext for the map job.
    * @return The list of (empty) dynamic input-splits.
-   * @throws IOException, on failure.
+   * @throws IOException
    * @throws InterruptedException
    */
   @Override
@@ -343,7 +343,7 @@ public class DynamicInputFormat<K, V> extends InputFormat<K, V> {
    * @param inputSplit The split for which the RecordReader is required.
    * @param taskAttemptContext TaskAttemptContext for the current attempt.
    * @return DynamicRecordReader instance.
-   * @throws IOException, on failure.
+   * @throws IOException
    * @throws InterruptedException
    */
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicRecordReader.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicRecordReader.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicRecordReader.java
index 40d75f4..00b3c69 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicRecordReader.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/lib/DynamicRecordReader.java
@@ -57,7 +57,7 @@ public class DynamicRecordReader<K, V> extends RecordReader<K, V> {
    * RecordReader to read from chunks.
    * @param inputSplit The InputSplit for the map. Ignored entirely.
    * @param taskAttemptContext The AttemptContext.
-   * @throws IOException, on failure.
+   * @throws IOException
    * @throws InterruptedException
    */
   @Override
@@ -88,7 +88,7 @@ public class DynamicRecordReader<K, V> extends RecordReader<K, V> {
   * been completely exhausted, a new chunk is acquired and read,
    * transparently.
    * @return True, if the nextValue() could be traversed to. False, otherwise.
-   * @throws IOException, on failure.
+   * @throws IOException
    * @throws InterruptedException
    */
   @Override
@@ -130,7 +130,7 @@ public class DynamicRecordReader<K, V> extends RecordReader<K, V> {
   /**
    * Implementation of RecordReader::getCurrentKey().
    * @return The key of the current record. (i.e. the source-path.)
-   * @throws IOException, on failure.
+   * @throws IOException
    * @throws InterruptedException
    */
   @Override
@@ -142,7 +142,7 @@ public class DynamicRecordReader<K, V> extends RecordReader<K, V> {
   /**
    * Implementation of RecordReader::getCurrentValue().
    * @return The value of the current record. (i.e. the target-path.)
-   * @throws IOException, on failure.
+   * @throws IOException
    * @throws InterruptedException
    */
   @Override
@@ -154,7 +154,7 @@ public class DynamicRecordReader<K, V> extends RecordReader<K, V> {
   /**
    * Implementation of RecordReader::getProgress().
    * @return A fraction [0.0,1.0] indicating the progress of a DistCp mapper.
-   * @throws IOException, on failure.
+   * @throws IOException
    * @throws InterruptedException
    */
   @Override
@@ -192,7 +192,7 @@ public class DynamicRecordReader<K, V> extends RecordReader<K, V> {
   /**
    * Implementation of RecordReader::close().
    * Closes the RecordReader.
-   * @throws IOException, on failure.
+   * @throws IOException
    */
   @Override
   public void close()

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
index ca7566b..20fdf11 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java
@@ -65,7 +65,7 @@ public class DistCpUtils {
    * @param path The path of the file whose size is sought.
    * @param configuration Configuration, to retrieve the appropriate FileSystem.
    * @return The file-size, in number of bytes.
-   * @throws IOException, on failure.
+   * @throws IOException
    */
   public static long getFileSize(Path path, Configuration configuration)
                                             throws IOException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/RetriableCommand.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/RetriableCommand.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/RetriableCommand.java
index 563372e..c27b2e1 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/RetriableCommand.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/RetriableCommand.java
@@ -77,7 +77,7 @@ public abstract class RetriableCommand {
    *  2. the command may no longer be retried (e.g. runs out of retry-attempts).
    * @param arguments The list of arguments for the command.
    * @return Generic "Object" from doExecute(), on success.
-   * @throws IOException, IOException, on complete failure.
+   * @throws Exception
    */
   public Object execute(Object... arguments) throws Exception {
     Exception latestException;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
index d08a301..9e435d9 100644
--- a/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
+++ b/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java
@@ -62,7 +62,7 @@ public class ThrottledInputStream extends InputStream {
     rawStream.close();
   }
 
-  /** @inheritDoc */
+  /** {@inheritDoc} */
   @Override
   public int read() throws IOException {
     throttle();
@@ -73,7 +73,7 @@ public class ThrottledInputStream extends InputStream {
     return data;
   }
 
-  /** @inheritDoc */
+  /** {@inheritDoc} */
   @Override
   public int read(byte[] b) throws IOException {
     throttle();
@@ -84,7 +84,7 @@ public class ThrottledInputStream extends InputStream {
     return readLen;
   }
 
-  /** @inheritDoc */
+  /** {@inheritDoc} */
   @Override
   public int read(byte[] b, int off, int len) throws IOException {
     throttle();
@@ -155,7 +155,7 @@ public class ThrottledInputStream extends InputStream {
     return totalSleepTime;
   }
 
-  /** @inheritDoc */
+  /** {@inheritDoc} */
   @Override
   public String toString() {
     return "ThrottledInputStream{" +

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java b/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
index 050bfbe..449ecbf 100644
--- a/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
+++ b/hadoop-tools/hadoop-extras/src/main/java/org/apache/hadoop/tools/Logalyzer.java
@@ -60,7 +60,9 @@ import org.apache.hadoop.mapreduce.lib.map.RegexMapper;
  *  b) Directory on dfs to archive the logs. 
  *  c) The sort/grep patterns for analyzing the files and separator for boundaries.
  * Usage: 
- * Logalyzer -archive -archiveDir <directory to archive logs> -analysis <directory> -logs <log-list uri> -grep <pattern> -sort <col1, col2> -separator <separator>   
+ * Logalyzer -archive -archiveDir &lt;directory to archive logs&gt; -analysis
+ * &lt;directory&gt; -logs &lt;log-list uri&gt; -grep &lt;pattern&gt; -sort
+ * &lt;col1, col2&gt; -separator &lt;separator&gt;
  * <p>
  */
 

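Escaping the angle brackets to &lt; and &gt;, as above, keeps the javadoc tool from parsing them as HTML tags. Where the bracketed text is code, the {@code ...} inline tag is an alternative that renders its contents literally without manual entities; a small sketch (the interface is hypothetical, not the real Logalyzer):

  /** Illustrative only. */
  interface UsagePrinter {
    /**
     * Prints usage of the form {@code -sort <col1, col2> -separator <sep>};
     * the {@code ...} tag renders the angle brackets literally.
     */
    void printUsage();
  }
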
http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/emulators/resourceusage/ResourceUsageEmulatorPlugin.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/emulators/resourceusage/ResourceUsageEmulatorPlugin.java b/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/emulators/resourceusage/ResourceUsageEmulatorPlugin.java
index 593c1a4..7a80e8d 100644
--- a/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/emulators/resourceusage/ResourceUsageEmulatorPlugin.java
+++ b/hadoop-tools/hadoop-gridmix/src/main/java/org/apache/hadoop/mapred/gridmix/emulators/resourceusage/ResourceUsageEmulatorPlugin.java
@@ -35,7 +35,7 @@ import org.apache.hadoop.conf.Configuration;
  * {@link ResourceUsageEmulatorPlugin} is also configured with a feedback module
  * i.e a {@link ResourceCalculatorPlugin}, to monitor the current resource 
  * usage. {@link ResourceUsageMetrics} decides the final resource usage value to
- * emulate. {@link Progressive} keeps track of the task's progress.</p>
+ * emulate. {@link Progressive} keeps track of the task's progress.
  * 
  * <br><br>
  * 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java
index 25a7e93..d11c369 100644
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java
+++ b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/RestClientBindings.java
@@ -31,10 +31,10 @@ import static org.apache.hadoop.fs.swift.http.SwiftProtocolConstants.*;
 /**
  * This class implements the binding logic between Hadoop configurations
  * and the swift rest client.
- * <p/>
+ * <p>
  * The swift rest client takes a Properties instance containing
  * the string values it uses to bind to a swift endpoint.
- * <p/>
+ * <p>
  * This class extracts the values for a specific filesystem endpoint
  * and then builds an appropriate Properties file.
  */
@@ -188,7 +188,7 @@ public final class RestClientBindings {
 
   /**
    * Copy a (trimmed) property from the configuration file to the properties file.
-   * <p/>
+   * <p>
    * If marked as required and not found in the configuration, an
    * exception is raised.
    * If not required -and missing- then the property will not be set.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java
index 28f8b47..55dad11 100644
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java
+++ b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/http/SwiftRestClient.java
@@ -1061,10 +1061,9 @@ public final class SwiftRestClient {
    * Authenticate to Openstack Keystone
    * As well as returning the access token, the member fields {@link #token},
    * {@link #endpointURI} and {@link #objectLocationURI} are set up for re-use.
-   * <p/>
+   * <p>
    * This method is re-entrant -if more than one thread attempts to authenticate
   * neither will block -but the field values will have those of the last caller.
-   * <p/>
    *
    * @return authenticated access token
    */
@@ -1575,6 +1574,7 @@ public final class SwiftRestClient {
    * @param path path to object
   * @param endpointURI domain url e.g. http://domain.com
    * @return valid URI for object
+   * @throws SwiftException
    */
   public static URI pathToURI(SwiftObjectPath path,
                               URI endpointURI) throws SwiftException {
@@ -1820,7 +1820,7 @@ public final class SwiftRestClient {
 
   /**
    * Get the blocksize of this filesystem
-   * @return a blocksize >0
+   * @return a blocksize &gt; 0
    */
   public long getBlocksizeKB() {
     return blocksizeKB;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
index b70f7ef..27a572f 100644
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
+++ b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystem.java
@@ -225,10 +225,10 @@ public class SwiftNativeFileSystem extends FileSystem {
    * Return an array containing hostnames, offset and size of
    * portions of the given file.  For a nonexistent
    * file or regions, null will be returned.
-   * <p/>
+   * <p>
    * This call is most helpful with DFS, where it returns
    * hostnames of machines that contain the given file.
-   * <p/>
+   * <p>
    * The FileSystem will simply return an elt containing 'localhost'.
    */
   @Override
@@ -645,7 +645,7 @@ public class SwiftNativeFileSystem extends FileSystem {
   /**
    * Low level method to do a deep listing of all entries, not stopping
    * at the next directory entry. This is to let tests be confident that
-   * recursive deletes &c really are working.
+   * recursive deletes really are working.
    * @param path path to recurse down
    * @param newest ask for the newest data, potentially slower than not.
    * @return a potentially empty array of file status

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java
index 0138eae..6d812a0 100644
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java
+++ b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/snative/SwiftNativeFileSystemStore.java
@@ -518,7 +518,7 @@ public class SwiftNativeFileSystemStore {
    * Rename through copy-and-delete. this is a consequence of the
    * Swift filesystem using the path as the hash
    * into the Distributed Hash Table, "the ring" of filenames.
-   * <p/>
+   * <p>
    * Because of the nature of the operation, it is not atomic.
    *
    * @param src source file/dir
@@ -847,7 +847,7 @@ public class SwiftNativeFileSystemStore {
   }
 
   /**
-   * Insert a throttled wait if the throttle delay >0
+   * Insert a throttled wait if the throttle delay &gt; 0
    * @throws InterruptedIOException if interrupted during sleep
    */
   public void throttle() throws InterruptedIOException {
@@ -878,7 +878,7 @@ public class SwiftNativeFileSystemStore {
    * raised. This lets the caller distinguish a file not found with
    * other reasons for failure, so handles race conditions in recursive
    * directory deletes better.
-   * <p/>
+   * <p>
    * The problem being addressed is: caller A requests a recursive directory
    * of directory /dir ; caller B requests a delete of a file /dir/file,
    * between caller A enumerating the files contents, and requesting a delete

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
index c9e26ac..01ec739 100644
--- a/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
+++ b/hadoop-tools/hadoop-openstack/src/main/java/org/apache/hadoop/fs/swift/util/SwiftTestUtils.java
@@ -236,7 +236,7 @@ public class SwiftTestUtils extends org.junit.Assert {
 
   /**
    * Convert a byte to a character for printing. If the
-   * byte value is < 32 -and hence unprintable- the byte is
+   * byte value is &lt; 32 -and hence unprintable- the byte is
    * returned as a two digit hex value
    * @param b byte
    * @return the printable character string

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/InputDemuxer.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/InputDemuxer.java b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/InputDemuxer.java
index cd99e1c..0927a77 100644
--- a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/InputDemuxer.java
+++ b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/InputDemuxer.java
@@ -45,12 +45,12 @@ public interface InputDemuxer extends Closeable {
   public void bindTo(Path path, Configuration conf) throws IOException;
 
   /**
-   * Get the next <name, input> pair. The name should preserve the original job
+   * Get the next &lt;name, input&gt; pair. The name should preserve the original job
    * history file or job conf file name. The input object should be closed
    * before calling getNext() again. The old input object would be invalid after
    * calling getNext() again.
    * 
-   * @return the next <name, input> pair.
+   * @return the next &lt;name, input&gt; pair.
    */
   public Pair<String, InputStream> getNext() throws IOException;
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
index c2537be..7547eca 100644
--- a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
+++ b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/datatypes/util/MapReduceJobPropertiesParser.java
@@ -67,8 +67,9 @@ import org.apache.log4j.Logger;
  * ignoring user-specific and hard-to-parse keys but also provides a consistent
  * view for all possible inputs. So if users invoke the 
  * {@link #parseJobProperty(String, String)} API with either
- * <"mapreduce.job.user.name", "bob"> or <"user.name", "bob">, then the result 
- * would be a {@link UserName} {@link DataType} wrapping the user-name "bob".
+ * &lt;"mapreduce.job.user.name", "bob"&gt; or &lt;"user.name", "bob"&gt;,
+ * then the result would be a {@link UserName} {@link DataType} wrapping
+ * the user-name "bob".
  */
 @SuppressWarnings("deprecation")
 public class MapReduceJobPropertiesParser implements JobPropertyParser {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/9ae7f9eb/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java
index b88b37e..2253225 100644
--- a/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java
+++ b/hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/package-info.java
@@ -41,7 +41,7 @@
  *        String conf_filename = .. // assume the job configuration filename here
  *        
  *        // construct a list of interesting properties
- *        List<String> interestedProperties = new ArrayList<String>();
+ *        List&lt;String&gt; interestedProperties = new ArrayList&lt;String&gt;();
  *        interestedProperties.add("mapreduce.job.name");
  *        
  *        JobConfigurationParser jcp = 
@@ -154,7 +154,7 @@
  *        TopologyBuilder tb = new TopologyBuilder();
  *        
  *        // construct a list of interesting properties
- *        List<String> interestingProperties = new ArrayList<Strng>();
+ *        List&lt;String&gt; interestingProperties = new ArrayList&lt;String&gt;();
  *        // add the interesting properties here
  *        interestingProperties.add("mapreduce.job.name");
  *        
@@ -207,7 +207,7 @@
  *        JobBuilder jb = new JobBuilder(jobID);
  *        
  *        // construct a list of interesting properties
- *        List<String> interestingProperties = new ArrayList<Strng>();
+ *        List&lt;String&gt; interestingProperties = new ArrayList&lt;String&gt;();
  *        // add the interesting properties here
  *        interestingProperties.add("mapreduce.job.name");
  *        
@@ -269,7 +269,7 @@
  *        TopologyBuilder tb = new TopologyBuilder();
  *        
  *        // construct a list of interesting properties
- *        List<String> interestingProperties = new ArrayList<Strng>();
+ *        List&lt;String&gt; interestingProperties = new ArrayList&lt;String&gt;();
  *        // add the interesting properties here
  *        interestingProperties.add("mapreduce.job.name");
  *        


[30/43] hadoop git commit: HDFS-7302. Remove "downgrade" from "namenode -rollingUpgrade" startup option since it may incorrectly finalize an ongoing rolling upgrade. Contributed by Kai Sasaki

Posted by zj...@apache.org.
HDFS-7302. Remove "downgrade" from "namenode -rollingUpgrade" startup option since it may incorrectly finalize an ongoing rolling upgrade.
    Contributed by Kai Sasaki


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/431e7d84
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/431e7d84
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/431e7d84

Branch: refs/heads/YARN-2928
Commit: 431e7d84c7b68b34ff18de19afe8e46637047fa6
Parents: 14dd647
Author: Tsz-Wo Nicholas Sze <sz...@hortonworks.com>
Authored: Tue Mar 3 10:04:08 2015 +0800
Committer: Tsz-Wo Nicholas Sze <sz...@hortonworks.com>
Committed: Tue Mar 3 10:04:08 2015 +0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt             |  4 ++++
 .../hadoop/hdfs/server/common/HdfsServerConstants.java  | 10 +++++++++-
 .../hadoop/hdfs/server/namenode/FSEditLogLoader.java    |  3 ---
 .../org/apache/hadoop/hdfs/server/namenode/FSImage.java |  4 ----
 .../hadoop/hdfs/server/namenode/FSNamesystem.java       |  3 +--
 .../hadoop-hdfs/src/site/markdown/HDFSCommands.md       |  2 +-
 .../hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml    | 11 +++++------
 .../apache/hadoop/hdfs/TestRollingUpgradeDowngrade.java | 12 ++++++++----
 .../hdfs/server/datanode/TestHdfsServerConstants.java   |  3 ---
 .../hdfs/server/namenode/TestNameNodeOptionParsing.java |  8 --------
 10 files changed, 28 insertions(+), 32 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 43505d7..52e5d3c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -14,6 +14,10 @@ Trunk (Unreleased)
 
     HDFS-2538. option to disable fsck dots (Mohammad Kamrul Islam via aw)
 
+    HDFS-7302. Remove "downgrade" from "namenode -rollingUpgrade" startup
+    option since it may incorrectly finalize an ongoing rolling upgrade.
+    (Kai Sasaki via szetszwo)
+
   NEW FEATURES
 
     HDFS-3125. Add JournalService to enable Journal Daemon. (suresh)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
index 9bba2c9..ff64524 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
@@ -49,7 +49,7 @@ public final class HdfsServerConstants {
 
   /** Startup options for rolling upgrade. */
   public static enum RollingUpgradeStartupOption{
-    ROLLBACK, DOWNGRADE, STARTED;
+    ROLLBACK, STARTED;
 
     public String getOptionString() {
       return StartupOption.ROLLINGUPGRADE.getName() + " "
@@ -64,6 +64,14 @@ public final class HdfsServerConstants {
     private static final RollingUpgradeStartupOption[] VALUES = values();
 
     static RollingUpgradeStartupOption fromString(String s) {
+      if ("downgrade".equalsIgnoreCase(s)) {
+        throw new IllegalArgumentException(
+            "The \"downgrade\" option is no longer supported"
+                + " since it may incorrectly finalize an ongoing rolling upgrade."
+                + " For downgrade instructions, please see the documentation"
+                + " (http://hadoop.apache.org/docs/current/hadoop-project-dist/"
+                + "hadoop-hdfs/HdfsRollingUpgrade.html#Downgrade).");
+      }
       for(RollingUpgradeStartupOption opt : VALUES) {
         if (opt.name().equalsIgnoreCase(s)) {
           return opt;

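The new guard makes the retired flag fail fast at parse time instead of being accepted and, as the commit message warns, incorrectly finalizing an ongoing rolling upgrade. A small self-contained sketch of the resulting behavior (this enum mirrors the patched logic; it is not the real package-private class):

  // Sketch only -- mirrors the patched fromString() logic above.
  enum RollingOpt {
    ROLLBACK, STARTED;

    static RollingOpt fromString(String s) {
      if ("downgrade".equalsIgnoreCase(s)) {
        // Fail fast rather than silently finalizing the upgrade.
        throw new IllegalArgumentException(
            "The \"downgrade\" option is no longer supported.");
      }
      for (RollingOpt opt : values()) {
        if (opt.name().equalsIgnoreCase(s)) {
          return opt;
        }
      }
      throw new IllegalArgumentException("Failed to convert \"" + s + "\"");
    }

    public static void main(String[] args) {
      System.out.println(fromString("rollback"));  // prints ROLLBACK
      System.out.println(fromString("downgrade")); // throws IllegalArgumentException
    }
  }
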
http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
index a09df82..51c167a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
@@ -793,9 +793,6 @@ public class FSEditLogLoader {
             = startOpt.getRollingUpgradeStartupOption(); 
         if (rollingUpgradeOpt == RollingUpgradeStartupOption.ROLLBACK) {
           throw new RollingUpgradeOp.RollbackException();
-        } else if (rollingUpgradeOpt == RollingUpgradeStartupOption.DOWNGRADE) {
-          //ignore upgrade marker
-          break;
         }
       }
       // start rolling upgrade

http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
index 1aeb0b8..44c41d0 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
@@ -687,10 +687,6 @@ public class FSImage implements Closeable {
       long txnsAdvanced = loadEdits(editStreams, target, startOpt, recovery);
       needToSave |= needsResaveBasedOnStaleCheckpoint(imageFile.getFile(),
           txnsAdvanced);
-      if (RollingUpgradeStartupOption.DOWNGRADE.matches(startOpt)) {
-        // rename rollback image if it is downgrade
-        renameCheckpoint(NameNodeFile.IMAGE_ROLLBACK, NameNodeFile.IMAGE);
-      }
     } else {
       // Trigger the rollback for rolling upgrade. Here lastAppliedTxId equals
       // to the last txid in rollback fsimage.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index bbab09e..7cd194e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -972,8 +972,7 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
       MetaRecoveryContext recovery = startOpt.createRecoveryContext();
       final boolean staleImage
           = fsImage.recoverTransitionRead(startOpt, this, recovery);
-      if (RollingUpgradeStartupOption.ROLLBACK.matches(startOpt) ||
-          RollingUpgradeStartupOption.DOWNGRADE.matches(startOpt)) {
+      if (RollingUpgradeStartupOption.ROLLBACK.matches(startOpt)) {
         rollingUpgradeInfo = null;
       }
       final boolean needToSave = staleImage && !haEnabled && !isRollingUpgrade(); 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
index 0573158..191b5bc 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
@@ -435,7 +435,7 @@ Usage:
 | `-upgrade` `[-clusterid cid]` [`-renameReserved` \<k-v pairs\>] | Namenode should be started with upgrade option after the distribution of new Hadoop version. |
 | `-upgradeOnly` `[-clusterid cid]` [`-renameReserved` \<k-v pairs\>] | Upgrade the specified NameNode and then shut it down. |
 | `-rollback` | Rollback the NameNode to the previous version. This should be used after stopping the cluster and distributing the old Hadoop version. |
-| `-rollingUpgrade` \<downgrade\|rollback\|started\> | See [Rolling Upgrade document](./HdfsRollingUpgrade.html#NameNode_Startup_Options) for the detail. |
+| `-rollingUpgrade` \<rollback\|started\> | See [Rolling Upgrade document](./HdfsRollingUpgrade.html#NameNode_Startup_Options) for the detail. |
 | `-finalize` | Finalize will remove the previous state of the file system. Recent upgrade will become permanent. Rollback option will not be available anymore. After finalization it shuts the NameNode down. |
 | `-importCheckpoint` | Loads image from a checkpoint directory and save it into the current one. Checkpoint dir is read from property fs.checkpoint.dir |
 | `-initializeSharedEdits` | Format a new shared edits dir and copy in enough edit log segments so that the standby NameNode can start up. |

http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml b/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
index 1053695..f2f3ebe 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsRollingUpgrade.xml
@@ -308,17 +308,13 @@
   <subsection name="NameNode Startup Options" id="dfsadminCommands">
 
   <h4><code>namenode -rollingUpgrade</code></h4>
-  <source>hdfs namenode -rollingUpgrade &lt;downgrade|rollback|started&gt;</source>
+  <source>hdfs namenode -rollingUpgrade &lt;rollback|started&gt;</source>
   <p>
     When a rolling upgrade is in progress,
     the <code>-rollingUpgrade</code> namenode startup option is used to specify
     various rolling upgrade options.
   </p>
     <ul><li>Options:<table>
-      <tr><td><code>downgrade</code></td>
-        <td>Restores the namenode back to the pre-upgrade release
-            and preserves the user data.</td>
-      </tr>
       <tr><td><code>rollback</code></td>
         <td>Restores the namenode back to the pre-upgrade release
             but also reverts the user data back to the pre-upgrade state.</td>
@@ -329,7 +325,10 @@
           with different layout versions during startup.</td>
       </tr>
     </table></li></ul>
-
+  <p>
+    <b>WARNING: the downgrade option is obsolete.</b>
+      It is no longer necessary to start the namenode with a downgrade option explicitly.
+  </p>
   </subsection>
 
   </section>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgradeDowngrade.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgradeDowngrade.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgradeDowngrade.java
index 22efd6e..189b5f5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgradeDowngrade.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgradeDowngrade.java
@@ -36,7 +36,11 @@ import org.junit.Test;
 
 public class TestRollingUpgradeDowngrade {
 
-  @Test(timeout = 300000)
+  /**
+   * The downgrade option is obsolete, so parsing it should throw an exception.
+   * @throws Exception
+   */
+  @Test(timeout = 300000, expected = IllegalArgumentException.class)
   public void testDowngrade() throws Exception {
     final Configuration conf = new HdfsConfiguration();
     MiniQJMHACluster cluster = null;
@@ -85,10 +89,10 @@ public class TestRollingUpgradeDowngrade {
   }
 
   /**
-   * Ensure that during downgrade the NN fails to load a fsimage with newer
-   * format.
+   * Ensure that restarting the namenode with the downgrade option throws an
+   * exception, because the option is obsolete.
    */
-  @Test(expected = IncorrectVersionException.class)
+  @Test(expected = IllegalArgumentException.class)
   public void testRejectNewFsImage() throws IOException {
     final Configuration conf = new Configuration();
     MiniDFSCluster cluster = null;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestHdfsServerConstants.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestHdfsServerConstants.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestHdfsServerConstants.java
index 2e76b25..0f24c05 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestHdfsServerConstants.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestHdfsServerConstants.java
@@ -83,9 +83,6 @@ public class TestHdfsServerConstants {
     verifyStartupOptionResult("ROLLINGUPGRADE(ROLLBACK)",
                               StartupOption.ROLLINGUPGRADE,
                               RollingUpgradeStartupOption.ROLLBACK);
-    verifyStartupOptionResult("ROLLINGUPGRADE(DOWNGRADE)",
-                              StartupOption.ROLLINGUPGRADE,
-                              RollingUpgradeStartupOption.DOWNGRADE);
     verifyStartupOptionResult("ROLLINGUPGRADE(STARTED)",
         StartupOption.ROLLINGUPGRADE,
         RollingUpgradeStartupOption.STARTED);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/431e7d84/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeOptionParsing.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeOptionParsing.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeOptionParsing.java
index f540253..a3582ce 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeOptionParsing.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeOptionParsing.java
@@ -121,14 +121,6 @@ public class TestNameNodeOptionParsing {
     }
 
     {
-      final String[] args = {"-rollingUpgrade", "downgrade"};
-      final StartupOption opt = NameNode.parseArguments(args);
-      assertEquals(StartupOption.ROLLINGUPGRADE, opt);
-      assertEquals(RollingUpgradeStartupOption.DOWNGRADE, opt.getRollingUpgradeStartupOption());
-      assertTrue(RollingUpgradeStartupOption.DOWNGRADE.matches(opt));
-    }
-
-    {
       final String[] args = {"-rollingUpgrade", "rollback"};
       final StartupOption opt = NameNode.parseArguments(args);
       assertEquals(StartupOption.ROLLINGUPGRADE, opt);


[33/43] hadoop git commit: HADOOP-11602. Fix toUpperCase/toLowerCase to use Locale.ENGLISH. (ozawa)

Posted by zj...@apache.org.
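
Background for the hunks that follow: String.toLowerCase()/toUpperCase() without an explicit locale use the JVM's default locale. Under a Turkish default locale, upper-case I lower-cases to the dotless ı (U+0131), so case-normalized lookups of enum names and configuration keys silently break. Per the commit summary, the StringUtils wrappers introduced here pin the conversion to Locale.ENGLISH; a small demonstration of the difference:

  import java.util.Locale;

  public class LocaleCaseDemo {
    public static void main(String[] args) {
      String s = "TITLE";
      // Turkish lower-cases 'I' to dotless 'ı': prints "tıtle", not "title".
      System.out.println(s.toLowerCase(new Locale("tr", "TR")));
      // An explicit English locale gives the ASCII result expected by
      // lookups such as NodeState.valueOf(...).
      System.out.println(s.toLowerCase(Locale.ENGLISH));
    }
  }
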
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
index 46b45f8..21d70b4 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ClientRMService.java
@@ -46,6 +46,7 @@ import org.apache.hadoop.security.UserGroupInformation.AuthenticationMethod;
 import org.apache.hadoop.security.authorize.PolicyProvider;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.service.AbstractService;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
 import org.apache.hadoop.yarn.api.protocolrecords.ApplicationsRequestScope;
 import org.apache.hadoop.yarn.api.protocolrecords.CancelDelegationTokenRequest;
@@ -756,7 +757,7 @@ public class ClientRMService extends AbstractService implements
       if (applicationTypes != null && !applicationTypes.isEmpty()) {
         String appTypeToMatch = caseSensitive
             ? application.getApplicationType()
-            : application.getApplicationType().toLowerCase();
+            : StringUtils.toLowerCase(application.getApplicationType());
         if (!applicationTypes.contains(appTypeToMatch)) {
           continue;
         }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceWeights.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceWeights.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceWeights.java
index 230f9a9..d6e9e45 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceWeights.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/resource/ResourceWeights.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.resource;
 
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
+import org.apache.hadoop.util.StringUtils;
 
 @Private
 @Evolving
@@ -61,7 +62,7 @@ public class ResourceWeights {
         sb.append(", ");
       }
       ResourceType resourceType = ResourceType.values()[i];
-      sb.append(resourceType.name().toLowerCase());
+      sb.append(StringUtils.toLowerCase(resourceType.name()));
       sb.append(String.format(" weight=%.1f", getWeight(resourceType)));
     }
     sb.append(">");

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
index 3528c2d..102e553 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
@@ -394,7 +394,7 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur
   public QueueState getState(String queue) {
     String state = get(getQueuePrefix(queue) + STATE);
     return (state != null) ? 
-        QueueState.valueOf(state.toUpperCase()) : QueueState.RUNNING;
+        QueueState.valueOf(StringUtils.toUpperCase(state)) : QueueState.RUNNING;
   }
   
   public void setAccessibleNodeLabels(String queue, Set<String> labels) {
@@ -490,7 +490,7 @@ public class CapacitySchedulerConfiguration extends ReservationSchedulerConfigur
   }
   
   private static String getAclKey(QueueACL acl) {
-    return "acl_" + acl.toString().toLowerCase();
+    return "acl_" + StringUtils.toLowerCase(acl.toString());
   }
 
   public AccessControlList getAcl(String queue, QueueACL acl) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
index 32ef906..e477e6e 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairSchedulerConfiguration.java
@@ -28,6 +28,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience.Private;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.utils.BuilderUtils;
@@ -241,7 +242,7 @@ public class FairSchedulerConfiguration extends Configuration {
   public static Resource parseResourceConfigValue(String val)
       throws AllocationConfigurationException {
     try {
-      val = val.toLowerCase();
+      val = StringUtils.toLowerCase(val);
       int memory = findResource(val, "mb");
       int vcores = findResource(val, "vcores");
       return BuilderUtils.newResource(memory, vcores);

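Lower-casing the whole value with a pinned locale before unit matching, as above, lets inputs like "1024 MB, 4 VCORES" parse identically on every JVM. A hedged sketch of the overall shape; findResource below is a stand-in for the private helper in FairSchedulerConfiguration, not its actual code:

  import java.util.Locale;
  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  public class ResourceParseSketch {
    // Stand-in for the private helper; the real implementation may differ.
    static int findResource(String normalized, String unit) {
      Matcher m = Pattern.compile("(\\d+)\\s*" + unit).matcher(normalized);
      if (!m.find()) {
        throw new IllegalArgumentException("Missing resource: " + unit);
      }
      return Integer.parseInt(m.group(1));
    }

    public static void main(String[] args) {
      // Locale-pinned normalization, as in the patch.
      String val = "1024 MB, 4 VCORES".toLowerCase(Locale.ENGLISH);
      System.out.println(findResource(val, "mb"));     // 1024
      System.out.println(findResource(val, "vcores")); // 4
    }
  }
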
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SchedulingPolicy.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SchedulingPolicy.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SchedulingPolicy.java
index cc28afc..bf2a25b 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SchedulingPolicy.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SchedulingPolicy.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair;
 import org.apache.hadoop.classification.InterfaceAudience.Public;
 import org.apache.hadoop.classification.InterfaceStability.Evolving;
 import org.apache.hadoop.util.ReflectionUtils;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.records.Resource;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.DominantResourceFairnessPolicy;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.policies.FairSharePolicy;
@@ -72,7 +73,7 @@ public abstract class SchedulingPolicy {
       throws AllocationConfigurationException {
     @SuppressWarnings("rawtypes")
     Class clazz;
-    String text = policy.toLowerCase();
+    String text = StringUtils.toLowerCase(policy);
     if (text.equalsIgnoreCase(FairSharePolicy.NAME)) {
       clazz = FairSharePolicy.class;
     } else if (text.equalsIgnoreCase(FifoPolicy.NAME)) {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java
index f28a9a8..13e0835 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/NodesPage.java
@@ -77,7 +77,7 @@ class NodesPage extends RmView {
               .th(".nodeManagerVersion", "Version")._()._().tbody();
       NodeState stateFilter = null;
       if (type != null && !type.isEmpty()) {
-        stateFilter = NodeState.valueOf(type.toUpperCase());
+        stateFilter = NodeState.valueOf(StringUtils.toUpperCase(type));
       }
       Collection<RMNode> rmNodes = this.rm.getRMContext().getRMNodes().values();
       boolean isInactive = false;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d1c6accb/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
index f8836d5..059ea09 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
@@ -66,6 +66,7 @@ import org.apache.hadoop.security.authorize.AuthorizationException;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.token.TokenIdentifier;
 import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.yarn.api.protocolrecords.GetApplicationsRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationRequest;
 import org.apache.hadoop.yarn.api.protocolrecords.GetNewApplicationResponse;
@@ -257,7 +258,8 @@ public class RMWebServices {
     } else {
       acceptedStates = EnumSet.noneOf(NodeState.class);
       for (String stateStr : states.split(",")) {
-        acceptedStates.add(NodeState.valueOf(stateStr.toUpperCase()));
+        acceptedStates.add(
+            NodeState.valueOf(StringUtils.toUpperCase(stateStr)));
       }
     }
     
@@ -506,7 +508,7 @@ public class RMWebServices {
     // if no states, returns the counts of all RMAppStates
     if (states.size() == 0) {
       for (YarnApplicationState state : YarnApplicationState.values()) {
-        states.add(state.toString().toLowerCase());
+        states.add(StringUtils.toLowerCase(state.toString()));
       }
     }
     // in case we extend to multiple applicationTypes in the future
@@ -518,8 +520,9 @@ public class RMWebServices {
     ConcurrentMap<ApplicationId, RMApp> apps = rm.getRMContext().getRMApps();
     for (RMApp rmapp : apps.values()) {
       YarnApplicationState state = rmapp.createApplicationState();
-      String type = rmapp.getApplicationType().trim().toLowerCase();
-      if (states.contains(state.toString().toLowerCase())) {
+      String type = StringUtils.toLowerCase(rmapp.getApplicationType().trim());
+      if (states.contains(
+          StringUtils.toLowerCase(state.toString()))) {
         if (types.contains(ANY)) {
           countApp(scoreboard, state, ANY);
         } else if (types.contains(type)) {
@@ -554,7 +557,8 @@ public class RMWebServices {
               if (isState) {
                 try {
                   // enum string is in the uppercase
-                  YarnApplicationState.valueOf(paramStr.trim().toUpperCase());
+                  YarnApplicationState.valueOf(
+                      StringUtils.toUpperCase(paramStr.trim()));
                 } catch (RuntimeException e) {
                   YarnApplicationState[] stateArray =
                       YarnApplicationState.values();
@@ -564,7 +568,8 @@ public class RMWebServices {
                       + " specified. It should be one of " + allAppStates);
                 }
               }
-              params.add(paramStr.trim().toLowerCase());
+              params.add(
+                  StringUtils.toLowerCase(paramStr.trim()));
             }
           }
         }
@@ -582,7 +587,8 @@ public class RMWebServices {
     for (String state : states) {
       Map<String, Long> partScoreboard = new HashMap<String, Long>();
       scoreboard.put(
-          YarnApplicationState.valueOf(state.toUpperCase()), partScoreboard);
+          YarnApplicationState.valueOf(StringUtils.toUpperCase(state)),
+          partScoreboard);
      // types is verified not to be empty
       for (String type : types) {
         partScoreboard.put(type, 0L);

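The hunks above all harden the same pattern: a comma-separated, case-insensitive query parameter is parsed into an EnumSet of states. A simplified, self-contained sketch of that pattern (the NodeState enum below is a stand-in, not the real YARN type):

    import java.util.EnumSet;
    import java.util.Locale;

    public class StateFilterDemo {
      // stand-in for org.apache.hadoop.yarn.api.records.NodeState
      enum NodeState { NEW, RUNNING, UNHEALTHY, DECOMMISSIONED, LOST, REBOOTED }

      public static void main(String[] args) {
        String states = "running, lost";               // query parameter, any case
        EnumSet<NodeState> accepted = EnumSet.noneOf(NodeState.class);
        for (String s : states.split(",")) {
          // pin the locale so the enum lookup is stable on any JVM default
          accepted.add(NodeState.valueOf(s.trim().toUpperCase(Locale.ENGLISH)));
        }
        System.out.println(accepted);                  // [RUNNING, LOST]
      }
    }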

[15/43] hadoop git commit: MAPREDUCE-5653. DistCp does not honour config-overrides for mapreduce.[map, reduce].memory.mb (Ratandeep Ratti via aw)

Posted by zj...@apache.org.
MAPREDUCE-5653. DistCp does not honour config-overrides for mapreduce.[map,reduce].memory.mb (Ratandeep Ratti via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/039366e3
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/039366e3
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/039366e3

Branch: refs/heads/YARN-2928
Commit: 039366e3b430ff7d9a7ff30405a0431292069a8a
Parents: 915bec3
Author: Allen Wittenauer <aw...@apache.org>
Authored: Sat Feb 28 22:53:38 2015 -0800
Committer: Allen Wittenauer <aw...@apache.org>
Committed: Sat Feb 28 22:53:38 2015 -0800

----------------------------------------------------------------------
 hadoop-mapreduce-project/CHANGES.txt                      |  3 +++
 .../hadoop-distcp/src/main/resources/distcp-default.xml   | 10 ----------
 2 files changed, 3 insertions(+), 10 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/039366e3/hadoop-mapreduce-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt
index f509d4e..ccd24a6 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -6,6 +6,9 @@ Trunk (Unreleased)
     MAPREDUCE-5785. Derive heap size or mapreduce.*.memory.mb automatically.
     (Gera Shegalov and Karthik Kambatla via gera)
 
+    MAPREDUCE-5653. DistCp does not honour config-overrides for
+    mapreduce.[map,reduce].memory.mb (Ratandeep Ratti via aw)
+
   NEW FEATURES
 
     MAPREDUCE-778. Rumen Anonymizer. (Amar Kamat and Chris Douglas via amarrk)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/039366e3/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml b/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml
index f50dddd..6e1154e 100644
--- a/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml
+++ b/hadoop-tools/hadoop-distcp/src/main/resources/distcp-default.xml
@@ -32,16 +32,6 @@
     </property>
 
     <property>
-        <name>mapred.job.map.memory.mb</name>
-        <value>1024</value>
-    </property>
-
-    <property>
-        <name>mapred.job.reduce.memory.mb</name>
-        <value>1024</value>
-    </property>
-
-    <property>
         <name>mapred.reducer.new-api</name>
         <value>true</value>
     </property>

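With the hard-coded 1024 MB values removed from distcp-default.xml, user-supplied memory settings take effect again. For illustration, a site- or job-level override would now look like this (the 2048 values are arbitrary examples, not recommended defaults):

    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>2048</value>
    </property>

    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>2048</value>
    </property>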

[42/43] hadoop git commit: YARN-3210. Refactored timeline aggregator according to new code organization proposed in YARN-3166. Contributed by Li Lu.

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestPerNodeAggregatorServer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestPerNodeAggregatorServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestPerNodeAggregatorServer.java
deleted file mode 100644
index 902047d..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestPerNodeAggregatorServer.java
+++ /dev/null
@@ -1,149 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-import static org.mockito.Mockito.doReturn;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;
-import static org.mockito.Mockito.when;
-
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.util.ExitUtil;
-import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.server.api.ContainerInitializationContext;
-import org.apache.hadoop.yarn.server.api.ContainerTerminationContext;
-import org.junit.Test;
-
-public class TestPerNodeAggregatorServer {
-  private ApplicationAttemptId appAttemptId;
-
-  public TestPerNodeAggregatorServer() {
-    ApplicationId appId =
-        ApplicationId.newInstance(System.currentTimeMillis(), 1);
-    appAttemptId = ApplicationAttemptId.newInstance(appId, 1);
-  }
-
-  @Test
-  public void testAddApplication() throws Exception {
-    PerNodeAggregatorServer aggregator = createAggregatorAndAddApplication();
-    // aggregator should have a single app
-    assertTrue(aggregator.hasApplication(
-        appAttemptId.getApplicationId().toString()));
-    aggregator.close();
-  }
-
-  @Test
-  public void testAddApplicationNonAMContainer() throws Exception {
-    PerNodeAggregatorServer aggregator = createAggregator();
-
-    ContainerId containerId = getContainerId(2L); // not an AM
-    ContainerInitializationContext context =
-        mock(ContainerInitializationContext.class);
-    when(context.getContainerId()).thenReturn(containerId);
-    aggregator.initializeContainer(context);
-    // aggregator should not have that app
-    assertFalse(aggregator.hasApplication(
-        appAttemptId.getApplicationId().toString()));
-  }
-
-  @Test
-  public void testRemoveApplication() throws Exception {
-    PerNodeAggregatorServer aggregator = createAggregatorAndAddApplication();
-    // aggregator should have a single app
-    String appIdStr = appAttemptId.getApplicationId().toString();
-    assertTrue(aggregator.hasApplication(appIdStr));
-
-    ContainerId containerId = getAMContainerId();
-    ContainerTerminationContext context =
-        mock(ContainerTerminationContext.class);
-    when(context.getContainerId()).thenReturn(containerId);
-    aggregator.stopContainer(context);
-    // aggregator should not have that app
-    assertFalse(aggregator.hasApplication(appIdStr));
-    aggregator.close();
-  }
-
-  @Test
-  public void testRemoveApplicationNonAMContainer() throws Exception {
-    PerNodeAggregatorServer aggregator = createAggregatorAndAddApplication();
-    // aggregator should have a single app
-    String appIdStr = appAttemptId.getApplicationId().toString();
-    assertTrue(aggregator.hasApplication(appIdStr));
-
-    ContainerId containerId = getContainerId(2L); // not an AM
-    ContainerTerminationContext context =
-        mock(ContainerTerminationContext.class);
-    when(context.getContainerId()).thenReturn(containerId);
-    aggregator.stopContainer(context);
-    // aggregator should still have that app
-    assertTrue(aggregator.hasApplication(appIdStr));
-    aggregator.close();
-  }
-
-  @Test(timeout = 60000)
-  public void testLaunch() throws Exception {
-    ExitUtil.disableSystemExit();
-    PerNodeAggregatorServer server = null;
-    try {
-      server =
-          PerNodeAggregatorServer.launchServer(new String[0]);
-    } catch (ExitUtil.ExitException e) {
-      assertEquals(0, e.status);
-      ExitUtil.resetFirstExitException();
-      fail();
-    } finally {
-      if (server != null) {
-        server.stop();
-      }
-    }
-  }
-
-  private PerNodeAggregatorServer createAggregatorAndAddApplication() {
-    PerNodeAggregatorServer aggregator = createAggregator();
-    // create an AM container
-    ContainerId containerId = getAMContainerId();
-    ContainerInitializationContext context =
-        mock(ContainerInitializationContext.class);
-    when(context.getContainerId()).thenReturn(containerId);
-    aggregator.initializeContainer(context);
-    return aggregator;
-  }
-
-  private PerNodeAggregatorServer createAggregator() {
-    AppLevelServiceManager serviceManager = spy(new AppLevelServiceManager());
-    doReturn(new Configuration()).when(serviceManager).getConfig();
-    PerNodeAggregatorServer aggregator =
-        spy(new PerNodeAggregatorServer(serviceManager));
-    return aggregator;
-  }
-
-  private ContainerId getAMContainerId() {
-    return getContainerId(1L);
-  }
-
-  private ContainerId getContainerId(long id) {
-    return ContainerId.newContainerId(appAttemptId, id);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestPerNodeTimelineAggregatorsAuxService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestPerNodeTimelineAggregatorsAuxService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestPerNodeTimelineAggregatorsAuxService.java
new file mode 100644
index 0000000..1c89ead
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestPerNodeTimelineAggregatorsAuxService.java
@@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.aggregator;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.spy;
+import static org.mockito.Mockito.when;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.server.api.ContainerInitializationContext;
+import org.apache.hadoop.yarn.server.api.ContainerTerminationContext;
+import org.junit.Test;
+
+public class TestPerNodeTimelineAggregatorsAuxService {
+  private ApplicationAttemptId appAttemptId;
+
+  public TestPerNodeTimelineAggregatorsAuxService() {
+    ApplicationId appId =
+        ApplicationId.newInstance(System.currentTimeMillis(), 1);
+    appAttemptId = ApplicationAttemptId.newInstance(appId, 1);
+  }
+
+  @Test
+  public void testAddApplication() throws Exception {
+    PerNodeTimelineAggregatorsAuxService auxService = createAggregatorAndAddApplication();
+    // auxService should have a single app
+    assertTrue(auxService.hasApplication(
+        appAttemptId.getApplicationId().toString()));
+    auxService.close();
+  }
+
+  @Test
+  public void testAddApplicationNonAMContainer() throws Exception {
+    PerNodeTimelineAggregatorsAuxService auxService = createAggregator();
+
+    ContainerId containerId = getContainerId(2L); // not an AM
+    ContainerInitializationContext context =
+        mock(ContainerInitializationContext.class);
+    when(context.getContainerId()).thenReturn(containerId);
+    auxService.initializeContainer(context);
+    // auxService should not have that app
+    assertFalse(auxService.hasApplication(
+        appAttemptId.getApplicationId().toString()));
+  }
+
+  @Test
+  public void testRemoveApplication() throws Exception {
+    PerNodeTimelineAggregatorsAuxService auxService = createAggregatorAndAddApplication();
+    // auxService should have a single app
+    String appIdStr = appAttemptId.getApplicationId().toString();
+    assertTrue(auxService.hasApplication(appIdStr));
+
+    ContainerId containerId = getAMContainerId();
+    ContainerTerminationContext context =
+        mock(ContainerTerminationContext.class);
+    when(context.getContainerId()).thenReturn(containerId);
+    auxService.stopContainer(context);
+    // auxService should not have that app
+    assertFalse(auxService.hasApplication(appIdStr));
+    auxService.close();
+  }
+
+  @Test
+  public void testRemoveApplicationNonAMContainer() throws Exception {
+    PerNodeTimelineAggregatorsAuxService auxService = createAggregatorAndAddApplication();
+    // auxService should have a single app
+    String appIdStr = appAttemptId.getApplicationId().toString();
+    assertTrue(auxService.hasApplication(appIdStr));
+
+    ContainerId containerId = getContainerId(2L); // not an AM
+    ContainerTerminationContext context =
+        mock(ContainerTerminationContext.class);
+    when(context.getContainerId()).thenReturn(containerId);
+    auxService.stopContainer(context);
+    // auxService should still have that app
+    assertTrue(auxService.hasApplication(appIdStr));
+    auxService.close();
+  }
+
+  @Test(timeout = 60000)
+  public void testLaunch() throws Exception {
+    ExitUtil.disableSystemExit();
+    PerNodeTimelineAggregatorsAuxService auxService = null;
+    try {
+      auxService =
+          PerNodeTimelineAggregatorsAuxService.launchServer(new String[0]);
+    } catch (ExitUtil.ExitException e) {
+      assertEquals(0, e.status);
+      ExitUtil.resetFirstExitException();
+      fail();
+    } finally {
+      if (auxService != null) {
+        auxService.stop();
+      }
+    }
+  }
+
+  private PerNodeTimelineAggregatorsAuxService createAggregatorAndAddApplication() {
+    PerNodeTimelineAggregatorsAuxService auxService = createAggregator();
+    // create an AM container
+    ContainerId containerId = getAMContainerId();
+    ContainerInitializationContext context =
+        mock(ContainerInitializationContext.class);
+    when(context.getContainerId()).thenReturn(containerId);
+    auxService.initializeContainer(context);
+    return auxService;
+  }
+
+  private PerNodeTimelineAggregatorsAuxService createAggregator() {
+    TimelineAggregatorsCollection
+        aggregatorsCollection = spy(new TimelineAggregatorsCollection());
+    doReturn(new Configuration()).when(aggregatorsCollection).getConfig();
+    PerNodeTimelineAggregatorsAuxService auxService =
+        spy(new PerNodeTimelineAggregatorsAuxService(aggregatorsCollection));
+    return auxService;
+  }
+
+  private ContainerId getAMContainerId() {
+    return getContainerId(1L);
+  }
+
+  private ContainerId getContainerId(long id) {
+    return ContainerId.newContainerId(appAttemptId, id);
+  }
+}

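The testLaunch case above relies on Hadoop's ExitUtil test hook, which turns exit requests routed through ExitUtil.terminate into catchable exceptions instead of ending the JVM. The pattern in isolation (a sketch using the same ExitUtil calls the test makes, with terminate as ExitUtil's standard exit entry point):

    import org.apache.hadoop.util.ExitUtil;

    public class ExitUtilDemo {
      public static void main(String[] args) {
        ExitUtil.disableSystemExit();      // terminate() now throws instead of exiting
        try {
          ExitUtil.terminate(0);           // would normally end the JVM
        } catch (ExitUtil.ExitException e) {
          assert e.status == 0;            // the intended exit code is preserved
          ExitUtil.resetFirstExitException();
        }
      }
    }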
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregator.java
new file mode 100644
index 0000000..821e455
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregator.java
@@ -0,0 +1,23 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.aggregator;
+
+public class TestTimelineAggregator {
+
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
new file mode 100644
index 0000000..cec1d71
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestTimelineAggregatorsCollection.java
@@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.aggregator;
+
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.spy;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider;
+import org.apache.hadoop.conf.Configuration;
+import org.junit.Test;
+
+public class TestTimelineAggregatorsCollection {
+
+  @Test(timeout=60000)
+  public void testMultithreadedAdd() throws Exception {
+    final TimelineAggregatorsCollection aggregatorCollection =
+        spy(new TimelineAggregatorsCollection());
+    doReturn(new Configuration()).when(aggregatorCollection).getConfig();
+
+    final int NUM_APPS = 5;
+    List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
+    for (int i = 0; i < NUM_APPS; i++) {
+      final String appId = String.valueOf(i);
+      Callable<Boolean> task = new Callable<Boolean>() {
+        public Boolean call() {
+          AppLevelTimelineAggregator aggregator =
+              new AppLevelTimelineAggregator(appId);
+          return (aggregatorCollection.putIfAbsent(appId, aggregator) == aggregator);
+        }
+      };
+      tasks.add(task);
+    }
+    ExecutorService executor = Executors.newFixedThreadPool(NUM_APPS);
+    try {
+      List<Future<Boolean>> futures = executor.invokeAll(tasks);
+      for (Future<Boolean> future: futures) {
+        assertTrue(future.get());
+      }
+    } finally {
+      executor.shutdownNow();
+    }
+    // check the keys
+    for (int i = 0; i < NUM_APPS; i++) {
+      assertTrue(aggregatorCollection.containsKey(String.valueOf(i)));
+    }
+  }
+
+  @Test
+  public void testMultithreadedAddAndRemove() throws Exception {
+    final TimelineAggregatorsCollection aggregatorCollection =
+        spy(new TimelineAggregatorsCollection());
+    doReturn(new Configuration()).when(aggregatorCollection).getConfig();
+
+    final int NUM_APPS = 5;
+    List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
+    for (int i = 0; i < NUM_APPS; i++) {
+      final String appId = String.valueOf(i);
+      Callable<Boolean> task = new Callable<Boolean>() {
+        public Boolean call() {
+          AppLevelTimelineAggregator aggregator =
+              new AppLevelTimelineAggregator(appId);
+          boolean successPut =
+              (aggregatorCollection.putIfAbsent(appId, aggregator) == aggregator);
+          return successPut && aggregatorCollection.remove(appId);
+        }
+      };
+      tasks.add(task);
+    }
+    ExecutorService executor = Executors.newFixedThreadPool(NUM_APPS);
+    try {
+      List<Future<Boolean>> futures = executor.invokeAll(tasks);
+      for (Future<Boolean> future: futures) {
+        assertTrue(future.get());
+      }
+    } finally {
+      executor.shutdownNow();
+    }
+    // check the keys
+    for (int i = 0; i < NUM_APPS; i++) {
+      assertFalse(aggregatorCollection.containsKey(String.valueOf(i)));
+    }
+  }
+}

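Both tests above hinge on putIfAbsent being the single entry point into the collection: under contention exactly one caller's aggregator is retained, and that caller sees its own instance come back. The same idiom with a plain ConcurrentHashMap (a sketch; note that ConcurrentMap.putIfAbsent signals the winner with null, whereas the collection under test returns whichever instance ended up in the map):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class PutIfAbsentDemo {
      public static void main(String[] args) {
        ConcurrentMap<String, Object> aggregators = new ConcurrentHashMap<>();
        Object mine = new Object();
        // null means this call won the race; otherwise 'prior' is the kept instance
        Object prior = aggregators.putIfAbsent("application_1", mine);
        Object kept = (prior == null) ? mine : prior;
        System.out.println("this caller won: " + (kept == mine)); // true
      }
    }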

[05/43] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRestart.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRestart.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRestart.md
new file mode 100644
index 0000000..e516afb
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRestart.md
@@ -0,0 +1,181 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+ResourceManager Restart
+=======================
+
+* [Overview](#Overview)
+* [Feature](#Feature)
+* [Configurations](#Configurations)
+    * [Enable RM Restart](#Enable_RM_Restart)
+    * [Configure the state-store for persisting the RM state](#Configure_the_state-store_for_persisting_the_RM_state)
+    * [How to choose the state-store implementation](#How_to_choose_the_state-store_implementation)
+    * [Configurations for Hadoop FileSystem based state-store implementation](#Configurations_for_Hadoop_FileSystem_based_state-store_implementation)
+    * [Configurations for ZooKeeper based state-store implementation](#Configurations_for_ZooKeeper_based_state-store_implementation)
+    * [Configurations for LevelDB based state-store implementation](#Configurations_for_LevelDB_based_state-store_implementation)
+    * [Configurations for work-preserving RM recovery](#Configurations_for_work-preserving_RM_recovery)
+* [Notes](#Notes)
+* [Sample Configurations](#Sample_Configurations)
+
+Overview
+--------
+
+ResourceManager is the central authority that manages resources and schedules applications running atop YARN. Hence, it is potentially a single point of failure in an Apache YARN cluster.
+
+This document gives an overview of ResourceManager Restart, a feature that enhances ResourceManager to keep functioning across restarts and also makes ResourceManager down-time invisible to end-users.
+
+The ResourceManager Restart feature is divided into two phases:
+
+* **ResourceManager Restart Phase 1 (Non-work-preserving RM restart)**: Enhance RM to persist application/attempt state and other credentials information in a pluggable state-store. RM will reload this information from state-store upon restart and re-kick the previously running applications. Users are not required to re-submit the applications.
+
+* **ResourceManager Restart Phase 2 (Work-preserving RM restart)**: Focus on re-constructing the running state of ResourceManager by combining the container statuses from NodeManagers and container requests from ApplicationMasters upon restart. The key difference from phase 1 is that previously running applications will not be killed after RM restarts, and so applications won't lose their work because of an RM outage.
+
+Feature
+-------
+
+* **Phase 1: Non-work-preserving RM restart** 
+
+     As of the Hadoop 2.4.0 release, only ResourceManager Restart Phase 1 is implemented, which is described below.
+
+     The overall concept is that RM will persist the application metadata (i.e. ApplicationSubmissionContext) in a pluggable state-store when the client submits an application, and also saves the final status of the application, such as the completion state (failed, killed, finished) and diagnostics, when the application completes. Besides, RM also saves credentials like security keys and tokens needed to work in a secure environment. Any time RM shuts down, as long as the required information (i.e. application metadata and the accompanying credentials if running in a secure environment) is available in the state-store, then when RM restarts, it can pick up the application metadata from the state-store and re-submit the application. RM won't re-submit applications that were already completed (i.e. failed, killed, finished) before RM went down.
+
+     NodeManagers and clients during the down-time of RM will keep polling RM until RM comes up. When RM becomes alive, it will send a re-sync command to all the NodeManagers and ApplicationMasters it was talking to via heartbeats. As of the Hadoop 2.4.0 release, the behaviors for NodeManagers and ApplicationMasters to handle this command are: NMs will kill all their managed containers and re-register with RM. From the RM's perspective, these re-registered NodeManagers are similar to newly joining NMs. AMs (e.g. the MapReduce AM) are expected to shut down when they receive the re-sync command. After RM restarts and loads all the application metadata and credentials from the state-store and populates them into memory, it will create a new attempt (i.e. ApplicationMaster) for each application that was not yet completed and re-kick that application as usual. As described before, the previously running applications' work is lost in this manner since they are essentially killed by RM via the re-sync command on restart.
+
+* **Phase 2: Work-preserving RM restart** 
+
+     As of Hadoop 2.6.0, we further enhanced the RM restart feature so that RM does not kill any applications running on the YARN cluster when it restarts.
+
+     Beyond all the groundwork that has been done in Phase 1 to ensure the persistency of application state and reload that state on recovery, Phase 2 primarily focuses on re-constructing the entire running state of the YARN cluster, the majority of which is the state of the central scheduler inside RM, which keeps track of all containers' life-cycles, applications' headroom and resource requests, queues' resource usage, etc. In this way, RM doesn't need to kill the AM and re-run the application from scratch as is done in Phase 1. Applications can simply re-sync back with RM and resume from where they left off.
+
+     RM recovers its running state by taking advantage of the container statuses sent from all NMs. NM will not kill the containers when it re-syncs with the restarted RM. It continues managing the containers and sends the container statuses across to RM when it re-registers. RM reconstructs the container instances and the associated applications' scheduling status by absorbing these containers' information. In the meantime, AM needs to re-send the outstanding resource requests to RM because RM may lose the unfulfilled requests when it shuts down. Application writers using the AMRMClient library to communicate with RM do not need to worry about the part of AM re-sending resource requests to RM on re-sync, as it is automatically taken care of by the library itself.
+
+Configurations
+--------------
+
+This section describes the configurations involved in enabling the RM Restart feature.
+
+### Enable RM Restart
+
+| Property | Value |
+|:---- |:---- |
+| `yarn.resourcemanager.recovery.enabled` | `true` |
+
+### Configure the state-store for persisting the RM state
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.store.class` | The class name of the state-store to be used for saving application/attempt state and the credentials. The available state-store implementations are `org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore`, a ZooKeeper based state-store implementation; `org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore`, a Hadoop FileSystem based state-store implementation supporting HDFS and the local FS; and `org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore`, a LevelDB based state-store implementation. The default value is set to `org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore`. |
+
+### How to choose the state-store implementation
+
+   * **ZooKeeper based state-store**: Users are free to pick any storage to set up RM restart, but must use the ZooKeeper based state-store to support RM HA. The reason is that only the ZooKeeper based state-store supports the fencing mechanism needed to avoid a split-brain situation where multiple RMs assume they are active and can edit the state-store at the same time.
+
+   * **FileSystem based state-store**: HDFS and local FS based state-store are supported. Fencing mechanism is not supported.
+
+   * **LevelDB based state-store**: LevelDB based state-store is considered more lightweight than the HDFS and ZooKeeper based state-stores. LevelDB supports better atomic operations, fewer I/O ops per state update,
+    and far fewer total files on the filesystem. Fencing mechanism is not supported.
+
+### Configurations for Hadoop FileSystem based state-store implementation
+
+   Both HDFS and local FS based state-store implementations are supported. The type of file system to be used is determined by the scheme of the URI, e.g. `hdfs://localhost:9000/rmstore` uses HDFS as the storage and `file:///tmp/yarn/rmstore` uses the local FS. If no scheme (`hdfs://` or `file://`) is specified in the URI, the type of storage to be used is determined by `fs.defaultFS` defined in `core-site.xml`.
+
+* Configure the URI where the RM state will be saved in the Hadoop FileSystem state-store.
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.fs.state-store.uri` | URI pointing to the location of the FileSystem path where RM state will be stored (e.g. hdfs://localhost:9000/rmstore). Default value is `${hadoop.tmp.dir}/yarn/system/rmstore`. If FileSystem name is not provided, `fs.default.name` specified in **conf/core-site.xml** will be used. |
+
+* Configure the retry policy state-store client uses to connect with the Hadoop FileSystem.
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.fs.state-store.retry-policy-spec` | Hadoop FileSystem client retry policy specification. Hadoop FileSystem client retry is always enabled. Specified in pairs of sleep-time and number-of-retries i.e. (t0, n0), (t1, n1), ..., the first n0 retries sleep t0 milliseconds on average, the following n1 retries sleep t1 milliseconds on average, and so on. Default value is (2000, 500) |
+
+### Configurations for ZooKeeper based state-store implementation
+  
+* Configure the ZooKeeper server address and the root path where the RM state is stored.
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.zk-address` | Comma separated list of Host:Port pairs. Each corresponds to a ZooKeeper server (e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002") to be used by the RM for storing RM state. |
+| `yarn.resourcemanager.zk-state-store.parent-path` | The full path of the root znode where RM state will be stored. Default value is /rmstore. |
+
+* Configure the retry policy state-store client uses to connect with the ZooKeeper server.
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.zk-num-retries` | Number of times RM tries to connect to ZooKeeper server if the connection is lost. Default value is 500. |
+| `yarn.resourcemanager.zk-retry-interval-ms` | The interval in milliseconds between retries when connecting to a ZooKeeper server. Default value is 2 seconds. |
+| `yarn.resourcemanager.zk-timeout-ms` | ZooKeeper session timeout in milliseconds. This configuration is used by the ZooKeeper server to determine when the session expires. Session expiration happens when the server does not hear from the client (i.e. no heartbeat) within the session timeout period specified by this configuration. Default value is 10 seconds. |
+
+* Configure the ACLs to be used for setting permissions on ZooKeeper znodes.
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.zk-acl` | ACLs to be used for setting permissions on ZooKeeper znodes. Default value is `world:anyone:rwcda` |
+
+### Configurations for LevelDB based state-store implementation
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.leveldb-state-store.path` | Local path where the RM state will be stored. Default value is `${hadoop.tmp.dir}/yarn/system/rmstore` |
+
+### Configurations for work-preserving RM recovery
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms` | Set the amount of time RM waits before allocating new containers on RM work-preserving recovery. Such a wait period gives RM a chance to settle down resyncing with NMs in the cluster on recovery, before assigning new containers to applications. |
+
+Notes
+-----
+
+ContainerId string format is changed if RM restarts with work-preserving recovery enabled. It used to be of this format:
+
+    Container_{clusterTimestamp}_{appId}_{attemptId}_{containerId}, e.g. Container_1410901177871_0001_01_000005.
+
+It is now changed to:
+
+    Container_e{epoch}_{clusterTimestamp}_{appId}_{attemptId}_{containerId}, e.g. Container_e17_1410901177871_0001_01_000005.
+ 
+Here, the additional epoch number is a monotonically increasing integer which starts from 0 and is increased by 1 each time RM restarts. If the epoch number is 0, it is omitted and the containerId string format stays the same as before.
+
+Sample Configurations
+---------------------
+
+Below is a minimum set of configurations for enabling RM work-preserving restart using ZooKeeper based state store.
+
+
+     <property>
+       <description>Enable RM to recover state after starting. If true, then 
+       yarn.resourcemanager.store.class must be specified</description>
+       <name>yarn.resourcemanager.recovery.enabled</name>
+       <value>true</value>
+     </property>
+   
+     <property>
+       <description>The class to use as the persistent store.</description>
+       <name>yarn.resourcemanager.store.class</name>
+       <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
+     </property>
+
+     <property>
+       <description>Comma separated list of Host:Port pairs. Each corresponds to a ZooKeeper server
+       (e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002") to be used by the RM for storing RM state.
+       This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
+       as the value for yarn.resourcemanager.store.class</description>
+       <name>yarn.resourcemanager.zk-address</name>
+       <value>127.0.0.1:2181</value>
+     </property>
+
+

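As a small illustration of the Notes section in the document above, the epoch can be recovered from the new ContainerId format by treating the e{epoch} segment as optional (a sketch, not part of the patch; the format strings follow the document's own examples):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class ContainerIdEpochDemo {
      private static final Pattern ID = Pattern.compile(
          "Container_(?:e(\\d+)_)?(\\d+)_(\\d+)_(\\d+)_(\\d+)");

      public static void main(String[] args) {
        Matcher m = ID.matcher("Container_e17_1410901177871_0001_01_000005");
        if (m.matches()) {
          // epoch defaults to 0 when the e{epoch} segment is omitted
          int epoch = (m.group(1) == null) ? 0 : Integer.parseInt(m.group(1));
          System.out.println("epoch = " + epoch); // epoch = 17
        }
      }
    }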
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/SecureContainer.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/SecureContainer.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/SecureContainer.md
new file mode 100644
index 0000000..f32e460
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/SecureContainer.md
@@ -0,0 +1,135 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+YARN Secure Containers
+======================
+
+* [Overview](#Overview)
+
+Overview
+--------
+
+YARN containers in a secure cluster use the operating system facilities to offer execution isolation for containers. Secure containers execute under the credentials of the job user. The operating system enforces access restrictions for the container. The container must run as the user that submitted the application.
+
+Secure Containers work only in the context of secured YARN clusters.
+
+###Container isolation requirements
+
+  The container executor must access the local files and directories needed by the container such as jars, configuration files, log files, shared objects etc. Although it is launched by the NodeManager, the container should not have access to the NodeManager private files and configuration. Containers running applications submitted by different users should be isolated and unable to access each other's files and directories. Similar requirements apply to other non-file securable system objects like named pipes, critical sections, LPC queues, shared memory etc.
+
+###Linux Secure Container Executor
+
+  In a Linux environment the secure container executor is the `LinuxContainerExecutor`. It uses an external program called **container-executor** to launch the container. This program has the `setuid` access right flag set, which allows it to launch the container with the permissions of the YARN application user.
+
+###Configuration
+
+  The configured directories for `yarn.nodemanager.local-dirs` and `yarn.nodemanager.log-dirs` must be owned by the configured NodeManager user (`yarn`) and group (`hadoop`). The permission set on these directories must be `drwxr-xr-x`.
+
+  The `container-executor` program must be owned by `root` and have the permission set `---sr-s---`.
+
+  To configure the `NodeManager` to use the `LinuxContainerExecutor` set the following in the **conf/yarn-site.xml**:
+
+```xml
+<property>
+  <name>yarn.nodemanager.container-executor.class</name>
+  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
+</property>
+
+<property>
+  <name>yarn.nodemanager.linux-container-executor.group</name>
+  <value>hadoop</value>
+</property>
+```
+
+  Additionally the LCE requires the `container-executor.cfg` file, which is read by the `container-executor` program.
+
+```
+yarn.nodemanager.linux-container-executor.group=#configured value of yarn.nodemanager.linux-container-executor.group
+banned.users=#comma separated list of users who can not run applications
+allowed.system.users=#comma separated list of allowed system users
+min.user.id=1000#Prevent other super-users
+```
+
+###Windows Secure Container Executor (WSCE)
+
+  The Windows environment secure container executor is the `WindowsSecureContainerExecutor`. It uses the Windows S4U infrastructure to launch the container as the YARN application user. The WSCE requires the presence of the `hadoopwinutilsvc` service. This service is hosted by `%HADOOP_HOME%\bin\winutils.exe` started with the `service` command line argument. This service offers some privileged operations that require LocalSystem authority so that the NM is not required to run the entire JVM and all the NM code in an elevated context. The NM interacts with the `hadoopwinutilsvc` service by means of Local RPC (LRPC) via JNI calls to the RPC client hosted in `hadoop.dll`.
+
+###Configuration
+
+  To configure the `NodeManager` to use the `WindowsSecureContainerExecutor` set the following in the **conf/yarn-site.xml**:
+
+```xml
+        <property>
+          <name>yarn.nodemanager.container-executor.class</name>
+          <value>org.apache.hadoop.yarn.server.nodemanager.WindowsSecureContainerExecutor</value>
+        </property>
+
+        <property>
+          <name>yarn.nodemanager.windows-secure-container-executor.group</name>
+          <value>yarn</value>
+        </property>
+```
+   
+  The hadoopwinutilsvc uses `%HADOOP_HOME%\etc\hadoop\wsce_site.xml` to configure access to the privileged operations.
+
+```xml
+<property>
+ <name>yarn.nodemanager.windows-secure-container-executor.impersonate.allowed</name>
+  <value>HadoopUsers</value>
+</property>
+
+<property>
+  <name>yarn.nodemanager.windows-secure-container-executor.impersonate.denied</name>
+  <value>HadoopServices,Administrators</value>
+</property>
+
+<property>
+  <name>yarn.nodemanager.windows-secure-container-executor.allowed</name>
+  <value>nodemanager</value>
+</property>
+
+<property>
+  <name>yarn.nodemanager.windows-secure-container-executor.local-dirs</name>
+  <value>nm-local-dir, nm-log-dirs</value>
+</property>
+
+<property>
+  <name>yarn.nodemanager.windows-secure-container-executor.job-name</name>
+  <value>nodemanager-job-name</value>
+</property>  
+```
+
+  `yarn.nodemanager.windows-secure-container-executor.allowed` should contain the name of the service account running the nodemanager. This user will be allowed to access the hadoopwinutilsvc functions.
+
+  `yarn.nodemanager.windows-secure-container-executor.impersonate.allowed` should contain the users that are allowed to create containers in the cluster. These users will be allowed to be impersonated by hadoopwinutilsvc.
+
+  `yarn.nodemanager.windows-secure-container-executor.impersonate.denied` should contain users that are explicitly forbidden from creating containers. hadoopwinutilsvc will refuse to impersonate these users.
+
+  `yarn.nodemanager.windows-secure-container-executor.local-dirs` should contain the nodemanager local dirs. hadoopwinutilsvc will allow only file operations under these directories. This should contain the same values as `$yarn.nodemanager.local-dirs, $yarn.nodemanager.log-dirs`, but note that hadoopwinutilsvc XML configuration processing does not do substitutions, so the value must be the final value. All paths must be absolute and no environment variable substitution will be performed. The paths are compared using a locale-invariant, case-insensitive string comparison; the file path being validated must start with one of the paths listed in the local-dirs configuration. Use comma as the path separator: `,`
+
+  `yarn.nodemanager.windows-secure-container-executor.job-name` should contain a Windows NT job name that all containers should be added to. This configuration is optional. If not set, the container is not added to a global NodeManager job. Normally this should be set to the job that the NM is assigned to, so that killing the NM also kills all containers. Hadoopwinutilsvc will not attempt to create this job; the job must exist when the container is launched. If the value is set and the job does not exist, container launch will fail with error 2 `The system cannot find the file specified`. Note that this global NM job is not related to the container job, which always gets created for each container and is named after the container ID. This setting controls a global job that spans all containers and the parent NM, and as such it requires nested jobs. Nested jobs are available only post Windows 8 and Windows Server 2012.
+
+####Useful Links
+
+  * [Exploring S4U Kerberos Extensions in Windows Server 2003](http://msdn.microsoft.com/en-us/magazine/cc188757.aspx)
+
+  * [Nested Jobs](http://msdn.microsoft.com/en-us/library/windows/desktop/hh448388.aspx)
+
+  * [Winutils needs ability to create task as domain user](https://issues.apache.org/jira/browse/YARN-1063)
+
+  * [Implement secure Windows Container Executor](https://issues.apache.org/jira/browse/YARN-1972)
+
+  * [Remove the need to run NodeManager as privileged account for Windows Secure Container Executor](https://issues.apache.org/jira/browse/YARN-2198)
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
new file mode 100644
index 0000000..4889936
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServer.md
@@ -0,0 +1,231 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+YARN Timeline Server
+====================
+
+* [Overview](#Overview)
+* [Current Status](#Current_Status)
+* [Basic Configuration](#Basic_Configuration)
+* [Advanced Configuration](#Advanced_Configuration)
+* [Generic-data related Configuration](#Generic-data_related_Configuration)
+* [Per-framework-data related Configuration](#Per-framework-data_related_Configuration)
+* [Running Timeline server](#Running_Timeline_server)
+* [Accessing generic-data via command-line](#Accessing_generic-data_via_command-line)
+* [Publishing of per-framework data by applications](#Publishing_of_per-framework_data_by_applications)
+
+Overview
+--------
+
+Storage and retrieval of applications' current as well as historic information in a generic fashion is solved in YARN through the Timeline Server (previously also called the Generic Application History Server). It has two responsibilities:
+
+* Generic information about completed applications
+    
+    Generic information includes application-level data like queue name, user information etc. in the ApplicationSubmissionContext, list of application-attempts that ran for an application, information about each application-attempt, list of containers run under each application-attempt, and information about each container. Generic data is stored by ResourceManager to a history-store (default implementation on a file-system) and used by the web-UI to display information about completed applications.
+
+* Per-framework information of running and completed applications
+    
+    Per-framework information is completely specific to an application or framework. For example, Hadoop MapReduce framework can include pieces of information like number of map tasks, reduce tasks, counters etc. Application developers can publish the specific information to the Timeline server via TimelineClient from within a client, the ApplicationMaster and/or the application's containers. This information is then queryable via REST APIs for rendering by application/framework specific UIs.
+
+Current Status
+--------------
+
+The Timeline server is a work in progress. The basic storage and retrieval of information, both generic and framework specific, are in place. The Timeline server doesn't work in secure mode yet. The generic information and the per-framework information are today collected and presented separately and thus are not integrated well together. Finally, the per-framework information is only available via RESTful APIs, using JSON type content - the ability to install framework specific UIs in YARN isn't supported yet.
+
+Basic Configuration
+-------------------
+
+Users need to configure the Timeline server before starting it. The simplest configuration you should add in `yarn-site.xml` is to set the hostname of the Timeline server.
+
+```xml
+<property>
+  <description>The hostname of the Timeline service web application.</description>
+  <name>yarn.timeline-service.hostname</name>
+  <value>0.0.0.0</value>
+</property>
+```
+
+Advanced Configuration
+----------------------
+
+In addition to the hostname, admins can also configure whether the service is enabled or not, the ports of the RPC and the web interfaces, and the number of RPC handler threads.
+
+```xml
+<property>
+  <description>Address for the Timeline server to start the RPC server.</description>
+  <name>yarn.timeline-service.address</name>
+  <value>${yarn.timeline-service.hostname}:10200</value>
+</property>
+
+<property>
+  <description>The http address of the Timeline service web application.</description>
+  <name>yarn.timeline-service.webapp.address</name>
+  <value>${yarn.timeline-service.hostname}:8188</value>
+</property>
+
+<property>
+  <description>The https address of the Timeline service web application.</description>
+  <name>yarn.timeline-service.webapp.https.address</name>
+  <value>${yarn.timeline-service.hostname}:8190</value>
+</property>
+
+<property>
+  <description>Handler thread count to serve the client RPC requests.</description>
+  <name>yarn.timeline-service.handler-thread-count</name>
+  <value>10</value>
+</property>
+
+<property>
+  <description>Enables cross-origin support (CORS) for web services where
+  cross-origin web response headers are needed. For example, javascript making
+  a web services request to the timeline server.</description>
+  <name>yarn.timeline-service.http-cross-origin.enabled</name>
+  <value>false</value>
+</property>
+
+<property>
+  <description>Comma separated list of origins that are allowed for web
+  services needing cross-origin (CORS) support. Wildcards (*) and patterns
+  allowed</description>
+  <name>yarn.timeline-service.http-cross-origin.allowed-origins</name>
+  <value>*</value>
+</property>
+
+<property>
+  <description>Comma separated list of methods that are allowed for web
+  services needing cross-origin (CORS) support.</description>
+  <name>yarn.timeline-service.http-cross-origin.allowed-methods</name>
+  <value>GET,POST,HEAD</value>
+</property>
+
+<property>
+  <description>Comma separated list of headers that are allowed for web
+  services needing cross-origin (CORS) support.</description>
+  <name>yarn.timeline-service.http-cross-origin.allowed-headers</name>
+  <value>X-Requested-With,Content-Type,Accept,Origin</value>
+</property>
+
+<property>
+  <description>The number of seconds a pre-flighted request can be cached
+  for web services needing cross-origin (CORS) support.</description>
+  <name>yarn.timeline-service.http-cross-origin.max-age</name>
+  <value>1800</value>
+</property>
+```
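+
+With CORS enabled, browsers send a pre-flight OPTIONS request before making cross-origin calls. As a rough command-line check of the filter's behavior (a sketch, not an authoritative test; the host, port, and origin below are placeholders), one can issue the pre-flight by hand and inspect the Access-Control-* headers in the response:
+
+      $ curl -i -X OPTIONS \
+          -H "Origin: http://ui.example.com" \
+          -H "Access-Control-Request-Method: GET" \
+          "http://timelinehost.domain.com:8188/ws/v1/timeline/"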
+
+Generic-data related Configuration
+----------------------------------
+
+Users can specify whether the generic data collection is enabled or not, and also choose the storage-implementation class for the generic data. There are more configurations related to generic data collection, and users can refer to `yarn-default.xml` for all of them.
+
+```xml
+<property>
+  <description>Indicate to ResourceManager as well as clients whether
+  history-service is enabled or not. If enabled, ResourceManager starts
+  recording historical data that the Timeline service can consume. Similarly,
+  clients can redirect to the history service when applications
+  finish if this is enabled.</description>
+  <name>yarn.timeline-service.generic-application-history.enabled</name>
+  <value>false</value>
+</property>
+
+<property>
+  <description>Store class name for history store, defaulting to file system
+  store</description>
+  <name>yarn.timeline-service.generic-application-history.store-class</name>
+  <value>org.apache.hadoop.yarn.server.applicationhistoryservice.FileSystemApplicationHistoryStore</value>
+</property>
+```
+
+Per-framework-data related Configuration
+----------------------------------------
+
+Users can specify whether per-framework data service is enabled or not, choose the store implementation for the per-framework data, and tune the retention of the per-framework data. There are more configurations related to per-framework data service, and users can refer to `yarn-default.xml` for all of them.
+
+```xml
+<property>
+  <description>Indicate to clients whether Timeline service is enabled or not.
+  If enabled, the TimelineClient library used by end-users will post entities
+  and events to the Timeline server.</description>
+  <name>yarn.timeline-service.enabled</name>
+  <value>true</value>
+</property>
+
+<property>
+  <description>Store class name for timeline store.</description>
+  <name>yarn.timeline-service.store-class</name>
+  <value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
+</property>
+
+<property>
+  <description>Enable age off of timeline store data.</description>
+  <name>yarn.timeline-service.ttl-enable</name>
+  <value>true</value>
+</property>
+
+<property>
+  <description>Time to live for timeline store data in milliseconds.</description>
+  <name>yarn.timeline-service.ttl-ms</name>
+  <value>604800000</value>
+</property>
+```
+
+Running Timeline server
+-----------------------
+
+Assuming all the aforementioned configurations are set properly, admins can start the Timeline server/history service with the following command:
+
+      $ yarn timelineserver
+
+Or users can start the Timeline server / history service as a daemon:
+
+      $ yarn --daemon start timelineserver
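+
+Once the server is up, a quick sanity check (a sketch; localhost and the default web port 8188 from the configuration above are assumed) is to query the root of the Timeline web services, which should return a small "about" response:
+
+      $ curl "http://localhost:8188/ws/v1/timeline/"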
+
+Accessing generic-data via command-line
+---------------------------------------
+
+Users can access applications' generic historic data via the command line as below. Note that the same commands are usable to obtain the corresponding information about running applications.
+
+```
+      $ yarn application -status <Application ID>
+      $ yarn applicationattempt -list <Application ID>
+      $ yarn applicationattempt -status <Application Attempt ID>
+      $ yarn container -list <Application Attempt ID>
+      $ yarn container -status <Container ID>
+```
+
+Publishing of per-framework data by applications
+------------------------------------------------
+
+Developers can define what information they want to record for their applications by composing `TimelineEntity` and `TimelineEvent` objects, and put the entities and events to the Timeline server via `TimelineClient`. Following is an example:
+
+```java
+// Create and start the Timeline client
+TimelineClient client = TimelineClient.createTimelineClient();
+client.init(conf);
+client.start();
+
+// Compose the entity; the entity type and id below are illustrative
+// placeholders, not names defined by YARN
+TimelineEntity entity = new TimelineEntity();
+entity.setEntityType("MY_APPLICATION");
+entity.setEntityId("my-app-1");
+entity.setStartTime(System.currentTimeMillis());
+
+// Put the entity to the Timeline server
+try {
+  TimelinePutResponse response = client.putEntities(entity);
+} catch (IOException e) {
+  // Handle the exception
+} catch (YarnException e) {
+  // Handle the exception
+}
+
+// Stop the Timeline client
+client.stop();
+```
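+
+Entities posted this way can be read back through the Timeline server's REST interface. As a hedged sketch (assuming the illustrative entity type `MY_APPLICATION` and id `my-app-1` from the example above, and the default web port 8188):
+
+      $ curl "http://<timeline server host>:8188/ws/v1/timeline/MY_APPLICATION/my-app-1"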

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WebApplicationProxy.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WebApplicationProxy.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WebApplicationProxy.md
new file mode 100644
index 0000000..8d6187d
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WebApplicationProxy.md
@@ -0,0 +1,24 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Web Application Proxy
+=====================
+
+The Web Application Proxy is part of YARN. By default it will run as part of the Resource Manager(RM), but can be configured to run in stand alone mode. The reason for the proxy is to reduce the possibility of web based attacks through YARN.
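+
+When running in stand alone mode, the proxy's bind address is controlled by `yarn.web-proxy.address` in `yarn-site.xml`; if this is unset, or matches the ResourceManager's web address, the proxy runs inside the RM instead. A minimal sketch (the host and port are placeholders):
+
+```xml
+<property>
+  <description>The address for the web proxy as HOST:PORT. If this is not
+  given, or if it matches the ResourceManager address, the proxy runs as
+  part of the ResourceManager.</description>
+  <name>yarn.web-proxy.address</name>
+  <value>proxyhost.domain.com:9099</value>
+</property>
+```
+
+The stand alone proxy can then be started with `yarn proxyserver`, or as a daemon with `yarn --daemon start proxyserver`.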
+
+In YARN the Application Master(AM) has the responsibility to provide a web UI and to send that link to the RM. This opens up a number of potential issues. The RM runs as a trusted user, and people visiting that web address will treat it, and the links it provides to them, as trusted, when in reality the AM is running as a non-trusted user, and the links it gives to the RM could point to anything malicious or otherwise. The Web Application Proxy mitigates this risk by warning users who do not own the given application that they are connecting to an untrusted site.
+
+In addition, the proxy tries to reduce the impact that a malicious AM could have on a user. It does this primarily by stripping out the user's cookies and replacing them with a single cookie providing the user name of the logged-in user. This is because most web-based authentication systems identify a user based on a cookie. Handing that cookie to an untrusted application would open up a potential exploit. If the cookie is designed properly the potential should be fairly minimal, but this simply reduces that attack vector. The current proxy implementation does nothing to prevent the AM from providing links to malicious external sites, nor does it prevent malicious javascript code from running. In fact javascript can be used to get the cookies, so stripping the cookies from the request has minimal benefit at this time.
+
+In the future we hope to address the attack vectors described above and make attaching to an AM's web UI safer.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WebServicesIntro.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WebServicesIntro.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WebServicesIntro.md
new file mode 100644
index 0000000..0e89a50
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/WebServicesIntro.md
@@ -0,0 +1,569 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop YARN - Introduction to the web services REST API's
+==========================================================
+
+* [Overview](#Overview)
+* [URI's](#URIs)
+* [HTTP Requests](#HTTP_Requests)
+    * [Summary of HTTP operations](#Summary_of_HTTP_operations)
+    * [Security](#Security)
+    * [Headers Supported](#Headers_Supported)
+* [HTTP Responses](#HTTP_Responses)
+    * [Compression](#Compression)
+    * [Response Formats](#Response_Formats)
+    * [Response Errors](#Response_Errors)
+    * [Response Examples](#Response_Examples)
+* [Sample Usage](#Sample_Usage)
+
+Overview
+--------
+
+The Hadoop YARN web service REST APIs are a set of URI resources that give access to the cluster, nodes, applications, and application historical information. The URI resources are grouped into APIs based on the type of information returned. Some URI resources return collections while others return singletons.
+
+URI's
+-----
+
+The URIs for the REST-based Web services have the following syntax:
+
+      http://{http address of service}/ws/{version}/{resourcepath}
+
+The elements in this syntax are as follows:
+
+      {http address of service} - The http address of the service to get information about. 
+                                  Currently supported are the ResourceManager, NodeManager, 
+                                  MapReduce application master, and history server.
+      {version} - The version of the APIs. In this release, the version is v1.
+      {resourcepath} - A path that defines a singleton resource or a collection of resources. 
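+
+For example, putting these elements together, the ResourceManager's cluster information resource is addressed as:
+
+      http://<rm http address:port>/ws/v1/cluster/info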
+
+HTTP Requests
+-------------
+
+To invoke a REST API, your application calls an HTTP operation on the URI associated with a resource.
+
+### Summary of HTTP operations
+
+Currently only GET is supported. It retrieves information about the resource specified.
+
+### Security
+
+The web service REST API's go through the same security as the web UI. If your cluster administrators have filters enabled you must authenticate via the mechanism they specified.
+
+### Headers Supported
+
+      * Accept 
+      * Accept-Encoding
+
+Currently the only header fields used are `Accept` and `Accept-Encoding`. `Accept` supports XML and JSON as the response types you can request. `Accept-Encoding` currently supports only the gzip format and will return gzip-compressed output if it is specified; otherwise output is uncompressed. All other header fields are ignored.
+
+HTTP Responses
+--------------
+
+The next few sections describe some of the syntax and other details of the HTTP Responses of the web service REST APIs.
+
+### Compression
+
+This release supports gzip compression if you specify gzip in the Accept-Encoding header of the HTTP request (Accept-Encoding: gzip).
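+
+For example, curl's `--compressed` flag both sets the Accept-Encoding header and transparently decompresses the response on the client side (the host is a placeholder):
+
+      curl --compressed "http://<rm http address:port>/ws/v1/cluster/info"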
+
+### Response Formats
+
+This release of the web service REST APIs supports responses in JSON and XML formats. JSON is the default. To set the response format, you can specify the format in the Accept header of the HTTP request.
+
+As specified in HTTP Response Codes, the response body can contain the data that represents the resource or an error message. In the case of success, the response body is in the selected format, either JSON or XML. In the case of error, the response body is in either JSON or XML based on the format requested. The Content-Type header of the response contains the format requested. If the application requests an unsupported format, the response status code is 500. Note that the order of the fields within the response body is not specified and might change. Also, additional fields might be added to a response body. Therefore, your applications should use parsing routines that can extract data from a response body in any order.
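+
+For example, to request the same resource as XML instead of the default JSON (the host is a placeholder):
+
+      curl -H "Accept: application/xml" "http://<rm http address:port>/ws/v1/cluster/info"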
+
+### Response Errors
+
+After making an HTTP request, an application should check the response status code to verify success or detect an error. If the response status code indicates an error, the response body contains an error message. The first field is the exception type; currently only RemoteException is returned. The following table lists the items within the RemoteException error message:
+
+|      Item | Data Type |          Description |
+|:---- |:---- |:---- |
+|   exception |   String |         Exception type |
+| javaClassName |   String |  Java class name of exception |
+|    message |   String | Detailed message of exception |
+
+### Response Examples
+
+#### JSON response with single resource
+
+HTTP Request: GET http://rmhost.domain:8088/ws/v1/cluster/apps/application\_1324057493980\_0001
+
+Response Status Line: HTTP/1.1 200 OK
+
+Response Header:
+
+      HTTP/1.1 200 OK
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+  app":
+  {
+    "id":"application_1324057493980_0001",
+    "user":"user1",
+    "name":"",
+    "queue":"default",
+    "state":"ACCEPTED",
+    "finalStatus":"UNDEFINED",
+    "progress":0,
+    "trackingUI":"UNASSIGNED",
+    "diagnostics":"",
+    "clusterId":1324057493980,
+    "startedTime":1324057495921,
+    "finishedTime":0,
+    "elapsedTime":2063,
+    "amContainerLogs":"http:\/\/amNM:2\/node\/containerlogs\/container_1324057493980_0001_01_000001",
+    "amHostHttpAddress":"amNM:2"
+  }
+}
+```
+
+#### JSON response with Error response
+
+Here we request information about an application that doesn't exist yet.
+
+HTTP Request: GET http://rmhost.domain:8088/ws/v1/cluster/apps/application\_1324057493980\_9999
+
+Response Status Line: HTTP/1.1 404 Not Found
+
+Response Header:
+
+      HTTP/1.1 404 Not Found
+      Content-Type: application/json
+      Transfer-Encoding: chunked
+      Server: Jetty(6.1.26)
+
+Response Body:
+
+```json
+{
+   "RemoteException" : {
+      "javaClassName" : "org.apache.hadoop.yarn.webapp.NotFoundException",
+      "exception" : "NotFoundException",
+      "message" : "java.lang.Exception: app with id: application_1324057493980_9999 not found"
+   }
+}
+```
+
+Sample Usage
+-------------
+
+You can use any number of ways/languages to use the web services REST API's. This example uses the curl command line interface to do the REST GET calls.
+
+In this example, a user submits a MapReduce application to the ResourceManager using a command like:
+
+      hadoop jar hadoop-mapreduce-test.jar sleep -Dmapred.job.queue.name=a1 -m 1 -r 1 -rt 1200000 -mt 20
+
+The client prints information about the job submitted along with the application id, similar to:
+
+    12/01/18 04:25:15 INFO mapred.ResourceMgrDelegate: Submitted application application_1326821518301_0010 to ResourceManager at host.domain.com/10.10.10.10:8032
+    12/01/18 04:25:15 INFO mapreduce.Job: Running job: job_1326821518301_0010
+    12/01/18 04:25:21 INFO mapred.ClientServiceDelegate: The url to track the job: host.domain.com:8088/proxy/application_1326821518301_0010/
+    12/01/18 04:25:22 INFO mapreduce.Job: Job job_1326821518301_0010 running in uber mode : false
+    12/01/18 04:25:22 INFO mapreduce.Job:  map 0% reduce 0%
+
+The user then wishes to track the application. The user starts by getting information about the application from the ResourceManager. Use the --compressed option to request compressed output; curl handles uncompressing on the client side.
+
+    curl --compressed -H "Accept: application/json" -X GET "http://host.domain.com:8088/ws/v1/cluster/apps/application_1326821518301_0010" 
+
+Output:
+
+```json
+{
+   "app" : {
+      "finishedTime" : 0,
+      "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326821518301_0010_01_000001",
+      "trackingUI" : "ApplicationMaster",
+      "state" : "RUNNING",
+      "user" : "user1",
+      "id" : "application_1326821518301_0010",
+      "clusterId" : 1326821518301,
+      "finalStatus" : "UNDEFINED",
+      "amHostHttpAddress" : "host.domain.com:8042",
+      "progress" : 82.44703,
+      "name" : "Sleep job",
+      "startedTime" : 1326860715335,
+      "elapsedTime" : 31814,
+      "diagnostics" : "",
+      "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326821518301_0010/",
+      "queue" : "a1"
+   }
+}
+```
+
+The user then wishes to get more details about the running application and goes directly to the MapReduce application master for this application. The ResourceManager lists the trackingUrl that can be used for this application: http://host.domain.com:8088/proxy/application\_1326821518301\_0010. The user can either open this in a web browser or use the web service REST API's. Here the user uses the web services REST API's to get the list of jobs this MapReduce application master is running:
+
+     curl --compressed -H "Accept: application/json" -X GET "http://host.domain.com:8088/proxy/application_1326821518301_0010/ws/v1/mapreduce/jobs"
+
+Output:
+
+```json
+{
+   "jobs" : {
+      "job" : [
+         {
+            "runningReduceAttempts" : 1,
+            "reduceProgress" : 72.104515,
+            "failedReduceAttempts" : 0,
+            "newMapAttempts" : 0,
+            "mapsRunning" : 0,
+            "state" : "RUNNING",
+            "successfulReduceAttempts" : 0,
+            "reducesRunning" : 1,
+            "acls" : [
+               {
+                  "value" : " ",
+                  "name" : "mapreduce.job.acl-modify-job"
+               },
+               {
+                  "value" : " ",
+                  "name" : "mapreduce.job.acl-view-job"
+               }
+            ],
+            "reducesPending" : 0,
+            "user" : "user1",
+            "reducesTotal" : 1,
+            "mapsCompleted" : 1,
+            "startTime" : 1326860720902,
+            "id" : "job_1326821518301_10_10",
+            "successfulMapAttempts" : 1,
+            "runningMapAttempts" : 0,
+            "newReduceAttempts" : 0,
+            "name" : "Sleep job",
+            "mapsPending" : 0,
+            "elapsedTime" : 64432,
+            "reducesCompleted" : 0,
+            "mapProgress" : 100,
+            "diagnostics" : "",
+            "failedMapAttempts" : 0,
+            "killedReduceAttempts" : 0,
+            "mapsTotal" : 1,
+            "uberized" : false,
+            "killedMapAttempts" : 0,
+            "finishTime" : 0
+         }
+      ]
+   }
+}
+```
+
+The user then wishes to get the task details about the job with job id job\_1326821518301\_10\_10 that was listed above.
+
+     curl --compressed -H "Accept: application/json" -X GET "http://host.domain.com:8088/proxy/application_1326821518301_0010/ws/v1/mapreduce/jobs/job_1326821518301_10_10/tasks" 
+
+Output:
+
+```json
+{
+   "tasks" : {
+      "task" : [
+         {
+            "progress" : 100,
+            "elapsedTime" : 5059,
+            "state" : "SUCCEEDED",
+            "startTime" : 1326860725014,
+            "id" : "task_1326821518301_10_10_m_0",
+            "type" : "MAP",
+            "successfulAttempt" : "attempt_1326821518301_10_10_m_0_0",
+            "finishTime" : 1326860730073
+         },
+         {
+            "progress" : 72.104515,
+            "elapsedTime" : 0,
+            "state" : "RUNNING",
+            "startTime" : 1326860732984,
+            "id" : "task_1326821518301_10_10_r_0",
+            "type" : "REDUCE",
+            "successfulAttempt" : "",
+            "finishTime" : 0
+         }
+      ]
+   }
+}
+```
+
+The map task has finished but the reduce task is still running. The user wishes to get the task attempt information for the reduce task task\_1326821518301\_10\_10\_r\_0. Note that the Accept header isn't really required here since JSON is the default output format:
+
+      curl --compressed -X GET "http://host.domain.com:8088/proxy/application_1326821518301_0010/ws/v1/mapreduce/jobs/job_1326821518301_10_10/tasks/task_1326821518301_10_10_r_0/attempts"
+
+Output:
+
+```json
+{
+   "taskAttempts" : {
+      "taskAttempt" : [
+         {
+            "elapsedMergeTime" : 158,
+            "shuffleFinishTime" : 1326860735378,
+            "assignedContainerId" : "container_1326821518301_0010_01_000003",
+            "progress" : 72.104515,
+            "elapsedTime" : 0,
+            "state" : "RUNNING",
+            "elapsedShuffleTime" : 2394,
+            "mergeFinishTime" : 1326860735536,
+            "rack" : "/10.10.10.0",
+            "elapsedReduceTime" : 0,
+            "nodeHttpAddress" : "host.domain.com:8042",
+            "type" : "REDUCE",
+            "startTime" : 1326860732984,
+            "id" : "attempt_1326821518301_10_10_r_0_0",
+            "finishTime" : 0
+         }
+      ]
+   }
+}
+```
+
+The reduce attempt is still running and the user wishes to see the current counter values for that attempt:
+
+     curl --compressed -H "Accept: application/json"  -X GET "http://host.domain.com:8088/proxy/application_1326821518301_0010/ws/v1/mapreduce/jobs/job_1326821518301_10_10/tasks/task_1326821518301_10_10_r_0/attempts/attempt_1326821518301_10_10_r_0_0/counters" 
+
+Output:
+
+```json
+{
+   "JobTaskAttemptCounters" : {
+      "taskAttemptCounterGroup" : [
+         {
+            "counterGroupName" : "org.apache.hadoop.mapreduce.FileSystemCounter",
+            "counter" : [
+               {
+                  "value" : 4216,
+                  "name" : "FILE_BYTES_READ"
+               }, 
+               {
+                  "value" : 77151,
+                  "name" : "FILE_BYTES_WRITTEN"
+               }, 
+               {
+                  "value" : 0,
+                  "name" : "FILE_READ_OPS"
+               },
+               {
+                  "value" : 0,
+                  "name" : "FILE_LARGE_READ_OPS"
+               },
+               {
+                  "value" : 0,
+                  "name" : "FILE_WRITE_OPS"
+               },
+               {
+                  "value" : 0,
+                  "name" : "HDFS_BYTES_READ"
+               },
+               {
+                  "value" : 0,
+                  "name" : "HDFS_BYTES_WRITTEN"
+               },
+               {
+                  "value" : 0,
+                  "name" : "HDFS_READ_OPS"
+               },
+               {
+                  "value" : 0,
+                  "name" : "HDFS_LARGE_READ_OPS"
+               },
+               {
+                  "value" : 0,
+                  "name" : "HDFS_WRITE_OPS"
+               }
+            ]  
+         }, 
+         {
+            "counterGroupName" : "org.apache.hadoop.mapreduce.TaskCounter",
+            "counter" : [
+               {
+                  "value" : 0,
+                  "name" : "COMBINE_INPUT_RECORDS"
+               }, 
+               {
+                  "value" : 0,
+                  "name" : "COMBINE_OUTPUT_RECORDS"
+               }, 
+               {  
+                  "value" : 1767,
+                  "name" : "REDUCE_INPUT_GROUPS"
+               },
+               {  
+                  "value" : 25104,
+                  "name" : "REDUCE_SHUFFLE_BYTES"
+               },
+               {
+                  "value" : 1767,
+                  "name" : "REDUCE_INPUT_RECORDS"
+               },
+               {
+                  "value" : 0,
+                  "name" : "REDUCE_OUTPUT_RECORDS"
+               },
+               {
+                  "value" : 0,
+                  "name" : "SPILLED_RECORDS"
+               },
+               {
+                  "value" : 1,
+                  "name" : "SHUFFLED_MAPS"
+               },
+               {
+                  "value" : 0,
+                  "name" : "FAILED_SHUFFLE"
+               },
+               {
+                  "value" : 1,
+                  "name" : "MERGED_MAP_OUTPUTS"
+               },
+               {
+                  "value" : 50,
+                  "name" : "GC_TIME_MILLIS"
+               },
+               {
+                  "value" : 1580,
+                  "name" : "CPU_MILLISECONDS"
+               },
+               {
+                  "value" : 141320192,
+                  "name" : "PHYSICAL_MEMORY_BYTES"
+               },
+              {
+                  "value" : 1118552064,
+                  "name" : "VIRTUAL_MEMORY_BYTES"
+               }, 
+               {  
+                  "value" : 73728000,
+                  "name" : "COMMITTED_HEAP_BYTES"
+               }
+            ]
+         },
+         {  
+            "counterGroupName" : "Shuffle Errors",
+            "counter" : [
+               {  
+                  "value" : 0,
+                  "name" : "BAD_ID"
+               },
+               {  
+                  "value" : 0,
+                  "name" : "CONNECTION"
+               },
+               {  
+                  "value" : 0,
+                  "name" : "IO_ERROR"
+               },
+               {  
+                  "value" : 0,
+                  "name" : "WRONG_LENGTH"
+               },
+               {  
+                  "value" : 0,
+                  "name" : "WRONG_MAP"
+               },
+               {  
+                  "value" : 0,
+                  "name" : "WRONG_REDUCE"
+               }
+            ]
+         },
+         {  
+            "counterGroupName" : "org.apache.hadoop.mapreduce.lib.output.FileOutputFormatCounter",
+            "counter" : [
+              {  
+                  "value" : 0,
+                  "name" : "BYTES_WRITTEN"
+               }
+            ]
+         }
+      ],
+      "id" : "attempt_1326821518301_10_10_r_0_0"
+   }
+}
+```
+
+The job finishes and the user wishes to get the final job information from the history server for this job.
+
+      curl --compressed -X GET "http://host.domain.com:19888/ws/v1/history/mapreduce/jobs/job_1326821518301_10_10" 
+
+Output:
+
+```json
+{
+   "job" : {
+      "avgReduceTime" : 1250784,
+      "failedReduceAttempts" : 0,
+      "state" : "SUCCEEDED",
+      "successfulReduceAttempts" : 1,
+      "acls" : [
+         {
+            "value" : " ",
+            "name" : "mapreduce.job.acl-modify-job"
+         },
+         {
+            "value" : " ",
+            "name" : "mapreduce.job.acl-view-job"
+         }
+      ],
+      "user" : "user1",
+      "reducesTotal" : 1,
+      "mapsCompleted" : 1,
+      "startTime" : 1326860720902,
+      "id" : "job_1326821518301_10_10",
+      "avgMapTime" : 5059,
+      "successfulMapAttempts" : 1,
+      "name" : "Sleep job",
+      "avgShuffleTime" : 2394,
+      "reducesCompleted" : 1,
+      "diagnostics" : "",
+      "failedMapAttempts" : 0,
+      "avgMergeTime" : 2552,
+      "killedReduceAttempts" : 0,
+      "mapsTotal" : 1,
+      "queue" : "a1",
+      "uberized" : false,
+      "killedMapAttempts" : 0,
+      "finishTime" : 1326861986164
+   }
+}
+```
+
+The user also gets the final application information from the ResourceManager.
+
+      curl --compressed -H "Accept: application/json" -X GET "http://host.domain.com:8088/ws/v1/cluster/apps/application_1326821518301_0010" 
+
+Output:
+
+```json
+{
+   "app" : {
+      "finishedTime" : 1326861991282,
+      "amContainerLogs" : "http://host.domain.com:8042/node/containerlogs/container_1326821518301_0010_01_000001",
+      "trackingUI" : "History",
+      "state" : "FINISHED",
+      "user" : "user1",
+      "id" : "application_1326821518301_0010",
+      "clusterId" : 1326821518301,
+      "finalStatus" : "SUCCEEDED",
+      "amHostHttpAddress" : "host.domain.com:8042",
+      "progress" : 100,
+      "name" : "Sleep job",
+      "startedTime" : 1326860715335,
+      "elapsedTime" : 1275947,
+      "diagnostics" : "",
+      "trackingUrl" : "http://host.domain.com:8088/proxy/application_1326821518301_0010/jobhistory/job/job_1326821518301_10_10",
+      "queue" : "a1"
+   }
+}
+```
\ No newline at end of file


[19/43] hadoop git commit: HADOOP-11634. Description of webhdfs' principal/keytab should switch places each other. Contributed by Brahma Reddy Battula.

Posted by zj...@apache.org.
HADOOP-11634. Description of webhdfs' principal/keytab should switch places each other. Contributed by Brahma Reddy Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e9ac88aa
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e9ac88aa
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e9ac88aa

Branch: refs/heads/YARN-2928
Commit: e9ac88aac77dd98508854de445793c2180466ee8
Parents: aa55fd3
Author: Tsuyoshi Ozawa <oz...@apache.org>
Authored: Mon Mar 2 04:18:07 2015 +0900
Committer: Tsuyoshi Ozawa <oz...@apache.org>
Committed: Mon Mar 2 04:18:07 2015 +0900

----------------------------------------------------------------------
 hadoop-common-project/hadoop-common/CHANGES.txt                  | 3 +++
 .../hadoop-common/src/site/markdown/SecureMode.md                | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9ac88aa/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index 3c4dc99..f1d48bc 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1019,6 +1019,9 @@ Release 2.7.0 - UNRELEASED
     HADOOP-9922. hadoop windows native build will fail in 32 bit machine.
     (Kiran Kumar M R via cnauroth)
 
+    HADOOP-11634. Description of webhdfs' principal/keytab should switch places
+    each other. (Brahma Reddy Battula via ozawa)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9ac88aa/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md b/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
index 0004d25..cb27e29 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
@@ -289,8 +289,8 @@ The following properties should be in the `core-site.xml` of all the nodes in th
 
 | Parameter | Value | Notes |
 |:---- |:---- |:---- |
-| `dfs.web.authentication.kerberos.principal` | http/\_HOST@REALM.TLD | Kerberos keytab file for the WebHDFS. |
-| `dfs.web.authentication.kerberos.keytab` | */etc/security/keytab/http.service.keytab* | Kerberos principal name for WebHDFS. |
+| `dfs.web.authentication.kerberos.principal` | http/\_HOST@REALM.TLD | Kerberos principal name for the WebHDFS. |
+| `dfs.web.authentication.kerberos.keytab` | */etc/security/keytab/http.service.keytab* | Kerberos keytab file for WebHDFS. |
 
 ### ResourceManager
 


[41/43] hadoop git commit: Merge remote-tracking branch 'apache/trunk' into YARN-2928

Posted by zj...@apache.org.
Merge remote-tracking branch 'apache/trunk' into YARN-2928

Conflicts:
	hadoop-yarn-project/CHANGES.txt


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/e4d81ebb
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/e4d81ebb
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/e4d81ebb

Branch: refs/heads/YARN-2928
Commit: e4d81ebb335a928d4806cffe556db35208cfd9a9
Parents: bf08f7f 1004473
Author: Zhijie Shen <zj...@apache.org>
Authored: Tue Mar 3 11:11:41 2015 -0800
Committer: Zhijie Shen <zj...@apache.org>
Committed: Tue Mar 3 11:11:41 2015 -0800

----------------------------------------------------------------------
 .../classification/tools/StabilityOptions.java  |    5 +-
 .../AltKerberosAuthenticationHandler.java       |    6 +-
 .../authentication/KerberosTestUtils.java       |   40 +-
 .../authentication/util/TestKerberosUtil.java   |   14 +-
 hadoop-common-project/hadoop-common/CHANGES.txt |   26 +-
 .../org/apache/hadoop/conf/Configuration.java   |    6 +-
 .../org/apache/hadoop/crypto/CipherSuite.java   |    3 +-
 .../hadoop/crypto/key/JavaKeyStoreProvider.java |    3 +-
 .../hadoop/fs/CommonConfigurationKeys.java      |   17 +-
 .../java/org/apache/hadoop/fs/FileSystem.java   |    7 +-
 .../org/apache/hadoop/fs/FilterFileSystem.java  |    2 +-
 .../java/org/apache/hadoop/fs/StorageType.java  |    3 +-
 .../apache/hadoop/fs/permission/AclEntry.java   |    5 +-
 .../org/apache/hadoop/fs/shell/FsUsage.java     |   12 +-
 .../apache/hadoop/fs/shell/XAttrCommands.java   |    2 +-
 .../org/apache/hadoop/fs/shell/find/Name.java   |    5 +-
 .../io/compress/CompressionCodecFactory.java    |   28 +-
 .../hadoop/metrics2/impl/MetricsConfig.java     |    7 +-
 .../hadoop/metrics2/impl/MetricsSystemImpl.java |    5 +-
 .../hadoop/security/SaslPropertiesResolver.java |    3 +-
 .../apache/hadoop/security/SecurityUtil.java    |   12 +-
 .../hadoop/security/WhitelistBasedResolver.java |    3 +-
 .../security/ssl/FileBasedKeyStoresFactory.java |    4 +-
 .../apache/hadoop/security/ssl/SSLFactory.java  |    5 +-
 .../security/ssl/SSLHostnameVerifier.java       |   10 +-
 .../DelegationTokenAuthenticationHandler.java   |    3 +-
 .../web/DelegationTokenAuthenticator.java       |    3 +-
 .../apache/hadoop/util/ComparableVersion.java   |    3 +-
 .../org/apache/hadoop/util/StringUtils.java     |   40 +-
 .../src/site/markdown/SecureMode.md             |    4 +-
 .../src/site/markdown/ServiceLevelAuth.md       |   17 +-
 .../hadoop/fs/FileSystemContractBaseTest.java   |    4 +-
 .../hadoop/io/compress/TestCodecFactory.java    |    3 +-
 .../java/org/apache/hadoop/ipc/TestIPC.java     |    2 +-
 .../java/org/apache/hadoop/ipc/TestSaslRPC.java |    2 +-
 .../hadoop/security/TestSecurityUtil.java       |   10 +-
 .../security/TestUserGroupInformation.java      |    5 +-
 .../hadoop/test/TimedOutTestsListener.java      |    6 +-
 .../org/apache/hadoop/util/TestStringUtils.java |   21 +
 .../org/apache/hadoop/util/TestWinUtils.java    |    6 +-
 .../java/org/apache/hadoop/nfs/NfsExports.java  |    5 +-
 .../server/CheckUploadContentTypeFilter.java    |    4 +-
 .../hadoop/fs/http/server/FSOperations.java     |    7 +-
 .../http/server/HttpFSParametersProvider.java   |    4 +-
 .../org/apache/hadoop/lib/server/Server.java    |    3 +-
 .../service/hadoop/FileSystemAccessService.java |    6 +-
 .../org/apache/hadoop/lib/wsrs/EnumParam.java   |    2 +-
 .../apache/hadoop/lib/wsrs/EnumSetParam.java    |    3 +-
 .../hadoop/lib/wsrs/ParametersProvider.java     |    3 +-
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt     |   31 +
 .../java/org/apache/hadoop/hdfs/DFSClient.java  |   26 +-
 .../org/apache/hadoop/hdfs/DFSOutputStream.java |   15 +-
 .../apache/hadoop/hdfs/RemoteBlockReader2.java  |   24 +-
 .../org/apache/hadoop/hdfs/XAttrHelper.java     |   19 +-
 .../hadoop/hdfs/protocol/HdfsConstants.java     |    3 +-
 .../datatransfer/DataTransferProtoUtil.java     |   26 +
 .../hadoop/hdfs/server/balancer/Dispatcher.java |    9 +-
 .../BlockStoragePolicySuite.java                |    4 +-
 .../hdfs/server/common/HdfsServerConstants.java |   15 +-
 .../hdfs/server/datanode/DataXceiver.java       |   14 +-
 .../hdfs/server/datanode/StorageLocation.java   |    4 +-
 .../hdfs/server/namenode/FSEditLogLoader.java   |    3 -
 .../hdfs/server/namenode/FSEditLogOp.java       |    3 +-
 .../hadoop/hdfs/server/namenode/FSImage.java    |   10 +-
 .../hdfs/server/namenode/FSNamesystem.java      |   11 +-
 .../namenode/QuotaByStorageTypeEntry.java       |    3 +-
 .../hdfs/server/namenode/SecondaryNameNode.java |    2 +-
 .../hdfs/server/namenode/TransferFsImage.java   |    4 +-
 .../org/apache/hadoop/hdfs/tools/DFSck.java     |   31 +-
 .../org/apache/hadoop/hdfs/tools/GetConf.java   |   17 +-
 .../OfflineEditsVisitorFactory.java             |    7 +-
 .../offlineImageViewer/FSImageHandler.java      |    4 +-
 .../org/apache/hadoop/hdfs/web/AuthFilter.java  |    3 +-
 .../org/apache/hadoop/hdfs/web/ParamFilter.java |    3 +-
 .../hadoop/hdfs/web/WebHdfsFileSystem.java      |    5 +-
 .../hadoop/hdfs/web/resources/EnumParam.java    |    3 +-
 .../hadoop/hdfs/web/resources/EnumSetParam.java |    3 +-
 .../src/main/resources/hdfs-default.xml         |   22 +
 .../src/site/markdown/HDFSCommands.md           |    2 +-
 .../src/site/xdoc/HdfsRollingUpgrade.xml        |   11 +-
 .../org/apache/hadoop/hdfs/DFSTestUtil.java     |   12 +
 .../org/apache/hadoop/hdfs/MiniDFSCluster.java  |   10 +
 .../org/apache/hadoop/hdfs/TestDFSShell.java    |   29 +
 .../hdfs/TestRollingUpgradeDowngrade.java       |   12 +-
 .../TestBlocksWithNotEnoughRacks.java           |    7 +-
 .../datanode/TestHdfsServerConstants.java       |    3 -
 .../hadoop/hdfs/server/namenode/TestFsck.java   |   14 +-
 .../namenode/TestFsckWithMultipleNameNodes.java |   20 +
 .../namenode/TestNameNodeOptionParsing.java     |    8 -
 .../namenode/snapshot/TestSnapshotManager.java  |    6 +-
 .../src/test/resources/testHDFSConf.xml         |    4 +-
 hadoop-mapreduce-project/CHANGES.txt            |   12 +
 .../hadoop/mapred/TaskAttemptListenerImpl.java  |    4 +-
 .../jobhistory/JobHistoryEventHandler.java      |    3 +-
 .../hadoop/mapreduce/v2/app/JobEndNotifier.java |    1 -
 .../v2/app/rm/RMContainerAllocator.java         |   65 +-
 .../v2/app/rm/RMContainerRequestor.java         |   74 +-
 .../mapreduce/v2/app/webapp/AppController.java  |    6 +-
 .../v2/app/rm/TestRMContainerAllocator.java     |  214 ++
 .../apache/hadoop/mapreduce/TypeConverter.java  |    3 +-
 .../apache/hadoop/mapreduce/v2/util/MRApps.java |    6 +-
 .../hadoop/mapreduce/TestTypeConverter.java     |    6 +-
 .../hadoop/filecache/DistributedCache.java      |    2 +-
 .../org/apache/hadoop/mapred/ClusterStatus.java |    4 +-
 .../apache/hadoop/mapred/FileOutputFormat.java  |    2 +-
 .../java/org/apache/hadoop/mapred/IFile.java    |    2 +-
 .../apache/hadoop/mapred/JobACLsManager.java    |    1 -
 .../org/apache/hadoop/mapred/JobClient.java     |    8 +-
 .../java/org/apache/hadoop/mapred/JobConf.java  |   49 +-
 .../java/org/apache/hadoop/mapred/Mapper.java   |    2 +-
 .../org/apache/hadoop/mapred/QueueManager.java  |   30 +-
 .../org/apache/hadoop/mapred/RecordReader.java  |    2 +-
 .../java/org/apache/hadoop/mapred/Reducer.java  |   14 +-
 .../java/org/apache/hadoop/mapred/Task.java     |    2 +-
 .../hadoop/mapred/TaskUmbilicalProtocol.java    |    1 -
 .../apache/hadoop/mapred/lib/ChainMapper.java   |   40 +-
 .../apache/hadoop/mapred/lib/ChainReducer.java  |   44 +-
 .../hadoop/mapred/lib/MultipleOutputs.java      |   29 +-
 .../hadoop/mapred/lib/TokenCountMapper.java     |    2 +-
 .../lib/aggregate/ValueAggregatorJob.java       |    2 +-
 .../lib/aggregate/ValueAggregatorReducer.java   |    3 +-
 .../hadoop/mapred/lib/db/DBInputFormat.java     |    4 +-
 .../org/apache/hadoop/mapreduce/Cluster.java    |    1 +
 .../apache/hadoop/mapreduce/ClusterMetrics.java |    6 +-
 .../apache/hadoop/mapreduce/CryptoUtils.java    |   10 +-
 .../java/org/apache/hadoop/mapreduce/Job.java   |    2 +-
 .../org/apache/hadoop/mapreduce/JobContext.java |    2 -
 .../hadoop/mapreduce/JobSubmissionFiles.java    |    2 +-
 .../apache/hadoop/mapreduce/MRJobConfig.java    |    8 +
 .../org/apache/hadoop/mapreduce/Mapper.java     |    9 +-
 .../org/apache/hadoop/mapreduce/Reducer.java    |   12 +-
 .../counters/FileSystemCounterGroup.java        |    4 +-
 .../mapreduce/filecache/DistributedCache.java   |    9 +-
 .../lib/aggregate/ValueAggregatorJob.java       |    2 +-
 .../hadoop/mapreduce/lib/chain/Chain.java       |    4 +-
 .../hadoop/mapreduce/lib/chain/ChainMapper.java |   10 +-
 .../mapreduce/lib/chain/ChainReducer.java       |   14 +-
 .../hadoop/mapreduce/lib/db/DBInputFormat.java  |    7 +-
 .../hadoop/mapreduce/lib/db/DBWritable.java     |    2 +-
 .../mapreduce/lib/join/TupleWritable.java       |    2 +-
 .../mapreduce/lib/map/MultithreadedMapper.java  |    6 +-
 .../mapreduce/lib/output/FileOutputFormat.java  |    2 +-
 .../mapreduce/lib/output/MultipleOutputs.java   |   11 +-
 .../lib/partition/BinaryPartitioner.java        |    2 +-
 .../hadoop/mapreduce/task/JobContextImpl.java   |    2 -
 .../org/apache/hadoop/mapreduce/tools/CLI.java  |    9 +-
 .../src/main/resources/mapred-default.xml       |   16 +
 .../src/site/markdown/HistoryServerRest.md      |    2 +-
 .../java/org/apache/hadoop/fs/TestDFSIO.java    |   18 +-
 .../org/apache/hadoop/fs/TestFileSystem.java    |    4 +-
 .../org/apache/hadoop/fs/slive/Constants.java   |    6 +-
 .../apache/hadoop/fs/slive/OperationData.java   |    3 +-
 .../apache/hadoop/fs/slive/OperationOutput.java |    4 +-
 .../org/apache/hadoop/fs/slive/SliveTest.java   |    3 +-
 .../java/org/apache/hadoop/io/FileBench.java    |   17 +-
 .../org/apache/hadoop/mapred/TestMapRed.java    |    3 +-
 .../hadoop/mapreduce/RandomTextWriter.java      |    4 +-
 .../apache/hadoop/mapreduce/RandomWriter.java   |    5 +-
 .../apache/hadoop/examples/DBCountPageView.java |    2 +-
 .../hadoop/examples/MultiFileWordCount.java     |    2 +-
 .../apache/hadoop/examples/QuasiMonteCarlo.java |    4 +-
 .../hadoop/examples/RandomTextWriter.java       |    4 +-
 .../apache/hadoop/examples/RandomWriter.java    |    5 +-
 .../apache/hadoop/examples/SecondarySort.java   |    2 +-
 .../org/apache/hadoop/examples/pi/DistBbp.java  |    2 +-
 .../apache/hadoop/examples/pi/math/Modular.java |    2 +-
 .../hadoop/examples/terasort/GenSort.java       |    2 +-
 .../plugin/versioninfo/VersionInfoMojo.java     |    4 +-
 .../fs/azure/AzureNativeFileSystemStore.java    |    4 +-
 .../org/apache/hadoop/tools/CopyListing.java    |   14 +-
 .../java/org/apache/hadoop/tools/DistCp.java    |    4 +-
 .../apache/hadoop/tools/DistCpOptionSwitch.java |    2 +-
 .../org/apache/hadoop/tools/OptionsParser.java  |    2 +-
 .../hadoop/tools/mapred/CopyCommitter.java      |    4 +-
 .../apache/hadoop/tools/mapred/CopyMapper.java  |    5 +-
 .../hadoop/tools/mapred/CopyOutputFormat.java   |    4 +-
 .../tools/mapred/RetriableFileCopyCommand.java  |    3 +-
 .../tools/mapred/UniformSizeInputFormat.java    |    4 +-
 .../tools/mapred/lib/DynamicInputFormat.java    |    4 +-
 .../tools/mapred/lib/DynamicRecordReader.java   |   12 +-
 .../apache/hadoop/tools/util/DistCpUtils.java   |   14 +-
 .../hadoop/tools/util/RetriableCommand.java     |    2 +-
 .../hadoop/tools/util/ThrottledInputStream.java |    8 +-
 .../src/main/resources/distcp-default.xml       |   10 -
 .../java/org/apache/hadoop/tools/DistCpV1.java  |    4 +-
 .../java/org/apache/hadoop/tools/Logalyzer.java |    4 +-
 .../gridmix/GridmixJobSubmissionPolicy.java     |    3 +-
 .../ResourceUsageEmulatorPlugin.java            |    2 +-
 .../fs/swift/http/RestClientBindings.java       |    6 +-
 .../hadoop/fs/swift/http/SwiftRestClient.java   |    6 +-
 .../fs/swift/snative/SwiftNativeFileSystem.java |    6 +-
 .../snative/SwiftNativeFileSystemStore.java     |    6 +-
 .../hadoop/fs/swift/util/SwiftTestUtils.java    |    2 +-
 .../TestSwiftFileSystemExtendedContract.java    |    4 +-
 .../hadoop/tools/rumen/HadoopLogsAnalyzer.java  |   33 +-
 .../apache/hadoop/tools/rumen/InputDemuxer.java |    4 +-
 .../apache/hadoop/tools/rumen/JobBuilder.java   |    2 +-
 .../apache/hadoop/tools/rumen/LoggedTask.java   |    3 +-
 .../hadoop/tools/rumen/LoggedTaskAttempt.java   |    3 +-
 .../util/MapReduceJobPropertiesParser.java      |    5 +-
 .../apache/hadoop/tools/rumen/package-info.java |    8 +-
 .../apache/hadoop/streaming/Environment.java    |    3 +-
 hadoop-yarn-project/CHANGES.txt                 |   18 +
 .../records/ApplicationSubmissionContext.java   |    1 +
 .../hadoop/yarn/client/cli/ApplicationCLI.java  |    7 +-
 .../apache/hadoop/yarn/client/cli/NodeCLI.java  |    3 +-
 .../impl/pb/GetApplicationsRequestPBImpl.java   |    6 +-
 .../pb/ApplicationSubmissionContextPBImpl.java  |    3 +-
 .../records/impl/pb/ResourceRequestPBImpl.java  |    4 +-
 .../org/apache/hadoop/yarn/util/FSDownload.java |    6 +-
 .../hadoop/yarn/webapp/hamlet/HamletGen.java    |    6 +-
 .../registry/client/binding/RegistryUtils.java  |    3 +-
 .../webapp/AHSWebServices.java                  |    4 +-
 .../timeline/webapp/TimelineWebServices.java    |    3 +-
 .../hadoop/yarn/server/webapp/WebServices.java  |   18 +-
 .../hadoop-yarn-server-resourcemanager/pom.xml  |    7 +-
 .../server/resourcemanager/ClientRMService.java |    3 +-
 .../resource/ResourceWeights.java               |    3 +-
 .../scheduler/AbstractYarnScheduler.java        |    9 +
 .../scheduler/AppSchedulingInfo.java            |   33 +-
 .../scheduler/ResourceLimits.java               |   40 +
 .../scheduler/ResourceUsage.java                |   61 +-
 .../scheduler/SchedulerApplicationAttempt.java  |    6 +-
 .../scheduler/capacity/AbstractCSQueue.java     |   24 +-
 .../scheduler/capacity/CSQueue.java             |   11 +-
 .../scheduler/capacity/CSQueueUtils.java        |   48 -
 .../capacity/CapacityHeadroomProvider.java      |   16 +-
 .../scheduler/capacity/CapacityScheduler.java   |   30 +-
 .../CapacitySchedulerConfiguration.java         |    4 +-
 .../scheduler/capacity/LeafQueue.java           |  131 +-
 .../scheduler/capacity/ParentQueue.java         |   53 +-
 .../fair/FairSchedulerConfiguration.java        |    3 +-
 .../scheduler/fair/SchedulingPolicy.java        |    3 +-
 .../server/resourcemanager/webapp/AppBlock.java |   46 +-
 .../server/resourcemanager/webapp/AppPage.java  |    4 +
 .../resourcemanager/webapp/AppsBlock.java       |    5 +-
 .../webapp/FairSchedulerAppsBlock.java          |    5 +-
 .../resourcemanager/webapp/NodesPage.java       |    2 +-
 .../resourcemanager/webapp/RMWebServices.java   |   26 +-
 .../resourcemanager/webapp/dao/AppInfo.java     |   17 +-
 .../yarn/server/resourcemanager/MockAM.java     |   11 +-
 .../scheduler/TestResourceUsage.java            |    2 +-
 .../capacity/TestApplicationLimits.java         |   32 +-
 .../scheduler/capacity/TestCSQueueUtils.java    |  250 --
 .../capacity/TestCapacityScheduler.java         |   85 +-
 .../scheduler/capacity/TestChildQueueOrder.java |   36 +-
 .../scheduler/capacity/TestLeafQueue.java       |  221 +-
 .../scheduler/capacity/TestParentQueue.java     |  106 +-
 .../scheduler/capacity/TestReservations.java    |  100 +-
 .../webapp/TestRMWebAppFairScheduler.java       |   10 +-
 .../webapp/TestRMWebServicesApps.java           |    3 +-
 .../src/site/apt/CapacityScheduler.apt.vm       |  368 ---
 .../src/site/apt/DockerContainerExecutor.apt.vm |  204 --
 .../src/site/apt/FairScheduler.apt.vm           |  483 ---
 .../src/site/apt/NodeManager.apt.vm             |   64 -
 .../src/site/apt/NodeManagerCgroups.apt.vm      |   77 -
 .../src/site/apt/NodeManagerRest.apt.vm         |  645 ----
 .../src/site/apt/NodeManagerRestart.apt.vm      |   86 -
 .../src/site/apt/ResourceManagerHA.apt.vm       |  233 --
 .../src/site/apt/ResourceManagerRest.apt.vm     | 3104 ------------------
 .../src/site/apt/ResourceManagerRestart.apt.vm  |  298 --
 .../src/site/apt/SecureContainer.apt.vm         |  176 -
 .../src/site/apt/TimelineServer.apt.vm          |  260 --
 .../src/site/apt/WebApplicationProxy.apt.vm     |   49 -
 .../src/site/apt/WebServicesIntro.apt.vm        |  593 ----
 .../src/site/apt/WritingYarnApplications.apt.vm |  757 -----
 .../hadoop-yarn-site/src/site/apt/YARN.apt.vm   |   77 -
 .../src/site/apt/YarnCommands.apt.vm            |  369 ---
 .../hadoop-yarn-site/src/site/apt/index.apt.vm  |   82 -
 .../src/site/markdown/CapacityScheduler.md      |  186 ++
 .../site/markdown/DockerContainerExecutor.md.vm |  154 +
 .../src/site/markdown/FairScheduler.md          |  235 ++
 .../src/site/markdown/NodeManager.md            |   57 +
 .../src/site/markdown/NodeManagerCgroups.md     |   57 +
 .../src/site/markdown/NodeManagerRest.md        |  543 +++
 .../src/site/markdown/NodeManagerRestart.md     |   53 +
 .../src/site/markdown/ResourceManagerHA.md      |  140 +
 .../src/site/markdown/ResourceManagerRest.md    | 2640 +++++++++++++++
 .../src/site/markdown/ResourceManagerRestart.md |  181 +
 .../src/site/markdown/SecureContainer.md        |  135 +
 .../src/site/markdown/TimelineServer.md         |  231 ++
 .../src/site/markdown/WebApplicationProxy.md    |   24 +
 .../src/site/markdown/WebServicesIntro.md       |  569 ++++
 .../site/markdown/WritingYarnApplications.md    |  591 ++++
 .../hadoop-yarn-site/src/site/markdown/YARN.md  |   42 +
 .../src/site/markdown/YarnCommands.md           |  272 ++
 .../hadoop-yarn-site/src/site/markdown/index.md |   75 +
 287 files changed, 8338 insertions(+), 9250 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/e4d81ebb/hadoop-yarn-project/CHANGES.txt
----------------------------------------------------------------------


[11/43] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
deleted file mode 100644
index 36b8621..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRest.apt.vm
+++ /dev/null
@@ -1,645 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  NodeManager REST API's.
-  ---
-  ---
-  ${maven.build.timestamp}
-
-NodeManager REST API's.
-
-%{toc|section=1|fromDepth=0|toDepth=2}
-
-* Overview
-
-  The NodeManager REST API's allow the user to get status on the node and information about applications and containers running on that node. 
-  
-* NodeManager Information API
-
-  The node information resource provides overall information about that particular node.
-
-** URI
-
-  Both of the following URI's give you the cluster information.
-
-------
-  * http://<nm http address:port>/ws/v1/node
-  * http://<nm http address:port>/ws/v1/node/info
-------
-
-** HTTP Operations Supported
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Elements of the <nodeInfo> object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| id            | long         | The NodeManager id |
-*---------------+--------------+-------------------------------+
-| nodeHostName | string  | The host name of the NodeManager |
-*---------------+--------------+-------------------------------+
-| totalPmemAllocatedContainersMB | long         | The amount of physical memory allocated for use by containers in MB |
-*---------------+--------------+-------------------------------+
-| totalVmemAllocatedContainersMB | long         | The amount of virtual memory allocated for use by containers in MB |
-*---------------+--------------+-------------------------------+
-| totalVCoresAllocatedContainers | long         | The number of virtual cores allocated for use by containers |
-*---------------+--------------+-------------------------------+
-| lastNodeUpdateTime | long         | The last timestamp at which the health report was received (in ms since epoch)|
-*---------------+--------------+-------------------------------+
-| healthReport | string  | The diagnostic health report of the node |
-*---------------+--------------+-------------------------------+
-| nodeHealthy | boolean | true/false indicator of if the node is healthy|
-*---------------+--------------+-------------------------------+
-| nodeManagerVersion | string  | Version of the NodeManager |
-*---------------+--------------+-------------------------------+
-| nodeManagerBuildVersion | string  | NodeManager build string with build version, user, and checksum |
-*---------------+--------------+-------------------------------+
-| nodeManagerVersionBuiltOn | string  | Timestamp when NodeManager was built(in ms since epoch) |
-*---------------+--------------+-------------------------------+
-| hadoopVersion | string  | Version of hadoop common |
-*---------------+--------------+-------------------------------+
-| hadoopBuildVersion | string  | Hadoop common build string with build version, user, and checksum |
-*---------------+--------------+-------------------------------+
-| hadoopVersionBuiltOn | string  | Timestamp when hadoop common was built(in ms since epoch) |
-*---------------+--------------+-------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<nm http address:port>/ws/v1/node/info
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-   "nodeInfo" : {
-      "hadoopVersionBuiltOn" : "Mon Jan  9 14:58:42 UTC 2012",
-      "nodeManagerBuildVersion" : "0.23.1-SNAPSHOT from 1228355 by user1 source checksum 20647f76c36430e888cc7204826a445c",
-      "lastNodeUpdateTime" : 1326222266126,
-      "totalVmemAllocatedContainersMB" : 17203,
-      "totalVCoresAllocatedContainers" : 8,
-      "nodeHealthy" : true,
-      "healthReport" : "",
-      "totalPmemAllocatedContainersMB" : 8192,
-      "nodeManagerVersionBuiltOn" : "Mon Jan  9 15:01:59 UTC 2012",
-      "nodeManagerVersion" : "0.23.1-SNAPSHOT",
-      "id" : "host.domain.com:8041",
-      "hadoopBuildVersion" : "0.23.1-SNAPSHOT from 1228292 by user1 source checksum 3eba233f2248a089e9b28841a784dd00",
-      "nodeHostName" : "host.domain.com",
-      "hadoopVersion" : "0.23.1-SNAPSHOT"
-   }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
------
-  Accept: application/xml
-  GET http://<nm http address:port>/ws/v1/node/info
------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 983
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<nodeInfo>
-  <healthReport/>
-  <totalVmemAllocatedContainersMB>17203</totalVmemAllocatedContainersMB>
-  <totalPmemAllocatedContainersMB>8192</totalPmemAllocatedContainersMB>
-  <totalVCoresAllocatedContainers>8</totalVCoresAllocatedContainers>
-  <lastNodeUpdateTime>1326222386134</lastNodeUpdateTime>
-  <nodeHealthy>true</nodeHealthy>
-  <nodeManagerVersion>0.23.1-SNAPSHOT</nodeManagerVersion>
-  <nodeManagerBuildVersion>0.23.1-SNAPSHOT from 1228355 by user1 source checksum 20647f76c36430e888cc7204826a445c</nodeManagerBuildVersion>
-  <nodeManagerVersionBuiltOn>Mon Jan  9 15:01:59 UTC 2012</nodeManagerVersionBuiltOn>
-  <hadoopVersion>0.23.1-SNAPSHOT</hadoopVersion>
-  <hadoopBuildVersion>0.23.1-SNAPSHOT from 1228292 by user1 source checksum 3eba233f2248a089e9b28841a784dd00</hadoopBuildVersion>
-  <hadoopVersionBuiltOn>Mon Jan  9 14:58:42 UTC 2012</hadoopVersionBuiltOn>
-  <id>host.domain.com:8041</id>
-  <nodeHostName>host.domain.com</nodeHostName>
-</nodeInfo>
-+---+
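-
-  The same endpoint can also be exercised programmatically with any HTTP
-  client. The following is a minimal sketch (not part of the original
-  documentation) using <<<java.net.HttpURLConnection>>>; the host and port
-  are placeholders for the real <nm http address:port>.
-
-+---+
-import java.io.BufferedReader;
-import java.io.InputStreamReader;
-import java.net.HttpURLConnection;
-import java.net.URL;
-
-public class NodeInfoClient {
-  public static void main(String[] args) throws Exception {
-    // Assumed NM web address; substitute the real host:port.
-    URL url = new URL("http://nm.example.com:8042/ws/v1/node/info");
-    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
-    conn.setRequestProperty("Accept", "application/json");
-    try (BufferedReader in = new BufferedReader(
-        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
-      String line;
-      while ((line = in.readLine()) != null) {
-        System.out.println(line);    // raw JSON body, as in the example above
-      }
-    } finally {
-      conn.disconnect();
-    }
-  }
-}
-+---+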
-
-* Applications API
-
-  With the Applications API, you can obtain a collection of resources, each of which represents an application.  When you run a GET operation on this resource, you obtain a collection of Application Objects.  See also {{Application API}} for syntax of the application object.
-
-** URI
-
-------
-  * http://<nm http address:port>/ws/v1/node/apps
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-  Multiple parameters can be specified.
-
-------
-  * state - application state 
-  * user - user name
-------
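-
-  For example, to list only the running applications of a given user
-  (illustrative values):
-
-------
-  GET http://<nm http address:port>/ws/v1/node/apps?state=RUNNING&user=user1
-------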
-
-** Elements of the <apps> (Applications) object
-
-  When you make a request for the list of applications, the information will be returned as a collection of app objects. 
-  See also {{Application API}} for syntax of the app object.
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| app | array of app objects(JSON)/zero or more app objects(XML) | A collection of application objects |
-*---------------+--------------+--------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<nm http address:port>/ws/v1/node/apps
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-   "apps" : {
-      "app" : [
-         {
-            "containerids" : [
-               "container_1326121700862_0003_01_000001",
-               "container_1326121700862_0003_01_000002"
-            ],
-            "user" : "user1",
-            "id" : "application_1326121700862_0003",
-            "state" : "RUNNING"
-         },
-         {
-            "user" : "user1",
-            "id" : "application_1326121700862_0002",
-            "state" : "FINISHED"
-         }
-      ]
-   }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<nm http address:port>/ws/v1/node/apps
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 400
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<apps>
-  <app>
-    <id>application_1326121700862_0002</id>
-    <state>FINISHED</state>
-    <user>user1</user>
-  </app>
-  <app>
-    <id>application_1326121700862_0003</id>
-    <state>RUNNING</state>
-    <user>user1</user>
-    <containerids>container_1326121700862_0003_01_000002</containerids>
-    <containerids>container_1326121700862_0003_01_000001</containerids>
-  </app>
-</apps>
-
-+---+
-
-* {Application API}
-
-  An application resource contains information about a particular application that was run or is running on this NodeManager.
-
-** URI
-
-  Use the following URI to obtain an app Object for an application identified by the {appid} value.
-
-------
-  * http://<nm http address:port>/ws/v1/node/apps/{appid}
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Elements of the <app> (Application) object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                  |
-*---------------+--------------+-------------------------------+
-| id | string  | The application id | 
-*---------------+--------------+--------------------------------+
-| user | string  | The user who started the application |
-*---------------+--------------+--------------------------------+
-| state | string | The state of the application -  valid states are: NEW, INITING, RUNNING, FINISHING_CONTAINERS_WAIT, APPLICATION_RESOURCES_CLEANINGUP, FINISHED |
-*---------------+--------------+--------------------------------+
-| containerids | array of containerids(JSON)/zero or more containerids(XML) | The list of containerids currently being used by the application on this node. If not present then no containers are currently running for this application.|
-*---------------+--------------+--------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<nm http address:port>/ws/v1/node/apps/application_1326121700862_0005
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-   "app" : {
-      "containerids" : [
-         "container_1326121700862_0005_01_000003",
-         "container_1326121700862_0005_01_000001"
-      ],
-      "user" : "user1",
-      "id" : "application_1326121700862_0005",
-      "state" : "RUNNING"
-   }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<nm http address:port>/ws/v1/node/apps/application_1326121700862_0005
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 281 
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<app>
-  <id>application_1326121700862_0005</id>
-  <state>RUNNING</state>
-  <user>user1</user>
-  <containerids>container_1326121700862_0005_01_000003</containerids>
-  <containerids>container_1326121700862_0005_01_000001</containerids>
-</app>
-+---+
-
-
-* Containers API
-
-  With the containers API, you can obtain a collection of resources, each of which represents a container. When you run a GET operation on this resource, you obtain a collection of Container Objects. See also {{Container API}} for syntax of the container object.
-
-** URI
-
-------
-  * http://<nm http address:port>/ws/v1/node/containers
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Elements of the <containers> object
-
-  When you make a request for the list of containers, the information will be returned as a collection of container objects.
-  See also {{Container API}} for syntax of the container object.
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| containers | array of container objects(JSON)/zero or more container objects(XML) | A collection of container objects |
-*---------------+--------------+-------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<nm http address:port>/ws/v1/node/containers
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-   "containers" : {
-      "container" : [
-         {
-            "nodeId" : "host.domain.com:8041",
-            "totalMemoryNeededMB" : 2048,
-            "totalVCoresNeeded" : 1,
-            "state" : "RUNNING",
-            "diagnostics" : "",
-            "containerLogsLink" : "http://host.domain.com:8042/node/containerlogs/container_1326121700862_0006_01_000001/user1",
-            "user" : "user1",
-            "id" : "container_1326121700862_0006_01_000001",
-            "exitCode" : -1000
-         },
-         {
-            "nodeId" : "host.domain.com:8041",
-            "totalMemoryNeededMB" : 2048,
-            "totalVCoresNeeded" : 2,
-            "state" : "RUNNING",
-            "diagnostics" : "",
-            "containerLogsLink" : "http://host.domain.com:8042/node/containerlogs/container_1326121700862_0006_01_000003/user1",
-            "user" : "user1",
-            "id" : "container_1326121700862_0006_01_000003",
-            "exitCode" : -1000
-         }
-      ]
-   }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<nm http address:port>/ws/v1/node/containers
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 988
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<containers>
-  <container>
-    <id>container_1326121700862_0006_01_000001</id>
-    <state>RUNNING</state>
-    <exitCode>-1000</exitCode>
-    <diagnostics/>
-    <user>user1</user>
-    <totalMemoryNeededMB>2048</totalMemoryNeededMB>
-    <totalVCoresNeeded>1</totalVCoresNeeded>
-    <containerLogsLink>http://host.domain.com:8042/node/containerlogs/container_1326121700862_0006_01_000001/user1</containerLogsLink>
-    <nodeId>host.domain.com:8041</nodeId>
-  </container>
-  <container>
-    <id>container_1326121700862_0006_01_000003</id>
-    <state>DONE</state>
-    <exitCode>0</exitCode>
-    <diagnostics>Container killed by the ApplicationMaster.</diagnostics>
-    <user>user1</user>
-    <totalMemoryNeededMB>2048</totalMemoryNeededMB>
-    <totalVCoresNeeded>2</totalVCoresNeeded>
-    <containerLogsLink>http://host.domain.com:8042/node/containerlogs/container_1326121700862_0006_01_000003/user1</containerLogsLink>
-    <nodeId>host.domain.com:8041</nodeId>
-  </container>
-</containers>
-+---+
-
-
-* {Container API}
-
-  A container resource contains information about a particular container that is running on this NodeManager.
-
-** URI
-
-  Use the following URI to obtain a Container Object for a container identified by the {containerid} value.
-
-------
-  * http://<nm http address:port>/ws/v1/node/containers/{containerid}
-------
-
-** HTTP Operations Supported 
-
-------
-  * GET
-------
-
-** Query Parameters Supported
-
-------
-  None
-------
-
-** Elements of the <container> object
-
-*---------------+--------------+-------------------------------+
-|| Item         || Data Type   || Description                   |
-*---------------+--------------+-------------------------------+
-| id | string  | The container id |
-*---------------+--------------+-------------------------------+
-| state | string | State of the container - valid states are: NEW, LOCALIZING, LOCALIZATION_FAILED, LOCALIZED, RUNNING, EXITED_WITH_SUCCESS, EXITED_WITH_FAILURE, KILLING, CONTAINER_CLEANEDUP_AFTER_KILL, CONTAINER_RESOURCES_CLEANINGUP, DONE|
-*---------------+--------------+-------------------------------+
-| nodeId | string  | The id of the node the container is on|
-*---------------+--------------+-------------------------------+
-| containerLogsLink | string  | The http link to the container logs |
-*---------------+--------------+-------------------------------+
-| user | string  | The user name of the user who started the container |
-*---------------+--------------+-------------------------------+
-| exitCode | int | Exit code of the container |
-*---------------+--------------+-------------------------------+
-| diagnostics | string | A diagnostic message for failed containers |
-*---------------+--------------+-------------------------------+
-| totalMemoryNeededMB | long | Total amount of memory needed by the container (in MB) |
-*---------------+--------------+-------------------------------+
-| totalVCoresNeeded | long | Total number of virtual cores needed by the container |
-*---------------+--------------+-------------------------------+
-
-** Response Examples
-
-  <<JSON response>>
-
-  HTTP Request:
-
-------
-  GET http://<nm http address:port>/ws/v1/node/containers/container_1326121700862_0007_01_000001
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/json
-  Transfer-Encoding: chunked
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-{
-   "container" : {
-      "nodeId" : "host.domain.com:8041",
-      "totalMemoryNeededMB" : 2048,
-      "totalVCoresNeeded" : 1,
-      "state" : "RUNNING",
-      "diagnostics" : "",
-      "containerLogsLink" : "http://host.domain.com:8042/node/containerlogs/container_1326121700862_0007_01_000001/user1",
-      "user" : "user1",
-      "id" : "container_1326121700862_0007_01_000001",
-      "exitCode" : -1000
-   }
-}
-+---+
-
-  <<XML response>>
-
-  HTTP Request:
-
-------
-  GET http://<nm http address:port>/ws/v1/node/containers/container_1326121700862_0007_01_000001
-  Accept: application/xml
-------
-
-  Response Header:
-
-+---+
-  HTTP/1.1 200 OK
-  Content-Type: application/xml
-  Content-Length: 491 
-  Server: Jetty(6.1.26)
-+---+
-
-  Response Body:
-
-+---+
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<container>
-  <id>container_1326121700862_0007_01_000001</id>
-  <state>RUNNING</state>
-  <exitCode>-1000</exitCode>
-  <diagnostics/>
-  <user>user1</user>
-  <totalMemoryNeededMB>2048</totalMemoryNeededMB>
-  <totalVCoresNeeded>1</totalVCoresNeeded>
-  <containerLogsLink>http://host.domain.com:8042/node/containerlogs/container_1326121700862_0007_01_000001/user1</containerLogsLink>
-  <nodeId>host.domain.com:8041</nodeId>
-</container>
-+---+
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRestart.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRestart.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRestart.apt.vm
deleted file mode 100644
index ba03f4e..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/NodeManagerRestart.apt.vm
+++ /dev/null
@@ -1,86 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  NodeManager Restart
-  ---
-  ---
-  ${maven.build.timestamp}
-
-NodeManager Restart
-
-* Introduction
-
-  This document gives an overview of NodeManager (NM) restart, a feature that
-  enables the NodeManager to be restarted without losing 
-  the active containers running on the node. At a high level, the NM stores any 
-  necessary state to a local state-store as it processes container-management
-  requests. When the NM restarts, it recovers by first loading state for
-  various subsystems and then letting those subsystems perform recovery using
-  the loaded state.
-
-* Enabling NM Restart
-
-  [[1]] To enable NM Restart functionality, set the following property in <<conf/yarn-site.xml>> to true (a combined sample fragment appears at the end of this section):
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Value                                |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.nodemanager.recovery.enabled>>> | |
-| | <<<true>>> (the default is <<<false>>>) |
-*--------------------------------------+--------------------------------------+ 
-
-  [[2]] Configure a path to the local file-system directory where the
-  NodeManager can save its run state
-
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.nodemanager.recovery.dir>>> | |
-| | The local filesystem directory in which the node manager will store state |
-| | when recovery is enabled.  |
-| | The default value is set to |
-| | <<<${hadoop.tmp.dir}/yarn-nm-recovery>>>. |
-*--------------------------------------+--------------------------------------+ 
-
-  [[3]] Configure a valid RPC address for the NodeManager
-  
-*--------------------------------------+--------------------------------------+
-|| Property                            || Description                        |
-*--------------------------------------+--------------------------------------+
-| <<<yarn.nodemanager.address>>> | |
-| |   Ephemeral ports (port 0, the default) cannot be used for the |
-| | NodeManager's RPC server specified via yarn.nodemanager.address, as that |
-| | can make the NM use different ports before and after a restart. This will |
-| | break any previously running clients that were communicating with the NM |
-| | before the restart. Explicitly setting yarn.nodemanager.address to an |
-| | address with a specific port number (e.g. 0.0.0.0:45454) is a precondition |
-| | for enabling NM restart. |
-*--------------------------------------+--------------------------------------+
-
-  [[4]] Auxiliary services
-  
-  NodeManagers in a YARN cluster can be configured to run auxiliary services.
-  For a completely functional NM restart, YARN relies on any auxiliary service
-  configured to also support recovery. This usually includes (1) avoiding usage
-  of ephemeral ports so that previously running clients (in this case, usually
-  containers) are not disrupted after restart and (2) having the auxiliary
-  service itself support recoverability by reloading any previous state when
-  NodeManager restarts and reinitializes the auxiliary service.
-  
-  A simple example for the above is the auxiliary service 'ShuffleHandler' for
-  MapReduce (MR). ShuffleHandler respects the above two requirements already,
-  so users/admins don't have to do anything for it to support NM restart: (1) The
-  configuration property <<mapreduce.shuffle.port>> controls which port the
-  ShuffleHandler on a NodeManager host binds to, and it defaults to a
-  non-ephemeral port. (2) The ShuffleHandler service also already supports
-  recovery of previous state after NM restarts.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerHA.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerHA.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerHA.apt.vm
deleted file mode 100644
index 0346cda..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ResourceManagerHA.apt.vm
+++ /dev/null
@@ -1,233 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  ResourceManager High Availability
-  ---
-  ---
-  ${maven.build.timestamp}
-
-ResourceManager High Availability
-
-%{toc|section=1|fromDepth=0}
-
-* Introduction
-
-  This guide provides an overview of High Availability of YARN's ResourceManager,
-  and details how to configure and use this feature. The ResourceManager (RM)
-  is responsible for tracking the resources in a cluster, and scheduling
-  applications (e.g., MapReduce jobs). Prior to Hadoop 2.4, the ResourceManager
-  is the single point of failure in a YARN cluster. The High Availability
-  feature adds redundancy in the form of an Active/Standby ResourceManager pair
-  to remove this otherwise single point of failure.
-
-* Architecture
-
-[images/rm-ha-overview.png] Overview of ResourceManager High Availability
-
-** RM Failover
-
-  ResourceManager HA is realized through an Active/Standby architecture - at
-  any point of time, one of the RMs is Active, and one or more RMs are in
-  Standby mode waiting to take over should anything happen to the Active.
-  The trigger to transition-to-active comes from either the admin (through CLI)
-  or through the integrated failover-controller when automatic-failover is
-  enabled.
-
-*** Manual transitions and failover
-
-    When automatic failover is not enabled, admins have to manually transition
-    one of the RMs to Active. To fail over from one RM to the other, they are
-    expected to first transition the Active-RM to Standby and transition a
-    Standby-RM to Active. All this can be done using the "<<<yarn rmadmin>>>"
-    CLI.
-
-*** Automatic failover
-
-    The RMs have an option to embed the Zookeeper-based ActiveStandbyElector to
-    decide which RM should be the Active. When the Active goes down or becomes
-    unresponsive, another RM is automatically elected to be the Active which
-    then takes over. Note that there is no need to run a separate ZKFC daemon
-    as is the case for HDFS, because the ActiveStandbyElector embedded in the
-    RMs acts as a failure detector and a leader elector instead of a separate
-    ZKFC daemon.
-
-*** Client, ApplicationMaster and NodeManager on RM failover
-
-    When there are multiple RMs, the configuration (yarn-site.xml) used by
-    clients and nodes is expected to list all the RMs. Clients,
-    ApplicationMasters (AMs) and NodeManagers (NMs) try connecting to the RMs in
-    a round-robin fashion until they hit the Active RM. If the Active goes down,
-    they resume the round-robin polling until they hit the "new" Active.
-    This default retry logic is implemented as
-    <<<org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider>>>.
-    You can override the logic by
-    implementing <<<org.apache.hadoop.yarn.client.RMFailoverProxyProvider>>> and
-    setting the value of <<<yarn.client.failover-proxy-provider>>> to
-    the class name.
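-
-    For instance, to plug in a custom provider, point the property at the
-    implementing class (the class name below is a hypothetical example):
-
-+---+
- <property>
-   <name>yarn.client.failover-proxy-provider</name>
-   <value>com.example.MyRMFailoverProxyProvider</value>
- </property>
-+---+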
-
-** Recovering the previous active RM's state
-
-   With the {{{./ResourceManagerRestart.html}ResourceManager Restart}} enabled,
-   the RM being promoted to an active state loads the RM internal state and
-   continues to operate from where the previous active left off as much as
-   possible depending on the RM restart feature. A new attempt is spawned for
-   each managed application previously submitted to the RM. Applications can
-   checkpoint periodically to avoid losing any work. The state-store must be
-   visible from the both of Active/Standby RMs. Currently, there are two
-   RMStateStore implementations for persistence - FileSystemRMStateStore
-   and ZKRMStateStore.  The <<<ZKRMStateStore>>> implicitly allows write access
-   to a single RM at any point in time, and hence is the recommended store to
-   use in an HA cluster. When using the ZKRMStateStore, there is no need for a
-   separate fencing mechanism to address a potential split-brain situation
-   where multiple RMs can potentially assume the Active role.
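-
-   A minimal sketch of the corresponding state-store configuration (property
-   names as in yarn-default.xml; see the ResourceManager Restart document for
-   the full set of related properties):
-
-+---+
- <property>
-   <name>yarn.resourcemanager.recovery.enabled</name>
-   <value>true</value>
- </property>
- <property>
-   <name>yarn.resourcemanager.store.class</name>
-   <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
- </property>
-+---+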
-
-
-* Deployment
-
-** Configurations
-
-   Most of the failover functionality is tunable using various configuration
-   properties. The following is a list of the required/important ones.
-   yarn-default.xml carries the full list of knobs. See
-   {{{../hadoop-yarn-common/yarn-default.xml}yarn-default.xml}}
-   for more information including default values.
-   Also see {{{./ResourceManagerRestart.html}the ResourceManager Restart
-   document}} for instructions on setting up the state-store.
-
-*-------------------------+----------------------------------------------+
-|| Configuration Property || Description                                 |
-*-------------------------+----------------------------------------------+
-| yarn.resourcemanager.zk-address | |
-| | Address of the ZK-quorum.
-| | Used both for the state-store and embedded leader-election.
-*-------------------------+----------------------------------------------+
-| yarn.resourcemanager.ha.enabled | |
-| | Enable RM HA
-*-------------------------+----------------------------------------------+
-| yarn.resourcemanager.ha.rm-ids | |
-| | List of logical IDs for the RMs. |
-| | e.g., "rm1,rm2" |
-*-------------------------+----------------------------------------------+
-| yarn.resourcemanager.hostname.<rm-id> | |
-| | For each <rm-id>, specify the hostname the |
-| | RM corresponds to. Alternatively, one could set each of the RM's service |
-| | addresses. |
-*-------------------------+----------------------------------------------+
-| yarn.resourcemanager.ha.id | |
-| | Identifies the RM in the ensemble. This is optional; |
-| | however, if set, admins have to ensure that all the RMs have their own |
-| | IDs in the config |
-*-------------------------+----------------------------------------------+
-| yarn.resourcemanager.ha.automatic-failover.enabled | |
-| | Enable automatic failover; |
-| | By default, it is enabled only when HA is enabled. |
-*-------------------------+----------------------------------------------+
-| yarn.resourcemanager.ha.automatic-failover.embedded | |
-| | Use embedded leader-elector |
-| | to pick the Active RM, when automatic failover is enabled. By default, |
-| | it is enabled only when HA is enabled. |
-*-------------------------+----------------------------------------------+
-| yarn.resourcemanager.cluster-id | |
-| | Identifies the cluster. Used by the elector to |
-| | ensure an RM doesn't take over as Active for another cluster. |
-*-------------------------+----------------------------------------------+
-| yarn.client.failover-proxy-provider | |
-| | The class to be used by Clients, AMs and NMs to failover to the Active RM. |
-*-------------------------+----------------------------------------------+
-| yarn.client.failover-max-attempts | |
-| | The max number of times FailoverProxyProvider should attempt failover. |
-*-------------------------+----------------------------------------------+
-| yarn.client.failover-sleep-base-ms | |
-| | The sleep base (in milliseconds) to be used for calculating |
-| | the exponential delay between failovers. |
-*-------------------------+----------------------------------------------+
-| yarn.client.failover-sleep-max-ms | |
-| | The maximum sleep time (in milliseconds) between failovers |
-*-------------------------+----------------------------------------------+
-| yarn.client.failover-retries | |
-| | The number of retries per attempt to connect to a ResourceManager. |
-*-------------------------+----------------------------------------------+
-| yarn.client.failover-retries-on-socket-timeouts | |
-| | The number of retries per attempt to connect to a ResourceManager on socket timeouts. |
-*-------------------------+----------------------------------------------+
-
-*** Sample configurations
-
-  Here is a sample minimal setup for RM failover.
-
-+---+
- <property>
-   <name>yarn.resourcemanager.ha.enabled</name>
-   <value>true</value>
- </property>
- <property>
-   <name>yarn.resourcemanager.cluster-id</name>
-   <value>cluster1</value>
- </property>
- <property>
-   <name>yarn.resourcemanager.ha.rm-ids</name>
-   <value>rm1,rm2</value>
- </property>
- <property>
-   <name>yarn.resourcemanager.hostname.rm1</name>
-   <value>master1</value>
- </property>
- <property>
-   <name>yarn.resourcemanager.hostname.rm2</name>
-   <value>master2</value>
- </property>
- <property>
-   <name>yarn.resourcemanager.zk-address</name>
-   <value>zk1:2181,zk2:2181,zk3:2181</value>
- </property>
-+---+
-
-** Admin commands
-
-   <<<yarn rmadmin>>> has a few HA-specific command options to check the health/state of an
-   RM, and transition to Active/Standby.
-   The HA commands take the RM service id set by
-   <<<yarn.resourcemanager.ha.rm-ids>>> as an argument.
-
-+---+
- $ yarn rmadmin -getServiceState rm1
- active
- 
- $ yarn rmadmin -getServiceState rm2
- standby
-+---+
-
-   If automatic failover is enabled, you cannot use the manual transition
-   commands. You can override this with the --forcemanual flag, but use it
-   with caution.
-
-+---+
- $ yarn rmadmin -transitionToStandby rm1
- Automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@1d8299fd
- Refusing to manually manage HA state, since it may cause
- a split-brain scenario or other incorrect state.
- If you are very sure you know what you are doing, please
- specify the forcemanual flag.
-+---+
-
-   See {{{./YarnCommands.html}YarnCommands}} for more details.
-
-** ResourceManager Web UI services
-
-   Assuming a standby RM is up and running, the Standby automatically redirects
-   all web requests to the Active, except for the "About" page.
-
-** Web Services
-
-   Assuming a standby RM is up and running, RM web-services described at
-   {{{./ResourceManagerRest.html}ResourceManager REST APIs}} when invoked on
-   a standby RM are automatically redirected to the Active RM.


[17/43] hadoop git commit: HDFS-4681. TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java (Ayappan via aw)

Posted by zj...@apache.org.
HDFS-4681. TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks fails using IBM java (Ayappan via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dbc9b643
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dbc9b643
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dbc9b643

Branch: refs/heads/YARN-2928
Commit: dbc9b6433e9276057181cf4927cedf321acd354e
Parents: b01d343
Author: Allen Wittenauer <aw...@apache.org>
Authored: Sat Feb 28 23:32:09 2015 -0800
Committer: Allen Wittenauer <aw...@apache.org>
Committed: Sat Feb 28 23:32:09 2015 -0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt             |  3 +++
 .../test/java/org/apache/hadoop/hdfs/DFSTestUtil.java   | 12 ++++++++++++
 .../java/org/apache/hadoop/hdfs/MiniDFSCluster.java     | 10 ++++++++++
 .../blockmanagement/TestBlocksWithNotEnoughRacks.java   |  7 ++++---
 4 files changed, 29 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbc9b643/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 2a8da43..16fe394 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -306,6 +306,9 @@ Trunk (Unreleased)
     HDFS-7803. Wrong command mentioned in HDFSHighAvailabilityWithQJM
     documentation (Arshad Mohammad via aw)
 
+    HDFS-4681. TestBlocksWithNotEnoughRacks#testCorruptBlockRereplicatedAcrossRacks 
+    fails using IBM java (Ayappan via aw)
+
 Release 2.7.0 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbc9b643/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
index 5b391c5..7e7ff39 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
@@ -251,6 +251,12 @@ public class DFSTestUtil {
   public void createFiles(FileSystem fs, String topdir) throws IOException {
     createFiles(fs, topdir, (short)3);
   }
+
+  public static byte[] readFileAsBytes(FileSystem fs, Path fileName) throws IOException {
+    ByteArrayOutputStream os = new ByteArrayOutputStream();
+    IOUtils.copyBytes(fs.open(fileName), os, 1024, true);
+    return os.toByteArray();
+  }
   
   /** create nFiles with random names and directory hierarchies
    *  with random (but reproducible) data in them.
@@ -723,6 +729,12 @@ public class DFSTestUtil {
     return b.toString();
   }
 
+  public static byte[] readFileAsBytes(File f) throws IOException {
+    ByteArrayOutputStream os = new ByteArrayOutputStream();
+    IOUtils.copyBytes(new FileInputStream(f), os, 1024, true);
+    return os.toByteArray();
+  }
+
   /* Write the given string to the given file */
   public static void writeFile(FileSystem fs, Path p, String s) 
       throws IOException {

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbc9b643/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
index 5297ba2..2c1d07e 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java
@@ -1869,6 +1869,16 @@ public class MiniDFSCluster {
     return null;
   }
 
+  public byte[] readBlockOnDataNodeAsBytes(int i, ExtendedBlock block)
+      throws IOException {
+    assert (i >= 0 && i < dataNodes.size()) : "Invalid datanode "+i;
+    File blockFile = getBlockFile(i, block);
+    if (blockFile != null && blockFile.exists()) {
+      return DFSTestUtil.readFileAsBytes(blockFile);
+    }
+    return null;
+  }
+
   /**
    * Corrupt a block on a particular datanode.
    *

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dbc9b643/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
index 1bc7cdc..54983a1 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
@@ -20,6 +20,7 @@ package org.apache.hadoop.hdfs.server.blockmanagement;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertArrayEquals;
 
 import java.util.ArrayList;
 
@@ -202,7 +203,7 @@ public class TestBlocksWithNotEnoughRacks {
       final FileSystem fs = cluster.getFileSystem();
       
       DFSTestUtil.createFile(fs, filePath, fileLen, REPLICATION_FACTOR, 1L);
-      final String fileContent = DFSTestUtil.readFile(fs, filePath);
+      final byte[] fileContent = DFSTestUtil.readFileAsBytes(fs, filePath);
 
       ExtendedBlock b = DFSTestUtil.getFirstBlock(fs, filePath);
       DFSTestUtil.waitForReplication(cluster, b, 2, REPLICATION_FACTOR, 0);
@@ -224,9 +225,9 @@ public class TestBlocksWithNotEnoughRacks {
       // Ensure all replicas are valid (the corrupt replica may not
       // have been cleaned up yet).
       for (int i = 0; i < racks.length; i++) {
-        String blockContent = cluster.readBlockOnDataNode(i, b);
+        byte[] blockContent = cluster.readBlockOnDataNodeAsBytes(i, b);
         if (blockContent != null && i != dnToCorrupt) {
-          assertEquals("Corrupt replica", fileContent, blockContent);
+          assertArrayEquals("Corrupt replica", fileContent, blockContent);
         }
       }
     } finally {
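
The switch from String to byte[] comparison matters because decoding raw
block bytes through the platform default charset (which can differ between
JVM vendors, such as IBM and Oracle Java) is lossy for arbitrary binary data.
A minimal standalone illustration of the pitfall, not part of the patch:

  byte[] data = new byte[] { (byte) 0x89, (byte) 0xF0 };  // arbitrary binary
  // Round-tripping through the default charset may replace undecodable
  // bytes (e.g. with U+FFFD under UTF-8), so the arrays can differ:
  byte[] roundTripped = new String(data).getBytes();
  // Hence the test now compares raw bytes with assertArrayEquals instead of
  // comparing decoded Strings with assertEquals.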


[22/43] hadoop git commit: HDFS-7439. Add BlockOpResponseProto's message to the exception messages. Contributed by Takanobu Asanuma

Posted by zj...@apache.org.
HDFS-7439. Add BlockOpResponseProto's message to the exception messages.  Contributed by Takanobu Asanuma


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/67ed5934
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/67ed5934
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/67ed5934

Branch: refs/heads/YARN-2928
Commit: 67ed59348d638d56e6752ba2c71fdcd69567546d
Parents: dd9cd07
Author: Tsz-Wo Nicholas Sze <sz...@hortonworks.com>
Authored: Mon Mar 2 15:03:58 2015 +0800
Committer: Tsz-Wo Nicholas Sze <sz...@hortonworks.com>
Committed: Mon Mar 2 15:03:58 2015 +0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt     |  3 +++
 .../java/org/apache/hadoop/hdfs/DFSClient.java  | 26 ++++++--------------
 .../org/apache/hadoop/hdfs/DFSOutputStream.java | 15 ++++-------
 .../apache/hadoop/hdfs/RemoteBlockReader2.java  | 24 ++++++------------
 .../datatransfer/DataTransferProtoUtil.java     | 26 ++++++++++++++++++++
 .../hadoop/hdfs/server/balancer/Dispatcher.java |  9 +++----
 .../hdfs/server/datanode/DataXceiver.java       | 14 +++--------
 7 files changed, 55 insertions(+), 62 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/67ed5934/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index ce35ea2..5ca16af 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -694,6 +694,9 @@ Release 2.7.0 - UNRELEASED
     HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to
     hdfs-default.xml. (aajisaka)
 
+    HDFS-7439. Add BlockOpResponseProto's message to the exception messages.
+    (Takanobu Asanuma via szetszwo)
+
   OPTIMIZATIONS
 
     HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/67ed5934/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
index 792c2dd..abcd847 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
@@ -174,6 +174,7 @@ import org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;
 import org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus;
 import org.apache.hadoop.hdfs.protocol.UnresolvedPathException;
+import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil;
 import org.apache.hadoop.hdfs.protocol.datatransfer.IOStreamPair;
 import org.apache.hadoop.hdfs.protocol.datatransfer.Op;
 import org.apache.hadoop.hdfs.protocol.datatransfer.ReplaceDatanodeOnFailure;
@@ -2260,15 +2261,9 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
           final BlockOpResponseProto reply =
             BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
 
-          if (reply.getStatus() != Status.SUCCESS) {
-            if (reply.getStatus() == Status.ERROR_ACCESS_TOKEN) {
-              throw new InvalidBlockTokenException();
-            } else {
-              throw new IOException("Bad response " + reply + " for block "
-                  + block + " from datanode " + datanodes[j]);
-            }
-          }
-          
+          String logInfo = "for block " + block + " from datanode " + datanodes[j];
+          DataTransferProtoUtil.checkBlockOpStatus(reply, logInfo);
+
           OpBlockChecksumResponseProto checksumData =
             reply.getChecksumResponse();
 
@@ -2425,16 +2420,9 @@ public class DFSClient implements java.io.Closeable, RemotePeerFactory,
           0, 1, true, CachingStrategy.newDefaultStrategy());
       final BlockOpResponseProto reply =
           BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
-      
-      if (reply.getStatus() != Status.SUCCESS) {
-        if (reply.getStatus() == Status.ERROR_ACCESS_TOKEN) {
-          throw new InvalidBlockTokenException();
-        } else {
-          throw new IOException("Bad response " + reply + " trying to read "
-              + lb.getBlock() + " from datanode " + dn);
-        }
-      }
-      
+      String logInfo = "trying to read " + lb.getBlock() + " from datanode " + dn;
+      DataTransferProtoUtil.checkBlockOpStatus(reply, logInfo);
+
       return PBHelper.convert(reply.getReadOpChecksumInfo().getChecksum().getType());
     } finally {
       IOUtils.cleanup(null, pair.in, pair.out);

http://git-wip-us.apache.org/repos/asf/hadoop/blob/67ed5934/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
index b3e8c97..dc2f674 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
@@ -69,6 +69,7 @@ import org.apache.hadoop.hdfs.protocol.SnapshotAccessControlException;
 import org.apache.hadoop.hdfs.protocol.UnresolvedPathException;
 import org.apache.hadoop.hdfs.protocol.datatransfer.BlockConstructionStage;
 import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtocol;
+import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil;
 import org.apache.hadoop.hdfs.protocol.datatransfer.IOStreamPair;
 import org.apache.hadoop.hdfs.protocol.datatransfer.InvalidEncryptionKeyException;
 import org.apache.hadoop.hdfs.protocol.datatransfer.PacketHeader;
@@ -1469,16 +1470,10 @@ public class DFSOutputStream extends FSOutputSummer
             checkRestart = true;
             throw new IOException("A datanode is restarting.");
           }
-          if (pipelineStatus != SUCCESS) {
-            if (pipelineStatus == Status.ERROR_ACCESS_TOKEN) {
-              throw new InvalidBlockTokenException(
-                  "Got access token error for connect ack with firstBadLink as "
-                      + firstBadLink);
-            } else {
-              throw new IOException("Bad connect ack with firstBadLink as "
-                  + firstBadLink);
-            }
-          }
+
+          String logInfo = "ack with firstBadLink as " + firstBadLink;
+          DataTransferProtoUtil.checkBlockOpStatus(resp, logInfo);
+
           assert null == blockStream : "Previous blockStream unclosed";
           blockStream = out;
           result =  true; // success

http://git-wip-us.apache.org/repos/asf/hadoop/blob/67ed5934/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
index 3f133b6..9245a84 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
@@ -45,7 +45,6 @@ import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ReadOpChecksumIn
 import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.Status;
 import org.apache.hadoop.hdfs.protocolPB.PBHelper;
 import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
-import org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException;
 import org.apache.hadoop.hdfs.server.datanode.CachingStrategy;
 import org.apache.hadoop.hdfs.shortcircuit.ClientMmap;
 import org.apache.hadoop.net.NetUtils;
@@ -448,22 +447,13 @@ public class RemoteBlockReader2  implements BlockReader {
       BlockOpResponseProto status, Peer peer,
       ExtendedBlock block, String file)
       throws IOException {
-    if (status.getStatus() != Status.SUCCESS) {
-      if (status.getStatus() == Status.ERROR_ACCESS_TOKEN) {
-        throw new InvalidBlockTokenException(
-            "Got access token error for OP_READ_BLOCK, self="
-                + peer.getLocalAddressString() + ", remote="
-                + peer.getRemoteAddressString() + ", for file " + file
-                + ", for pool " + block.getBlockPoolId() + " block " 
-                + block.getBlockId() + "_" + block.getGenerationStamp());
-      } else {
-        throw new IOException("Got error for OP_READ_BLOCK, self="
-            + peer.getLocalAddressString() + ", remote="
-            + peer.getRemoteAddressString() + ", for file " + file
-            + ", for pool " + block.getBlockPoolId() + " block " 
-            + block.getBlockId() + "_" + block.getGenerationStamp());
-      }
-    }
+    String logInfo = "for OP_READ_BLOCK"
+      + ", self=" + peer.getLocalAddressString()
+      + ", remote=" + peer.getRemoteAddressString()
+      + ", for file " + file
+      + ", for pool " + block.getBlockPoolId()
+      + " block " + block.getBlockId() + "_" + block.getGenerationStamp();
+    DataTransferProtoUtil.checkBlockOpStatus(status, logInfo);
   }
   
   @Override

http://git-wip-us.apache.org/repos/asf/hadoop/blob/67ed5934/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
index 2ef3c3f..284281a 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/DataTransferProtoUtil.java
@@ -17,11 +17,16 @@
  */
 package org.apache.hadoop.hdfs.protocol.datatransfer;
 
+import java.io.IOException;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.hdfs.net.Peer;
+import org.apache.hadoop.hdfs.protocol.DatanodeID;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BaseHeaderProto;
+import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.BlockOpResponseProto;
+import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.Status;
 import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ChecksumProto;
 import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.ClientOperationHeaderProto;
 import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto;
@@ -29,6 +34,7 @@ import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.DataTransferTrac
 import org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.ChecksumTypeProto;
 import org.apache.hadoop.hdfs.protocolPB.PBHelper;
 import org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier;
+import org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.util.DataChecksum;
 import org.apache.htrace.Span;
@@ -119,4 +125,24 @@ public abstract class DataTransferProtoUtil {
     }
     return scope;
   }
+
+  public static void checkBlockOpStatus(
+          BlockOpResponseProto response,
+          String logInfo) throws IOException {
+    if (response.getStatus() != Status.SUCCESS) {
+      if (response.getStatus() == Status.ERROR_ACCESS_TOKEN) {
+        throw new InvalidBlockTokenException(
+          "Got access token error"
+          + ", status message " + response.getMessage()
+          + ", " + logInfo
+        );
+      } else {
+        throw new IOException(
+          "Got error"
+          + ", status message " + response.getMessage()
+          + ", " + logInfo
+        );
+      }
+    }
+  }
 }

http://git-wip-us.apache.org/repos/asf/hadoop/blob/67ed5934/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
index fa17cac..a3fd251 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Dispatcher.java
@@ -53,6 +53,7 @@ import org.apache.hadoop.hdfs.protocol.Block;
 import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
 import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;
+import org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil;
 import org.apache.hadoop.hdfs.protocol.datatransfer.IOStreamPair;
 import org.apache.hadoop.hdfs.protocol.datatransfer.Sender;
 import org.apache.hadoop.hdfs.protocol.datatransfer.TrustedChannelResolver;
@@ -357,12 +358,8 @@ public class Dispatcher {
         // read intermediate responses
         response = BlockOpResponseProto.parseFrom(vintPrefixed(in));
       }
-      if (response.getStatus() != Status.SUCCESS) {
-        if (response.getStatus() == Status.ERROR_ACCESS_TOKEN) {
-          throw new IOException("block move failed due to access token error");
-        }
-        throw new IOException("block move is failed: " + response.getMessage());
-      }
+      String logInfo = "block move is failed";
+      DataTransferProtoUtil.checkBlockOpStatus(response, logInfo);
     }
 
     /** reset the object */

http://git-wip-us.apache.org/repos/asf/hadoop/blob/67ed5934/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
index 6a2250f..e9547a8 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java
@@ -1116,16 +1116,10 @@ class DataXceiver extends Receiver implements Runnable {
         BlockOpResponseProto copyResponse = BlockOpResponseProto.parseFrom(
             PBHelper.vintPrefixed(proxyReply));
         
-        if (copyResponse.getStatus() != SUCCESS) {
-          if (copyResponse.getStatus() == ERROR_ACCESS_TOKEN) {
-            throw new IOException("Copy block " + block + " from "
-                + proxySock.getRemoteSocketAddress()
-                + " failed due to access token error");
-          }
-          throw new IOException("Copy block " + block + " from "
-              + proxySock.getRemoteSocketAddress() + " failed");
-        }
-        
+        String logInfo = "copy block " + block + " from "
+            + proxySock.getRemoteSocketAddress();
+        DataTransferProtoUtil.checkBlockOpStatus(copyResponse, logInfo);
+
         // get checksum info about the block we're copying
         ReadOpChecksumInfoProto checksumInfo = copyResponse.getReadOpChecksumInfo();
         DataChecksum remoteChecksum = DataTransferProtoUtil.fromProto(
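
With this change, every call site reduces to the same short pattern; a
representative sketch (identifiers mirror the DFSClient hunk above):

  BlockOpResponseProto reply =
      BlockOpResponseProto.parseFrom(PBHelper.vintPrefixed(in));
  String logInfo = "for block " + block + " from datanode " + datanodes[j];
  // Throws InvalidBlockTokenException or IOException, embedding both the
  // server-supplied status message and the caller's context string:
  DataTransferProtoUtil.checkBlockOpStatus(reply, logInfo);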


[32/43] hadoop git commit: HDFS-7871. NameNodeEditLogRoller can keep printing 'Swallowing exception' message. Contributed by Jing Zhao.

Posted by zj...@apache.org.
HDFS-7871. NameNodeEditLogRoller can keep printing 'Swallowing exception' message. Contributed by Jing Zhao.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b442aeec
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b442aeec
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b442aeec

Branch: refs/heads/YARN-2928
Commit: b442aeec95abfa1c6f835a116dfe6e186b0d841d
Parents: b18d383
Author: Jing Zhao <ji...@apache.org>
Authored: Mon Mar 2 20:22:04 2015 -0800
Committer: Jing Zhao <ji...@apache.org>
Committed: Mon Mar 2 20:22:04 2015 -0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt                  | 3 +++
 .../org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java | 8 +++++---
 2 files changed, 8 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b442aeec/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 52e5d3c..fe78097 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -1071,6 +1071,9 @@ Release 2.7.0 - UNRELEASED
     HDFS-7785. Improve diagnostics information for HttpPutFailedException.
     (Chengbing Liu via wheat9)
 
+    HDFS-7871. NameNodeEditLogRoller can keep printing "Swallowing exception"
+    message. (jing9)
+
     BREAKDOWN OF HDFS-7584 SUBTASKS AND RELATED JIRAS
 
       HDFS-7720. Quota by Storage Type API, tools and ClientNameNode

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b442aeec/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
index 7cd194e..d2b48f3 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
@@ -4558,14 +4558,16 @@ public class FSNamesystem implements Namesystem, FSNamesystemMBean,
                 + rollThreshold);
             rollEditLog();
           }
+        } catch (Exception e) {
+          FSNamesystem.LOG.error("Swallowing exception in "
+              + NameNodeEditLogRoller.class.getSimpleName() + ":", e);
+        }
+        try {
           Thread.sleep(sleepIntervalMs);
         } catch (InterruptedException e) {
           FSNamesystem.LOG.info(NameNodeEditLogRoller.class.getSimpleName()
               + " was interrupted, exiting");
           break;
-        } catch (Exception e) {
-          FSNamesystem.LOG.error("Swallowing exception in "
-              + NameNodeEditLogRoller.class.getSimpleName() + ":", e);
         }
       }
     }


[02/43] hadoop git commit: recommit "HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir." (cherry picked from commit 7c6b6547eeed110e1a842e503bfd33afe04fa814)

Posted by zj...@apache.org.
recommit "HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir."
(cherry picked from commit 7c6b6547eeed110e1a842e503bfd33afe04fa814)

Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/cf51ff2f
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/cf51ff2f
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/cf51ff2f

Branch: refs/heads/YARN-2928
Commit: cf51ff2fe8f0f08060dd1a9d96dac0c032277f77
Parents: 8719cdd
Author: Tsz-Wo Nicholas Sze <sz...@hortonworks.com>
Authored: Tue Feb 10 17:48:57 2015 -0800
Committer: Konstantin V Shvachko <sh...@apache.org>
Committed: Fri Feb 27 14:30:41 2015 -0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt                      | 3 +++
 .../hadoop-hdfs/src/test/resources/testHDFSConf.xml              | 4 ++--
 2 files changed, 5 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf51ff2f/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index b4b0087..2a8da43 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -981,6 +981,9 @@ Release 2.7.0 - UNRELEASED
     HDFS-7714. Simultaneous restart of HA NameNodes and DataNode can cause
     DataNode to register successfully with only one NameNode.(vinayakumarb)
 
+    HDFS-7769. TestHDFSCLI should not create files in hdfs project root dir.
+    (szetszwo)
+
     HDFS-7753. Fix Multithreaded correctness Warnings in BackupImage.
     (Rakesh R and shv)
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/cf51ff2f/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
index e59b05a..2d3de1f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml
@@ -16483,8 +16483,8 @@
         <command>-fs NAMENODE -mkdir -p /user/USERNAME/dir1</command>
         <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data15bytes /user/USERNAME/dir1</command>
         <command>-fs NAMENODE -copyFromLocal CLITEST_DATA/data30bytes /user/USERNAME/dir1</command>
-        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 data</command>
-        <command>-cat data</command>
+        <command>-fs NAMENODE -getmerge /user/USERNAME/dir1 CLITEST_DATA/file</command>
+        <command>-cat CLITEST_DATA/file</command>
       </test-commands>
       <cleanup-commands>
         <command>-fs NAMENODE -rm -r /user/USERNAME</command>
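
For context: -getmerge resolves a relative local destination against the process working directory, which is why the bare "data" argument left files in the hdfs project root when the suite ran; pointing the destination at CLITEST_DATA keeps the output under the test data directory. A hedged sketch of the programmatic equivalent in Hadoop 2.x, using FileUtil.copyMerge (the paths are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class GetMergeSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);
        FileSystem local = FileSystem.getLocal(conf);
        // merge every file under the HDFS directory into one local file;
        // an absolute destination avoids writing into the working directory
        FileUtil.copyMerge(hdfs, new Path("/user/test/dir1"),
            local, new Path("/tmp/clitest/file"),
            false /* keep sources */, conf, null /* no separator string */);
      }
    }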


[18/43] hadoop git commit: HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml (aajisaka)

Posted by zj...@apache.org.
HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to hdfs-default.xml (aajisaka)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/aa55fd30
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/aa55fd30
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/aa55fd30

Branch: refs/heads/YARN-2928
Commit: aa55fd3096442f186aebc5a767d7e271b7224b51
Parents: dbc9b64
Author: Akira Ajisaka <aa...@apache.org>
Authored: Sun Mar 1 01:16:36 2015 -0800
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Sun Mar 1 01:16:36 2015 -0800

----------------------------------------------------------------------
 hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt              |  3 +++
 .../hadoop-hdfs/src/main/resources/hdfs-default.xml      | 11 +++++++++++
 2 files changed, 14 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa55fd30/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
index 16fe394..ce35ea2 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
+++ b/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
@@ -691,6 +691,9 @@ Release 2.7.0 - UNRELEASED
     HDFS-7685. Document dfs.namenode.heartbeat.recheck-interval in
     hdfs-default.xml. (Kai Sasaki via aajisaka)
 
+    HDFS-5853. Add "hadoop.user.group.metrics.percentiles.intervals" to
+    hdfs-default.xml. (aajisaka)
+
   OPTIMIZATIONS
 
     HDFS-7454. Reduce memory footprint for AclEntries in NameNode.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/aa55fd30/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
----------------------------------------------------------------------
diff --git a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
index 66fe86c..7eacfc5 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
@@ -1518,6 +1518,17 @@
 </property>
 
 <property>
+  <name>hadoop.user.group.metrics.percentiles.intervals</name>
+  <value></value>
+  <description>
+    A comma-separated list of the granularity in seconds for the metrics
+    which describe the 50/75/90/95/99th percentile latency for group resolution
+    in milliseconds.
+    By default, percentile latency metrics are disabled.
+  </description>
+</property>
+
+<property>
   <name>dfs.encrypt.data.transfer</name>
   <value>false</value>
   <description>
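
For context on the property added above: each value in the list is the length, in seconds, of a rolling window over which the percentile estimates are computed, and leaving the value empty keeps the metrics off. A hedged sketch of how such a comma-separated intervals property is typically consumed (illustrative wiring, not the exact Hadoop code path):

    import org.apache.hadoop.conf.Configuration;

    public class PercentileIntervalsSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // e.g. report group-resolution latency percentiles over 1m and 5m windows
        conf.set("hadoop.user.group.metrics.percentiles.intervals", "60,300");
        for (int interval : conf.getInts(
            "hadoop.user.group.metrics.percentiles.intervals")) {
          // in the metrics2 framework each interval would back a rolling
          // estimator snapshotting the 50/75/90/95/99th percentiles per window
          System.out.println("percentile window: " + interval + "s");
        }
      }
    }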


[43/43] hadoop git commit: YARN-3210. Refactored timeline aggregator according to new code organization proposed in YARN-3166. Contributed by Li Lu.

Posted by zj...@apache.org.
YARN-3210. Refactored timeline aggregator according to new code organization proposed in YARN-3166. Contributed by Li Lu.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/d3ff7f06
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/d3ff7f06
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/d3ff7f06

Branch: refs/heads/YARN-2928
Commit: d3ff7f06cbc66d3a23c2551e7d4c752689f46afe
Parents: e4d81eb
Author: Zhijie Shen <zj...@apache.org>
Authored: Tue Mar 3 11:21:03 2015 -0800
Committer: Zhijie Shen <zj...@apache.org>
Committed: Tue Mar 3 11:25:17 2015 -0800

----------------------------------------------------------------------
 hadoop-yarn-project/CHANGES.txt                 |   3 +
 .../distributedshell/TestDistributedShell.java  |   4 +-
 .../hadoop-yarn-server-nodemanager/pom.xml      |   5 -
 .../server/nodemanager/webapp/WebServer.java    |   3 -
 .../TestTimelineServiceClientIntegration.java   |  12 +-
 .../aggregator/AppLevelAggregatorService.java   |  57 ----
 .../aggregator/AppLevelServiceManager.java      | 136 ----------
 .../AppLevelServiceManagerProvider.java         |  33 ---
 .../aggregator/AppLevelTimelineAggregator.java  |  57 ++++
 .../aggregator/BaseAggregatorService.java       | 107 --------
 .../aggregator/PerNodeAggregatorServer.java     | 268 -------------------
 .../aggregator/PerNodeAggregatorWebService.java | 180 -------------
 .../PerNodeTimelineAggregatorsAuxService.java   | 212 +++++++++++++++
 .../aggregator/TimelineAggregator.java          | 107 ++++++++
 .../TimelineAggregatorWebService.java           | 180 +++++++++++++
 .../TimelineAggregatorsCollection.java          | 203 ++++++++++++++
 .../TestAppLevelAggregatorService.java          |  23 --
 .../aggregator/TestAppLevelServiceManager.java  | 102 -------
 .../TestAppLevelTimelineAggregator.java         |  23 ++
 .../aggregator/TestBaseAggregatorService.java   |  23 --
 .../aggregator/TestPerNodeAggregatorServer.java | 149 -----------
 ...estPerNodeTimelineAggregatorsAuxService.java | 150 +++++++++++
 .../aggregator/TestTimelineAggregator.java      |  23 ++
 .../TestTimelineAggregatorsCollection.java      | 108 ++++++++
 24 files changed, 1074 insertions(+), 1094 deletions(-)
----------------------------------------------------------------------
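
The diffstat reads as a set of renames plus a consolidation: BaseAggregatorService becomes TimelineAggregator, AppLevelAggregatorService becomes AppLevelTimelineAggregator, PerNodeAggregatorServer becomes PerNodeTimelineAggregatorsAuxService, PerNodeAggregatorWebService becomes TimelineAggregatorWebService, and the AppLevelServiceManager/Provider pair is replaced by TimelineAggregatorsCollection. The new addApplication (see the aux-service hunk below) relies on a put-if-absent check; a hedged sketch of that idiom, with ConcurrentHashMap standing in for TimelineAggregatorsCollection:

    import java.util.concurrent.ConcurrentHashMap;

    public class PutIfAbsentSketch {
      private final ConcurrentHashMap<String, Object> aggregators =
          new ConcurrentHashMap<>();

      public boolean addApplication(String appId) {
        Object candidate = new Object();  // stands in for an app-level aggregator
        Object previous = aggregators.putIfAbsent(appId, candidate);
        // the patch's putIfAbsent returns whichever aggregator ends up mapped,
        // so comparing by reference tells the caller whether its instance won
        Object mapped = (previous == null) ? candidate : previous;
        return mapped == candidate;
      }
    }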


http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/CHANGES.txt b/hadoop-yarn-project/CHANGES.txt
index b13475a..0548460 100644
--- a/hadoop-yarn-project/CHANGES.txt
+++ b/hadoop-yarn-project/CHANGES.txt
@@ -23,6 +23,9 @@ Branch YARN-2928: Timeline Server Next Generation: Phase 1
     YARN-3125. Made the distributed shell use timeline service next gen and
     add an integration test for it. (Junping Du and Li Lu via zjshen)
 
+    YARN-3210. Refactored timeline aggregator according to new code
+    organization proposed in YARN-3166. (Li Lu via zjshen)
+
   IMPROVEMENTS
 
   OPTIMIZATIONS

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
index 71466cb..313dc97 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
@@ -49,7 +49,7 @@ import org.apache.hadoop.yarn.client.api.YarnClient;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.server.MiniYARNCluster;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
-import org.apache.hadoop.yarn.server.timelineservice.aggregator.PerNodeAggregatorServer;
+import org.apache.hadoop.yarn.server.timelineservice.aggregator.PerNodeTimelineAggregatorsAuxService;
 import org.junit.After;
 import org.junit.Assert;
 import org.junit.Before;
@@ -96,7 +96,7 @@ public class TestDistributedShell {
       // enable aux-service based timeline aggregators
       conf.set(YarnConfiguration.NM_AUX_SERVICES, TIMELINE_AUX_SERVICE_NAME);
       conf.set(YarnConfiguration.NM_AUX_SERVICES + "." + TIMELINE_AUX_SERVICE_NAME
-        + ".class", PerNodeAggregatorServer.class.getName());
+        + ".class", PerNodeTimelineAggregatorsAuxService.class.getName());
     }
     conf.set(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class.getName());
     conf.setBoolean(YarnConfiguration.NODE_LABELS_ENABLED, true);
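
For context: the hunk above only re-points the aux-service class name used to register the per-node aggregator with the node manager. A hedged sketch of the same wiring in isolation (the "timeline_aggregator" key is assumed to match the test's TIMELINE_AUX_SERVICE_NAME constant):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;
    import org.apache.hadoop.yarn.server.timelineservice.aggregator.PerNodeTimelineAggregatorsAuxService;

    public class AuxServiceWiringSketch {
      public static Configuration configure() {
        Configuration conf = new YarnConfiguration();
        String name = "timeline_aggregator";  // assumed service key
        conf.set(YarnConfiguration.NM_AUX_SERVICES, name);
        conf.set(YarnConfiguration.NM_AUX_SERVICES + "." + name + ".class",
            PerNodeTimelineAggregatorsAuxService.class.getName());
        return conf;
      }
    }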

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
index 26a33b4..b1efa5f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/pom.xml
@@ -53,11 +53,6 @@
       <artifactId>hadoop-yarn-api</artifactId>
     </dependency>
     <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-yarn-server-timelineservice</artifactId>
-      <version>${project.version}</version>
-    </dependency>
-    <dependency>
       <groupId>javax.xml.bind</groupId>
       <artifactId>jaxb-api</artifactId>
     </dependency>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
index 77deaed..fdff480 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
@@ -29,9 +29,6 @@ import org.apache.hadoop.yarn.server.nodemanager.Context;
 import org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService;
 import org.apache.hadoop.yarn.server.nodemanager.ResourceView;
 import org.apache.hadoop.yarn.server.security.ApplicationACLsManager;
-import org.apache.hadoop.yarn.server.timelineservice.aggregator.AppLevelServiceManager;
-import org.apache.hadoop.yarn.server.timelineservice.aggregator.AppLevelServiceManagerProvider;
-import org.apache.hadoop.yarn.server.timelineservice.aggregator.PerNodeAggregatorWebService;
 import org.apache.hadoop.yarn.webapp.GenericExceptionHandler;
 import org.apache.hadoop.yarn.webapp.WebApp;
 import org.apache.hadoop.yarn.webapp.WebApps;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/TestTimelineServiceClientIntegration.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/TestTimelineServiceClientIntegration.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/TestTimelineServiceClientIntegration.java
index a5159a2..32ee5d8 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/TestTimelineServiceClientIntegration.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/timelineservice/TestTimelineServiceClientIntegration.java
@@ -6,7 +6,7 @@ import org.apache.hadoop.yarn.api.records.ApplicationId;
 import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
 import org.apache.hadoop.yarn.client.api.TimelineClient;
 import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.server.timelineservice.aggregator.PerNodeAggregatorServer;
+import org.apache.hadoop.yarn.server.timelineservice.aggregator.PerNodeTimelineAggregatorsAuxService;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
@@ -14,13 +14,13 @@ import org.junit.Test;
 import static org.junit.Assert.fail;
 
 public class TestTimelineServiceClientIntegration {
-  private static PerNodeAggregatorServer server;
+  private static PerNodeTimelineAggregatorsAuxService auxService;
 
   @BeforeClass
   public static void setupClass() throws Exception {
     try {
-      server = PerNodeAggregatorServer.launchServer(new String[0]);
-      server.addApplication(ApplicationId.newInstance(0, 1));
+      auxService = PerNodeTimelineAggregatorsAuxService.launchServer(new String[0]);
+      auxService.addApplication(ApplicationId.newInstance(0, 1));
     } catch (ExitUtil.ExitException e) {
       fail();
     }
@@ -28,8 +28,8 @@ public class TestTimelineServiceClientIntegration {
 
   @AfterClass
   public static void tearDownClass() throws Exception {
-    if (server != null) {
-      server.stop();
+    if (auxService != null) {
+      auxService.stop();
     }
   }
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelAggregatorService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelAggregatorService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelAggregatorService.java
deleted file mode 100644
index bf72fb9..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelAggregatorService.java
+++ /dev/null
@@ -1,57 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.conf.Configuration;
-
-/**
- * Service that handles writes to the timeline service and writes them to the
- * backing storage for a given YARN application.
- *
- * App-related lifecycle management is handled by this service.
- */
-@Private
-@Unstable
-public class AppLevelAggregatorService extends BaseAggregatorService {
-  private final String applicationId;
-  // TODO define key metadata such as flow metadata, user, and queue
-
-  public AppLevelAggregatorService(String applicationId) {
-    super(AppLevelAggregatorService.class.getName() + " - " + applicationId);
-    this.applicationId = applicationId;
-  }
-
-  @Override
-  protected void serviceInit(Configuration conf) throws Exception {
-    super.serviceInit(conf);
-  }
-
-  @Override
-  protected void serviceStart() throws Exception {
-    super.serviceStart();
-  }
-
-  @Override
-  protected void serviceStop() throws Exception {
-    super.serviceStop();
-  }
-
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelServiceManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelServiceManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelServiceManager.java
deleted file mode 100644
index 05d321f..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelServiceManager.java
+++ /dev/null
@@ -1,136 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.Map;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.service.CompositeService;
-import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
-
-/**
- * Class that manages adding and removing app level aggregator services and
- * their lifecycle. It provides thread safety access to the app level services.
- *
- * It is a singleton, and instances should be obtained via
- * {@link #getInstance()}.
- */
-@Private
-@Unstable
-public class AppLevelServiceManager extends CompositeService {
-  private static final Log LOG =
-      LogFactory.getLog(AppLevelServiceManager.class);
-  private static final AppLevelServiceManager INSTANCE =
-      new AppLevelServiceManager();
-
-  // access to this map is synchronized with the map itself
-  private final Map<String,AppLevelAggregatorService> services =
-      Collections.synchronizedMap(
-          new HashMap<String,AppLevelAggregatorService>());
-
-  static AppLevelServiceManager getInstance() {
-    return INSTANCE;
-  }
-
-  AppLevelServiceManager() {
-    super(AppLevelServiceManager.class.getName());
-  }
-
-  /**
-   * Creates and adds an app level aggregator service for the specified
-   * application id. The service is also initialized and started. If the service
-   * already exists, no new service is created.
-   *
-   * @throws YarnRuntimeException if there was any exception in initializing and
-   * starting the app level service
-   * @return whether it was added successfully
-   */
-  public boolean addService(String appId) {
-    synchronized (services) {
-      AppLevelAggregatorService service = services.get(appId);
-      if (service == null) {
-        try {
-          service = new AppLevelAggregatorService(appId);
-          // initialize, start, and add it to the parent service so it can be
-          // cleaned up when the parent shuts down
-          service.init(getConfig());
-          service.start();
-          services.put(appId, service);
-          LOG.info("the application aggregator service for " + appId +
-              " was added");
-          return true;
-        } catch (Exception e) {
-          throw new YarnRuntimeException(e);
-        }
-      } else {
-        String msg = "the application aggregator service for " + appId +
-            " already exists!";
-        LOG.error(msg);
-        return false;
-      }
-    }
-  }
-
-  /**
-   * Removes the app level aggregator service for the specified application id.
-   * The service is also stopped as a result. If the service does not exist, no
-   * change is made.
-   *
-   * @return whether it was removed successfully
-   */
-  public boolean removeService(String appId) {
-    synchronized (services) {
-      AppLevelAggregatorService service = services.remove(appId);
-      if (service == null) {
-        String msg = "the application aggregator service for " + appId +
-            " does not exist!";
-        LOG.error(msg);
-        return false;
-      } else {
-        // stop the service to do clean up
-        service.stop();
-        LOG.info("the application aggregator service for " + appId +
-            " was removed");
-        return true;
-      }
-    }
-  }
-
-  /**
-   * Returns the app level aggregator service for the specified application id.
-   *
-   * @return the app level aggregator service or null if it does not exist
-   */
-  public AppLevelAggregatorService getService(String appId) {
-    return services.get(appId);
-  }
-
-  /**
-   * Returns whether the app level aggregator service for the specified
-   * application id exists.
-   */
-  public boolean hasService(String appId) {
-    return services.containsKey(appId);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelServiceManagerProvider.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelServiceManagerProvider.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelServiceManagerProvider.java
deleted file mode 100644
index 8768575..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelServiceManagerProvider.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-import com.google.inject.Provider;
-
-/**
- * A guice provider that provides a global singleton instance of
- * AppLevelServiceManager.
- */
-public class AppLevelServiceManagerProvider
-    implements Provider<AppLevelServiceManager> {
-  @Override
-  public AppLevelServiceManager get() {
-    return AppLevelServiceManager.getInstance();
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelTimelineAggregator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelTimelineAggregator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelTimelineAggregator.java
new file mode 100644
index 0000000..95ec9f8
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/AppLevelTimelineAggregator.java
@@ -0,0 +1,57 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.aggregator;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.conf.Configuration;
+
+/**
+ * Service that handles writes to the timeline service and writes them to the
+ * backing storage for a given YARN application.
+ *
+ * App-related lifecycle management is handled by this service.
+ */
+@Private
+@Unstable
+public class AppLevelTimelineAggregator extends TimelineAggregator {
+  private final String applicationId;
+  // TODO define key metadata such as flow metadata, user, and queue
+
+  public AppLevelTimelineAggregator(String applicationId) {
+    super(AppLevelTimelineAggregator.class.getName() + " - " + applicationId);
+    this.applicationId = applicationId;
+  }
+
+  @Override
+  protected void serviceInit(Configuration conf) throws Exception {
+    super.serviceInit(conf);
+  }
+
+  @Override
+  protected void serviceStart() throws Exception {
+    super.serviceStart();
+  }
+
+  @Override
+  protected void serviceStop() throws Exception {
+    super.serviceStop();
+  }
+
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/BaseAggregatorService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/BaseAggregatorService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/BaseAggregatorService.java
deleted file mode 100644
index e362139..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/BaseAggregatorService.java
+++ /dev/null
@@ -1,107 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.security.UserGroupInformation;
-import org.apache.hadoop.service.CompositeService;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
-
-/**
- * Service that handles writes to the timeline service and writes them to the
- * backing storage.
- *
- * Classes that extend this can add their own lifecycle management or
- * customization of request handling.
- */
-@Private
-@Unstable
-public class BaseAggregatorService extends CompositeService {
-  private static final Log LOG = LogFactory.getLog(BaseAggregatorService.class);
-
-  public BaseAggregatorService(String name) {
-    super(name);
-  }
-
-  @Override
-  protected void serviceInit(Configuration conf) throws Exception {
-    super.serviceInit(conf);
-  }
-
-  @Override
-  protected void serviceStart() throws Exception {
-    super.serviceStart();
-  }
-
-  @Override
-  protected void serviceStop() throws Exception {
-    super.serviceStop();
-  }
-
-  /**
-   * Handles entity writes. These writes are synchronous and are written to the
-   * backing storage without buffering/batching. If any entity already exists,
-   * it results in an update of the entity.
-   *
-   * This method should be reserved for selected critical entities and events.
-   * For normal voluminous writes one should use the async method
-   * {@link #postEntitiesAsync(TimelineEntities, UserGroupInformation)}.
-   *
-   * @param entities entities to post
-   * @param callerUgi the caller UGI
-   */
-  public void postEntities(TimelineEntities entities,
-      UserGroupInformation callerUgi) {
-    // Add this output temporarily for our prototype
-    // TODO remove this after we have an actual implementation
-    LOG.info("SUCCESS - TIMELINE V2 PROTOTYPE");
-    LOG.info("postEntities(entities=" + entities + ", callerUgi=" +
-        callerUgi + ")");
-
-    // TODO implement
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("postEntities(entities=" + entities + ", callerUgi=" +
-          callerUgi + ")");
-    }
-  }
-
-  /**
-   * Handles entity writes in an asynchronous manner. The method returns as soon
-   * as validation is done. No promises are made on how quickly it will be
-   * written to the backing storage or if it will always be written to the
-   * backing storage. Multiple writes to the same entities may be batched and
-   * appropriate values updated and result in fewer writes to the backing
-   * storage.
-   *
-   * @param entities entities to post
-   * @param callerUgi the caller UGI
-   */
-  public void postEntitiesAsync(TimelineEntities entities,
-      UserGroupInformation callerUgi) {
-    // TODO implement
-    if (LOG.isDebugEnabled()) {
-      LOG.debug("postEntitiesAsync(entities=" + entities + ", callerUgi=" +
-          callerUgi + ")");
-    }
-  }
-}
\ No newline at end of file
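
The javadoc in the file deleted above draws the line between the two write paths, and the renamed TimelineAggregator keeps them: postEntities is synchronous and reserved for selected critical entities, while postEntitiesAsync may buffer and batch. A hedged usage sketch (the aggregator instance and the empty entity set are illustrative):

    import org.apache.hadoop.security.UserGroupInformation;
    import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
    import org.apache.hadoop.yarn.server.timelineservice.aggregator.TimelineAggregator;

    public class WritePathsSketch {
      static void post(TimelineAggregator aggregator) throws Exception {
        TimelineEntities entities = new TimelineEntities();
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        aggregator.postEntities(entities, ugi);       // critical events: synchronous
        aggregator.postEntitiesAsync(entities, ugi);  // voluminous writes: may batch
      }
    }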

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeAggregatorServer.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeAggregatorServer.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeAggregatorServer.java
deleted file mode 100644
index deb21c7..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeAggregatorServer.java
+++ /dev/null
@@ -1,268 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-import java.net.URI;
-import java.nio.ByteBuffer;
-import java.util.HashMap;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.http.lib.StaticUserWebFilter;
-import org.apache.hadoop.util.ExitUtil;
-import org.apache.hadoop.util.ShutdownHookManager;
-import org.apache.hadoop.util.StringUtils;
-import org.apache.hadoop.yarn.YarnUncaughtExceptionHandler;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
-import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
-import org.apache.hadoop.yarn.server.api.ApplicationInitializationContext;
-import org.apache.hadoop.yarn.server.api.ApplicationTerminationContext;
-import org.apache.hadoop.yarn.server.api.AuxiliaryService;
-import org.apache.hadoop.yarn.server.api.ContainerContext;
-import org.apache.hadoop.yarn.server.api.ContainerInitializationContext;
-import org.apache.hadoop.yarn.server.api.ContainerTerminationContext;
-import org.apache.hadoop.yarn.webapp.GenericExceptionHandler;
-import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider;
-import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
-import org.apache.hadoop.http.HttpServer2;
-
-import com.google.common.annotations.VisibleForTesting;
-
-import static org.apache.hadoop.fs.CommonConfigurationKeys.DEFAULT_HADOOP_HTTP_STATIC_USER;
-import static org.apache.hadoop.fs.CommonConfigurationKeys.HADOOP_HTTP_STATIC_USER;
-
-/**
- * The top-level server for the per-node timeline aggregator service. Currently
- * it is defined as an auxiliary service to accommodate running within another
- * daemon (e.g. node manager).
- */
-@Private
-@Unstable
-public class PerNodeAggregatorServer extends AuxiliaryService {
-  private static final Log LOG =
-      LogFactory.getLog(PerNodeAggregatorServer.class);
-  private static final int SHUTDOWN_HOOK_PRIORITY = 30;
-  static final String AGGREGATOR_COLLECTION_ATTR_KEY = "aggregator.collection";
-
-  private final AppLevelServiceManager serviceManager;
-  private HttpServer2 timelineRestServer;
-
-  public PerNodeAggregatorServer() {
-    // use the same singleton
-    this(AppLevelServiceManager.getInstance());
-  }
-
-  @VisibleForTesting
-  PerNodeAggregatorServer(AppLevelServiceManager serviceManager) {
-    super("timeline_aggregator");
-    this.serviceManager = serviceManager;
-  }
-
-  @Override
-  protected void serviceInit(Configuration conf) throws Exception {
-    serviceManager.init(conf);
-    super.serviceInit(conf);
-  }
-
-  @Override
-  protected void serviceStart() throws Exception {
-    super.serviceStart();
-    serviceManager.start();
-    startWebApp();
-  }
-
-  @Override
-  protected void serviceStop() throws Exception {
-    if (timelineRestServer != null) {
-      timelineRestServer.stop();
-    }
-    // stop the service manager
-    serviceManager.stop();
-    super.serviceStop();
-  }
-
-  private void startWebApp() {
-    Configuration conf = getConfig();
-    // use the same ports as the old ATS for now; we could create new properties
-    // for the new timeline service if needed
-    String bindAddress = WebAppUtils.getWebAppBindURL(conf,
-                          YarnConfiguration.TIMELINE_SERVICE_BIND_HOST,
-                          WebAppUtils.getAHSWebAppURLWithoutScheme(conf));
-    LOG.info("Instantiating the per-node aggregator webapp at " + bindAddress);
-    try {
-      Configuration confForInfoServer = new Configuration(conf);
-      confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
-      HttpServer2.Builder builder = new HttpServer2.Builder()
-          .setName("timeline")
-          .setConf(conf)
-          .addEndpoint(URI.create("http://" + bindAddress));
-      timelineRestServer = builder.build();
-      // TODO: replace this by an authentication filter in future.
-      HashMap<String, String> options = new HashMap<>();
-      String username = conf.get(HADOOP_HTTP_STATIC_USER,
-          DEFAULT_HADOOP_HTTP_STATIC_USER);
-      options.put(HADOOP_HTTP_STATIC_USER, username);
-      HttpServer2.defineFilter(timelineRestServer.getWebAppContext(),
-          "static_user_filter_timeline",
-          StaticUserWebFilter.StaticUserFilter.class.getName(),
-          options, new String[] {"/*"});
-
-      timelineRestServer.addJerseyResourcePackage(
-          PerNodeAggregatorWebService.class.getPackage().getName() + ";"
-              + GenericExceptionHandler.class.getPackage().getName() + ";"
-              + YarnJacksonJaxbJsonProvider.class.getPackage().getName(),
-          "/*");
-      timelineRestServer.setAttribute(AGGREGATOR_COLLECTION_ATTR_KEY,
-          AppLevelServiceManager.getInstance());
-      timelineRestServer.start();
-    } catch (Exception e) {
-      String msg = "The per-node aggregator webapp failed to start.";
-      LOG.error(msg, e);
-      throw new YarnRuntimeException(msg, e);
-    }
-  }
-
-  // these methods can be used as the basis for future service methods if the
-  // per-node aggregator runs separate from the node manager
-  /**
-   * Creates and adds an app level aggregator service for the specified
-   * application id. The service is also initialized and started. If the service
-   * already exists, no new service is created.
-   *
-   * @return whether it was added successfully
-   */
-  public boolean addApplication(ApplicationId appId) {
-    String appIdString = appId.toString();
-    return serviceManager.addService(appIdString);
-  }
-
-  /**
-   * Removes the app level aggregator service for the specified application id.
-   * The service is also stopped as a result. If the service does not exist, no
-   * change is made.
-   *
-   * @return whether it was removed successfully
-   */
-  public boolean removeApplication(ApplicationId appId) {
-    String appIdString = appId.toString();
-    return serviceManager.removeService(appIdString);
-  }
-
-  /**
-   * Creates and adds an app level aggregator service for the specified
-   * application id. The service is also initialized and started. If the service
-   * already exists, no new service is created.
-   */
-  @Override
-  public void initializeContainer(ContainerInitializationContext context) {
-    // intercept the event of the AM container being created and initialize the
-    // app level aggregator service
-    if (isApplicationMaster(context)) {
-      ApplicationId appId = context.getContainerId().
-          getApplicationAttemptId().getApplicationId();
-      addApplication(appId);
-    }
-  }
-
-  /**
-   * Removes the app level aggregator service for the specified application id.
-   * The service is also stopped as a result. If the service does not exist, no
-   * change is made.
-   */
-  @Override
-  public void stopContainer(ContainerTerminationContext context) {
-    // intercept the event of the AM container being stopped and remove the app
-    // level aggregator service
-    if (isApplicationMaster(context)) {
-      ApplicationId appId = context.getContainerId().
-          getApplicationAttemptId().getApplicationId();
-      removeApplication(appId);
-    }
-  }
-
-  private boolean isApplicationMaster(ContainerContext context) {
-    // TODO this is based on a (shaky) assumption that the container id (the
-    // last field of the full container id) for an AM is always 1
-    // we want to make this much more reliable
-    ContainerId containerId = context.getContainerId();
-    return containerId.getContainerId() == 1L;
-  }
-
-  @VisibleForTesting
-  boolean hasApplication(String appId) {
-    return serviceManager.hasService(appId);
-  }
-
-  @Override
-  public void initializeApplication(ApplicationInitializationContext context) {
-  }
-
-  @Override
-  public void stopApplication(ApplicationTerminationContext context) {
-  }
-
-  @Override
-  public ByteBuffer getMetaData() {
-    // TODO currently it is not used; we can return a more meaningful data when
-    // we connect it with an AM
-    return ByteBuffer.allocate(0);
-  }
-
-  @VisibleForTesting
-  public static PerNodeAggregatorServer launchServer(String[] args) {
-    Thread
-      .setDefaultUncaughtExceptionHandler(new YarnUncaughtExceptionHandler());
-    StringUtils.startupShutdownMessage(PerNodeAggregatorServer.class, args,
-        LOG);
-    PerNodeAggregatorServer server = null;
-    try {
-      server = new PerNodeAggregatorServer();
-      ShutdownHookManager.get().addShutdownHook(new ShutdownHook(server),
-          SHUTDOWN_HOOK_PRIORITY);
-      YarnConfiguration conf = new YarnConfiguration();
-      server.init(conf);
-      server.start();
-    } catch (Throwable t) {
-      LOG.fatal("Error starting PerNodeAggregatorServer", t);
-      ExitUtil.terminate(-1, "Error starting PerNodeAggregatorServer");
-    }
-    return server;
-  }
-
-  private static class ShutdownHook implements Runnable {
-    private final PerNodeAggregatorServer server;
-
-    public ShutdownHook(PerNodeAggregatorServer server) {
-      this.server = server;
-    }
-
-    public void run() {
-      server.stop();
-    }
-  }
-
-  public static void main(String[] args) {
-    launchServer(args);
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeAggregatorWebService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeAggregatorWebService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeAggregatorWebService.java
deleted file mode 100644
index ffe099e..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeAggregatorWebService.java
+++ /dev/null
@@ -1,180 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-import javax.servlet.ServletContext;
-import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
-import javax.ws.rs.*;
-import javax.ws.rs.core.Context;
-import javax.ws.rs.core.MediaType;
-import javax.ws.rs.core.Response;
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.classification.InterfaceAudience.Private;
-import org.apache.hadoop.classification.InterfaceAudience.Public;
-import org.apache.hadoop.classification.InterfaceStability.Unstable;
-import org.apache.hadoop.security.UserGroupInformation;
-import org.apache.hadoop.yarn.api.records.ApplicationId;
-import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
-import org.apache.hadoop.yarn.util.ConverterUtils;
-import org.apache.hadoop.yarn.webapp.ForbiddenException;
-import org.apache.hadoop.yarn.webapp.NotFoundException;
-
-import com.google.inject.Singleton;
-
-/**
- * The main per-node REST end point for timeline service writes. It is
- * essentially a container service that routes requests to the appropriate
- * per-app services.
- */
-@Private
-@Unstable
-@Singleton
-@Path("/ws/v2/timeline")
-public class PerNodeAggregatorWebService {
-  private static final Log LOG =
-      LogFactory.getLog(PerNodeAggregatorWebService.class);
-
-  private @Context ServletContext context;
-
-  @XmlRootElement(name = "about")
-  @XmlAccessorType(XmlAccessType.NONE)
-  @Public
-  @Unstable
-  public static class AboutInfo {
-
-    private String about;
-
-    public AboutInfo() {
-
-    }
-
-    public AboutInfo(String about) {
-      this.about = about;
-    }
-
-    @XmlElement(name = "About")
-    public String getAbout() {
-      return about;
-    }
-
-    public void setAbout(String about) {
-      this.about = about;
-    }
-
-  }
-
-  /**
-   * Return the description of the timeline web services.
-   */
-  @GET
-  @Produces({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
-  public AboutInfo about(
-      @Context HttpServletRequest req,
-      @Context HttpServletResponse res) {
-    init(res);
-    return new AboutInfo("Timeline API");
-  }
-
-  /**
-   * Accepts writes to the aggregator, and returns a response. It simply routes
-   * the request to the app level aggregator. It expects an application as a
-   * context.
-   */
-  @PUT
-  @Path("/entities")
-  @Consumes({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
-  public Response putEntities(
-      @Context HttpServletRequest req,
-      @Context HttpServletResponse res,
-      @QueryParam("async") String async,
-      @QueryParam("appid") String appId,
-      TimelineEntities entities) {
-    init(res);
-    UserGroupInformation callerUgi = getUser(req);
-    if (callerUgi == null) {
-      String msg = "The owner of the posted timeline entities is not set";
-      LOG.error(msg);
-      throw new ForbiddenException(msg);
-    }
-
-    // TODO how to express async posts and handle them
-    boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
-
-    try {
-      appId = parseApplicationId(appId);
-      if (appId == null) {
-        return Response.status(Response.Status.BAD_REQUEST).build();
-      }
-      AppLevelAggregatorService service = getAggregatorService(req, appId);
-      if (service == null) {
-        LOG.error("Application not found");
-        throw new NotFoundException(); // different exception?
-      }
-      service.postEntities(entities, callerUgi);
-      return Response.ok().build();
-    } catch (Exception e) {
-      LOG.error("Error putting entities", e);
-      throw new WebApplicationException(e,
-          Response.Status.INTERNAL_SERVER_ERROR);
-    }
-  }
-
-  private String parseApplicationId(String appId) {
-    // Make sure the appId is not null and is valid
-    ApplicationId appID;
-    try {
-      if (appId != null) {
-        return ConverterUtils.toApplicationId(appId.trim()).toString();
-      } else {
-        return null;
-      }
-    } catch (Exception e) {
-      return null;
-    }
-  }
-
-  private AppLevelAggregatorService
-      getAggregatorService(HttpServletRequest req, String appIdToParse) {
-    String appIdString = parseApplicationId(appIdToParse);
-    final AppLevelServiceManager serviceManager =
-        (AppLevelServiceManager) context.getAttribute(
-            PerNodeAggregatorServer.AGGREGATOR_COLLECTION_ATTR_KEY);
-    return serviceManager.getService(appIdString);
-  }
-
-  private void init(HttpServletResponse response) {
-    response.setContentType(null);
-  }
-
-  private UserGroupInformation getUser(HttpServletRequest req) {
-    String remoteUser = req.getRemoteUser();
-    UserGroupInformation callerUgi = null;
-    if (remoteUser != null) {
-      callerUgi = UserGroupInformation.createRemoteUser(remoteUser);
-    }
-    return callerUgi;
-  }
-}
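
Per the identical 180-line counts in the diffstat, the web service deleted above comes back as TimelineAggregatorWebService with the same annotations, so the wire contract is unchanged: a PUT of JSON entities to /ws/v2/timeline/entities with an appid query parameter. A hedged sketch of such a request from plain Java (the host, port, application id, and the empty JSON body are all illustrative assumptions):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class PutEntitiesSketch {
      public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8188/ws/v2/timeline/entities"
            + "?appid=application_1425000000000_0001&async=false");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        // assumed JSON shape for an empty TimelineEntities payload
        byte[] body = "{\"entities\":[]}".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
          out.write(body);
        }
        System.out.println("HTTP " + conn.getResponseCode());
      }
    }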

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeTimelineAggregatorsAuxService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeTimelineAggregatorsAuxService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeTimelineAggregatorsAuxService.java
new file mode 100644
index 0000000..cdc4e35
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/PerNodeTimelineAggregatorsAuxService.java
@@ -0,0 +1,212 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.aggregator;
+
+import java.nio.ByteBuffer;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.util.ExitUtil;
+import org.apache.hadoop.util.ShutdownHookManager;
+import org.apache.hadoop.util.StringUtils;
+import org.apache.hadoop.yarn.YarnUncaughtExceptionHandler;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.server.api.ApplicationInitializationContext;
+import org.apache.hadoop.yarn.server.api.ApplicationTerminationContext;
+import org.apache.hadoop.yarn.server.api.AuxiliaryService;
+import org.apache.hadoop.yarn.server.api.ContainerContext;
+import org.apache.hadoop.yarn.server.api.ContainerInitializationContext;
+import org.apache.hadoop.yarn.server.api.ContainerTerminationContext;
+
+import com.google.common.annotations.VisibleForTesting;
+
+/**
+ * The top-level server for the per-node timeline aggregator collection.
+ * Currently it is defined as an auxiliary service to accommodate running
+ * within another daemon (e.g. node manager).
+ */
+@Private
+@Unstable
+public class PerNodeTimelineAggregatorsAuxService extends AuxiliaryService {
+  private static final Log LOG =
+      LogFactory.getLog(PerNodeTimelineAggregatorsAuxService.class);
+  private static final int SHUTDOWN_HOOK_PRIORITY = 30;
+
+  private final TimelineAggregatorsCollection aggregatorCollection;
+
+  public PerNodeTimelineAggregatorsAuxService() {
+    // use the same singleton
+    this(TimelineAggregatorsCollection.getInstance());
+  }
+
+  @VisibleForTesting PerNodeTimelineAggregatorsAuxService(
+      TimelineAggregatorsCollection aggregatorCollection) {
+    super("timeline_aggregator");
+    this.aggregatorCollection = aggregatorCollection;
+  }
+
+  @Override
+  protected void serviceInit(Configuration conf) throws Exception {
+    aggregatorCollection.init(conf);
+    super.serviceInit(conf);
+  }
+
+  @Override
+  protected void serviceStart() throws Exception {
+    aggregatorCollection.start();
+    super.serviceStart();
+  }
+
+  @Override
+  protected void serviceStop() throws Exception {
+    aggregatorCollection.stop();
+    super.serviceStop();
+  }
+
+  // these methods can be used as the basis for future service methods if the
+  // per-node aggregator runs separately from the node manager
+  /**
+   * Creates and adds an app level aggregator for the specified application id.
+   * The aggregator is also initialized and started. If the aggregator already
+   * exists, no new aggregator is created.
+   *
+   * @return whether it was added successfully
+   */
+  public boolean addApplication(ApplicationId appId) {
+    String appIdString = appId.toString();
+    AppLevelTimelineAggregator aggregator =
+        new AppLevelTimelineAggregator(appIdString);
+    return (aggregatorCollection.putIfAbsent(appIdString, aggregator)
+        == aggregator);
+  }
+
+  /**
+   * Removes the app level aggregator for the specified application id. The
+   * aggregator is also stopped as a result. If the aggregator does not exist, no
+   * change is made.
+   *
+   * @return whether it was removed successfully
+   */
+  public boolean removeApplication(ApplicationId appId) {
+    String appIdString = appId.toString();
+    return aggregatorCollection.remove(appIdString);
+  }
+
+  /**
+   * Creates and adds an app level aggregator for the specified application id.
+   * The aggregator is also initialized and started. If the aggregator already
+   * exists, no new aggregator is created.
+   */
+  @Override
+  public void initializeContainer(ContainerInitializationContext context) {
+    // intercept the event of the AM container being created and initialize the
+    // app level aggregator service
+    if (isApplicationMaster(context)) {
+      ApplicationId appId = context.getContainerId().
+          getApplicationAttemptId().getApplicationId();
+      addApplication(appId);
+    }
+  }
+
+  /**
+   * Removes the app level aggregator for the specified application id. The
+   * aggregator is also stopped as a result. If the aggregator does not exist, no
+   * change is made.
+   */
+  @Override
+  public void stopContainer(ContainerTerminationContext context) {
+    // intercept the event of the AM container being stopped and remove the app
+    // level aggregator service
+    if (isApplicationMaster(context)) {
+      ApplicationId appId = context.getContainerId().
+          getApplicationAttemptId().getApplicationId();
+      removeApplication(appId);
+    }
+  }
+
+  private boolean isApplicationMaster(ContainerContext context) {
+    // TODO this is based on a (shaky) assumption that the container id (the
+    // last field of the full container id) for an AM is always 1
+    // we want to make this much more reliable
+    ContainerId containerId = context.getContainerId();
+    return containerId.getContainerId() == 1L;
+  }
+
+  @VisibleForTesting
+  boolean hasApplication(String appId) {
+    return aggregatorCollection.containsKey(appId);
+  }
+
+  @Override
+  public void initializeApplication(ApplicationInitializationContext context) {
+  }
+
+  @Override
+  public void stopApplication(ApplicationTerminationContext context) {
+  }
+
+  @Override
+  public ByteBuffer getMetaData() {
+    // TODO currently it is not used; we can return more meaningful data
+    // once we connect it with an AM
+    return ByteBuffer.allocate(0);
+  }
+
+  @VisibleForTesting
+  public static PerNodeTimelineAggregatorsAuxService launchServer(String[] args) {
+    Thread
+      .setDefaultUncaughtExceptionHandler(new YarnUncaughtExceptionHandler());
+    StringUtils.startupShutdownMessage(PerNodeTimelineAggregatorsAuxService.class, args,
+        LOG);
+    PerNodeTimelineAggregatorsAuxService auxService = null;
+    try {
+      auxService = new PerNodeTimelineAggregatorsAuxService();
+      ShutdownHookManager.get().addShutdownHook(new ShutdownHook(auxService),
+          SHUTDOWN_HOOK_PRIORITY);
+      YarnConfiguration conf = new YarnConfiguration();
+      auxService.init(conf);
+      auxService.start();
+    } catch (Throwable t) {
+      LOG.fatal("Error starting PerNodeAggregatorServer", t);
+      ExitUtil.terminate(-1, "Error starting PerNodeAggregatorServer");
+    }
+    return auxService;
+  }
+
+  private static class ShutdownHook implements Runnable {
+    private final PerNodeTimelineAggregatorsAuxService auxService;
+
+    public ShutdownHook(PerNodeTimelineAggregatorsAuxService auxService) {
+      this.auxService = auxService;
+    }
+
+    public void run() {
+      auxService.stop();
+    }
+  }
+
+  public static void main(String[] args) {
+    launchServer(args);
+  }
+}

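For illustration, a minimal driver for this aux service might look like the
following. This is a sketch only: it assumes the YARN jars are on the
classpath and reuses the launchServer()/addApplication() entry points shown
above; AuxServiceSketch and the hard-coded ApplicationId are hypothetical.

  import org.apache.hadoop.yarn.api.records.ApplicationId;
  import org.apache.hadoop.yarn.server.timelineservice.aggregator.PerNodeTimelineAggregatorsAuxService;

  public class AuxServiceSketch {
    public static void main(String[] args) {
      // Start the aux service standalone, exactly as launchServer() does.
      PerNodeTimelineAggregatorsAuxService auxService =
          PerNodeTimelineAggregatorsAuxService.launchServer(args);

      // Registering an application creates, initializes, and starts an
      // app-level aggregator; a second call with the same id is a no-op.
      ApplicationId appId =
          ApplicationId.newInstance(System.currentTimeMillis(), 1);
      System.out.println("aggregator added: "
          + auxService.addApplication(appId));

      // Removing the application stops and discards its aggregator.
      auxService.removeApplication(appId);
      auxService.stop();
    }
  }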
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregator.java
new file mode 100644
index 0000000..4227712
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregator.java
@@ -0,0 +1,107 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.aggregator;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.service.CompositeService;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
+
+/**
+ * Service that handles writes to the timeline service and writes them to the
+ * backing storage.
+ *
+ * Classes that extend this can add their own lifecycle management or
+ * customization of request handling.
+ */
+@Private
+@Unstable
+public abstract class TimelineAggregator extends CompositeService {
+  private static final Log LOG = LogFactory.getLog(TimelineAggregator.class);
+
+  public TimelineAggregator(String name) {
+    super(name);
+  }
+
+  @Override
+  protected void serviceInit(Configuration conf) throws Exception {
+    super.serviceInit(conf);
+  }
+
+  @Override
+  protected void serviceStart() throws Exception {
+    super.serviceStart();
+  }
+
+  @Override
+  protected void serviceStop() throws Exception {
+    super.serviceStop();
+  }
+
+  /**
+   * Handles entity writes. These writes are synchronous and are written to the
+   * backing storage without buffering/batching. If any entity already exists,
+   * it results in an update of the entity.
+   *
+   * This method should be reserved for selected critical entities and events.
+   * For normal voluminous writes one should use the async method
+   * {@link #postEntitiesAsync(TimelineEntities, UserGroupInformation)}.
+   *
+   * @param entities entities to post
+   * @param callerUgi the caller UGI
+   */
+  public void postEntities(TimelineEntities entities,
+      UserGroupInformation callerUgi) {
+    // Add this output temporarily for our prototype
+    // TODO remove this after we have an actual implementation
+    LOG.info("SUCCESS - TIMELINE V2 PROTOTYPE");
+    LOG.info("postEntities(entities=" + entities + ", callerUgi=" +
+        callerUgi + ")");
+
+    // TODO implement
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("postEntities(entities=" + entities + ", callerUgi=" +
+          callerUgi + ")");
+    }
+  }
+
+  /**
+   * Handles entity writes in an asynchronous manner. The method returns as soon
+   * as validation is done. No promises are made on how quickly it will be
+   * written to the backing storage or if it will always be written to the
+   * backing storage. Multiple writes to the same entities may be batched and
+   * appropriate values updated and result in fewer writes to the backing
+   * storage.
+   *
+   * @param entities entities to post
+   * @param callerUgi the caller UGI
+   */
+  public void postEntitiesAsync(TimelineEntities entities,
+      UserGroupInformation callerUgi) {
+    // TODO implement
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("postEntitiesAsync(entities=" + entities + ", callerUgi=" +
+          callerUgi + ")");
+    }
+  }
+}
\ No newline at end of file

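Since the base class above declares no abstract methods, a concrete
aggregator only needs a constructor. A minimal subclass might look like the
following sketch; the real AppLevelTimelineAggregator added elsewhere in
this commit may differ, and PerAppAggregatorSketch is a hypothetical name.

  public class PerAppAggregatorSketch extends TimelineAggregator {
    private final String applicationId;

    public PerAppAggregatorSketch(String applicationId) {
      super(PerAppAggregatorSketch.class.getName() + " - " + applicationId);
      this.applicationId = applicationId;
    }

    // serviceInit/serviceStart/serviceStop can be overridden to hook the
    // app-level lifecycle; the base class versions simply delegate upward.
  }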
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregatorWebService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregatorWebService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregatorWebService.java
new file mode 100644
index 0000000..7d42f94
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregatorWebService.java
@@ -0,0 +1,180 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.aggregator;
+
+import javax.servlet.ServletContext;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import javax.ws.rs.*;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceAudience.Public;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntities;
+import org.apache.hadoop.yarn.util.ConverterUtils;
+import org.apache.hadoop.yarn.webapp.ForbiddenException;
+import org.apache.hadoop.yarn.webapp.NotFoundException;
+
+import com.google.inject.Singleton;
+
+/**
+ * The main per-node REST end point for timeline service writes. It is
+ * essentially a container service that routes requests to the appropriate
+ * per-app aggregators.
+ */
+@Private
+@Unstable
+@Singleton
+@Path("/ws/v2/timeline")
+public class TimelineAggregatorWebService {
+  private static final Log LOG =
+      LogFactory.getLog(TimelineAggregatorWebService.class);
+
+  private @Context ServletContext context;
+
+  @XmlRootElement(name = "about")
+  @XmlAccessorType(XmlAccessType.NONE)
+  @Public
+  @Unstable
+  public static class AboutInfo {
+
+    private String about;
+
+    public AboutInfo() {
+
+    }
+
+    public AboutInfo(String about) {
+      this.about = about;
+    }
+
+    @XmlElement(name = "About")
+    public String getAbout() {
+      return about;
+    }
+
+    public void setAbout(String about) {
+      this.about = about;
+    }
+
+  }
+
+  /**
+   * Return the description of the timeline web services.
+   */
+  @GET
+  @Produces({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
+  public AboutInfo about(
+      @Context HttpServletRequest req,
+      @Context HttpServletResponse res) {
+    init(res);
+    return new AboutInfo("Timeline API");
+  }
+
+  /**
+   * Accepts writes to the aggregator, and returns a response. It simply routes
+   * the request to the app-level aggregator. It expects an application id as
+   * a query parameter.
+   */
+  @PUT
+  @Path("/entities")
+  @Consumes({ MediaType.APPLICATION_JSON /* , MediaType.APPLICATION_XML */})
+  public Response putEntities(
+      @Context HttpServletRequest req,
+      @Context HttpServletResponse res,
+      @QueryParam("async") String async,
+      @QueryParam("appid") String appId,
+      TimelineEntities entities) {
+    init(res);
+    UserGroupInformation callerUgi = getUser(req);
+    if (callerUgi == null) {
+      String msg = "The owner of the posted timeline entities is not set";
+      LOG.error(msg);
+      throw new ForbiddenException(msg);
+    }
+
+    // TODO how to express async posts and handle them
+    boolean isAsync = async != null && async.trim().equalsIgnoreCase("true");
+
+    try {
+      appId = parseApplicationId(appId);
+      if (appId == null) {
+        return Response.status(Response.Status.BAD_REQUEST).build();
+      }
+      TimelineAggregator service = getAggregatorService(req, appId);
+      if (service == null) {
+        LOG.error("Application not found");
+        throw new NotFoundException(); // different exception?
+      }
+      service.postEntities(entities, callerUgi);
+      return Response.ok().build();
+    } catch (Exception e) {
+      LOG.error("Error putting entities", e);
+      throw new WebApplicationException(e,
+          Response.Status.INTERNAL_SERVER_ERROR);
+    }
+  }
+
+  private String parseApplicationId(String appId) {
+    // Make sure the appId is not null and is valid
+    try {
+      if (appId != null) {
+        return ConverterUtils.toApplicationId(appId.trim()).toString();
+      } else {
+        return null;
+      }
+    } catch (Exception e) {
+      return null;
+    }
+  }
+
+  private TimelineAggregator
+      getAggregatorService(HttpServletRequest req, String appIdToParse) {
+    String appIdString = parseApplicationId(appIdToParse);
+    final TimelineAggregatorsCollection aggregatorCollection =
+        (TimelineAggregatorsCollection) context.getAttribute(
+            TimelineAggregatorsCollection.AGGREGATOR_COLLECTION_ATTR_KEY);
+    return aggregatorCollection.get(appIdString);
+  }
+
+  private void init(HttpServletResponse response) {
+    response.setContentType(null);
+  }
+
+  private UserGroupInformation getUser(HttpServletRequest req) {
+    String remoteUser = req.getRemoteUser();
+    UserGroupInformation callerUgi = null;
+    if (remoteUser != null) {
+      callerUgi = UserGroupInformation.createRemoteUser(remoteUser);
+    }
+    return callerUgi;
+  }
+}

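To exercise the endpoint above, a client can issue an HTTP PUT against
/ws/v2/timeline/entities. The following sketch uses plain HttpURLConnection;
the host/port (8188, the historical ATS web port), the application id, and
the empty {"entities":[]} JSON body are assumptions for illustration only.

  import java.io.OutputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.nio.charset.StandardCharsets;

  public class PutEntitiesSketch {
    public static void main(String[] args) throws Exception {
      String appId = "application_1425000000000_0001"; // hypothetical id
      URL url = new URL(
          "http://localhost:8188/ws/v2/timeline/entities?appid=" + appId);
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setRequestMethod("PUT");
      conn.setRequestProperty("Content-Type", "application/json");
      conn.setDoOutput(true);
      // An empty TimelineEntities payload, assuming the default JSON shape.
      byte[] body = "{\"entities\":[]}".getBytes(StandardCharsets.UTF_8);
      try (OutputStream out = conn.getOutputStream()) {
        out.write(body);
      }
      System.out.println("HTTP " + conn.getResponseCode());
    }
  }

Note that putEntities() rejects requests that carry no remote user; with the
static user filter the collection's web app installs, the default static
user satisfies that check.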
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregatorsCollection.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregatorsCollection.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregatorsCollection.java
new file mode 100644
index 0000000..73b6d52
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TimelineAggregatorsCollection.java
@@ -0,0 +1,203 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.aggregator;
+
+import java.net.URI;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Unstable;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.http.HttpServer2;
+import org.apache.hadoop.http.lib.StaticUserWebFilter;
+import org.apache.hadoop.service.CompositeService;
+import org.apache.hadoop.yarn.conf.YarnConfiguration;
+import org.apache.hadoop.yarn.exceptions.YarnRuntimeException;
+import org.apache.hadoop.yarn.webapp.GenericExceptionHandler;
+import org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider;
+import org.apache.hadoop.yarn.webapp.util.WebAppUtils;
+
+import static org.apache.hadoop.fs.CommonConfigurationKeys.DEFAULT_HADOOP_HTTP_STATIC_USER;
+import static org.apache.hadoop.fs.CommonConfigurationKeys.HADOOP_HTTP_STATIC_USER;
+
+/**
+ * Class that manages adding and removing aggregators and their lifecycle. It
+ * provides thread-safe access to the aggregators inside.
+ *
+ * It is a singleton, and instances should be obtained via
+ * {@link #getInstance()}.
+ */
+@Private
+@Unstable
+public class TimelineAggregatorsCollection extends CompositeService {
+  private static final Log LOG =
+      LogFactory.getLog(TimelineAggregatorsCollection.class);
+  private static final TimelineAggregatorsCollection INSTANCE =
+      new TimelineAggregatorsCollection();
+
+  // access to this map is synchronized with the map itself
+  private final Map<String, TimelineAggregator> aggregators =
+      Collections.synchronizedMap(
+          new HashMap<String, TimelineAggregator>());
+
+  // REST server for this aggregator collection
+  private HttpServer2 timelineRestServer;
+
+  static final String AGGREGATOR_COLLECTION_ATTR_KEY = "aggregator.collection";
+
+  static TimelineAggregatorsCollection getInstance() {
+    return INSTANCE;
+  }
+
+  TimelineAggregatorsCollection() {
+    super(TimelineAggregatorsCollection.class.getName());
+  }
+
+  @Override
+  protected void serviceStart() throws Exception {
+    startWebApp();
+    super.serviceStart();
+  }
+
+  @Override
+  protected void serviceStop() throws Exception {
+    if (timelineRestServer != null) {
+      timelineRestServer.stop();
+    }
+    super.serviceStop();
+  }
+
+  /**
+   * Put the aggregator into the collection if an aggregator mapped by id does
+   * not exist.
+   *
+   * @throws YarnRuntimeException if there was any exception in initializing
+   * and starting the app-level aggregator
+   * @return the aggregator associated with id after the potential put.
+   */
+  public TimelineAggregator putIfAbsent(String id, TimelineAggregator aggregator) {
+    synchronized (aggregators) {
+      TimelineAggregator aggregatorInTable = aggregators.get(id);
+      if (aggregatorInTable == null) {
+        try {
+          // initialize, start, and add it to the collection so it can be
+          // cleaned up when the parent shuts down
+          aggregator.init(getConfig());
+          aggregator.start();
+          aggregators.put(id, aggregator);
+          LOG.info("the aggregator for " + id + " was added");
+          return aggregator;
+        } catch (Exception e) {
+          throw new YarnRuntimeException(e);
+        }
+      } else {
+        String msg = "the aggregator for " + id + " already exists!";
+        LOG.error(msg);
+        return aggregatorInTable;
+      }
+    }
+  }
+
+  /**
+   * Removes the aggregator for the specified id. The aggregator is also stopped
+   * as a result. If the aggregator does not exist, no change is made.
+   *
+   * @return whether it was removed successfully
+   */
+  public boolean remove(String id) {
+    synchronized (aggregators) {
+      TimelineAggregator aggregator = aggregators.remove(id);
+      if (aggregator == null) {
+        String msg = "the aggregator for " + id + " does not exist!";
+        LOG.error(msg);
+        return false;
+      } else {
+        // stop the service to do clean up
+        aggregator.stop();
+        LOG.info("the aggregator service for " + id + " was removed");
+        return true;
+      }
+    }
+  }
+
+  /**
+   * Returns the aggregator for the specified id.
+   *
+   * @return the aggregator or null if it does not exist
+   */
+  public TimelineAggregator get(String id) {
+    return aggregators.get(id);
+  }
+
+  /**
+   * Returns whether the aggregator for the specified id exists in this
+   * collection.
+   */
+  public boolean containsKey(String id) {
+    return aggregators.containsKey(id);
+  }
+
+  /**
+   * Launch the REST web server for this aggregator collection
+   */
+  private void startWebApp() {
+    Configuration conf = getConfig();
+    // use the same ports as the old ATS for now; we could create new properties
+    // for the new timeline service if needed
+    String bindAddress = WebAppUtils.getWebAppBindURL(conf,
+        YarnConfiguration.TIMELINE_SERVICE_BIND_HOST,
+        WebAppUtils.getAHSWebAppURLWithoutScheme(conf));
+    LOG.info("Instantiating the per-node aggregator webapp at " + bindAddress);
+    try {
+      Configuration confForInfoServer = new Configuration(conf);
+      confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10);
+      HttpServer2.Builder builder = new HttpServer2.Builder()
+          .setName("timeline")
+          .setConf(conf)
+          .addEndpoint(URI.create("http://" + bindAddress));
+      timelineRestServer = builder.build();
+      // TODO: replace this by an authentication filter in future.
+      HashMap<String, String> options = new HashMap<>();
+      String username = conf.get(HADOOP_HTTP_STATIC_USER,
+          DEFAULT_HADOOP_HTTP_STATIC_USER);
+      options.put(HADOOP_HTTP_STATIC_USER, username);
+      HttpServer2.defineFilter(timelineRestServer.getWebAppContext(),
+          "static_user_filter_timeline",
+          StaticUserWebFilter.StaticUserFilter.class.getName(),
+          options, new String[] {"/*"});
+
+      timelineRestServer.addJerseyResourcePackage(
+          TimelineAggregatorWebService.class.getPackage().getName() + ";"
+              + GenericExceptionHandler.class.getPackage().getName() + ";"
+              + YarnJacksonJaxbJsonProvider.class.getPackage().getName(),
+          "/*");
+      timelineRestServer.setAttribute(AGGREGATOR_COLLECTION_ATTR_KEY,
+          TimelineAggregatorsCollection.getInstance());
+      timelineRestServer.start();
+    } catch (Exception e) {
+      String msg = "The per-node aggregator webapp failed to start.";
+      LOG.error(msg, e);
+      throw new YarnRuntimeException(msg, e);
+    }
+  }
+}

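The putIfAbsent() contract mirrors ConcurrentMap: of several racing callers,
exactly one "wins". A small sketch of the idiom follows; getInstance() is
package-private, so this hypothetical helper is assumed to live in the
org.apache.hadoop.yarn.server.timelineservice.aggregator package.

  package org.apache.hadoop.yarn.server.timelineservice.aggregator;

  public class PutIfAbsentSketch {
    static boolean addAggregator(String id) {
      TimelineAggregatorsCollection collection =
          TimelineAggregatorsCollection.getInstance();
      AppLevelTimelineAggregator candidate =
          new AppLevelTimelineAggregator(id);
      // The caller "wins" only if the instance it passed in is the one
      // returned; losers receive the existing, already-started aggregator.
      return collection.putIfAbsent(id, candidate) == candidate;
    }
  }

This is exactly how PerNodeTimelineAggregatorsAuxService#addApplication
above derives its boolean return value.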
http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelAggregatorService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelAggregatorService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelAggregatorService.java
deleted file mode 100644
index c0af8c5..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelAggregatorService.java
+++ /dev/null
@@ -1,23 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-
-public class TestAppLevelAggregatorService {
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelServiceManager.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelServiceManager.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelServiceManager.java
deleted file mode 100644
index 3f981c7..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelServiceManager.java
+++ /dev/null
@@ -1,102 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-import static org.mockito.Mockito.doReturn;
-import static org.mockito.Mockito.spy;
-
-import java.util.ArrayList;
-import java.util.List;
-import java.util.concurrent.Callable;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.Future;
-
-import org.apache.hadoop.conf.Configuration;
-import org.junit.Test;
-
-public class TestAppLevelServiceManager {
-
-  @Test(timeout=60000)
-  public void testMultithreadedAdd() throws Exception {
-    final AppLevelServiceManager serviceManager =
-        spy(new AppLevelServiceManager());
-    doReturn(new Configuration()).when(serviceManager).getConfig();
-
-    final int NUM_APPS = 5;
-    List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
-    for (int i = 0; i < NUM_APPS; i++) {
-      final String appId = String.valueOf(i);
-      Callable<Boolean> task = new Callable<Boolean>() {
-        public Boolean call() {
-          return serviceManager.addService(appId);
-        }
-      };
-      tasks.add(task);
-    }
-    ExecutorService executor = Executors.newFixedThreadPool(NUM_APPS);
-    try {
-      List<Future<Boolean>> futures = executor.invokeAll(tasks);
-      for (Future<Boolean> future: futures) {
-        assertTrue(future.get());
-      }
-    } finally {
-      executor.shutdownNow();
-    }
-    // check the keys
-    for (int i = 0; i < NUM_APPS; i++) {
-      assertTrue(serviceManager.hasService(String.valueOf(i)));
-    }
-  }
-
-  @Test
-  public void testMultithreadedAddAndRemove() throws Exception {
-    final AppLevelServiceManager serviceManager =
-        spy(new AppLevelServiceManager());
-    doReturn(new Configuration()).when(serviceManager).getConfig();
-
-    final int NUM_APPS = 5;
-    List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
-    for (int i = 0; i < NUM_APPS; i++) {
-      final String appId = String.valueOf(i);
-      Callable<Boolean> task = new Callable<Boolean>() {
-        public Boolean call() {
-          return serviceManager.addService(appId) &&
-              serviceManager.removeService(appId);
-        }
-      };
-      tasks.add(task);
-    }
-    ExecutorService executor = Executors.newFixedThreadPool(NUM_APPS);
-    try {
-      List<Future<Boolean>> futures = executor.invokeAll(tasks);
-      for (Future<Boolean> future: futures) {
-        assertTrue(future.get());
-      }
-    } finally {
-      executor.shutdownNow();
-    }
-    // check the keys
-    for (int i = 0; i < NUM_APPS; i++) {
-      assertFalse(serviceManager.hasService(String.valueOf(i)));
-    }
-  }
-}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelTimelineAggregator.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelTimelineAggregator.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelTimelineAggregator.java
new file mode 100644
index 0000000..8f95814
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestAppLevelTimelineAggregator.java
@@ -0,0 +1,23 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.yarn.server.timelineservice.aggregator;
+
+
+public class TestAppLevelTimelineAggregator {
+}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d3ff7f06/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestBaseAggregatorService.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestBaseAggregatorService.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestBaseAggregatorService.java
deleted file mode 100644
index 55953cd..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/test/java/org/apache/hadoop/yarn/server/timelineservice/aggregator/TestBaseAggregatorService.java
+++ /dev/null
@@ -1,23 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.yarn.server.timelineservice.aggregator;
-
-public class TestBaseAggregatorService {
-
-}


[08/43] hadoop git commit: YARN-3168. Convert site documentation from apt to markdown (Gururaj Shetty via aw)

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WritingYarnApplications.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WritingYarnApplications.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WritingYarnApplications.apt.vm
deleted file mode 100644
index 57a47fd..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WritingYarnApplications.apt.vm
+++ /dev/null
@@ -1,757 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Map Reduce Next Generation-${project.version} - Writing YARN
-  Applications
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop MapReduce Next Generation - Writing YARN Applications
-
-%{toc|section=1|fromDepth=0}
-
-* Purpose
-
-  This document describes, at a high level, the way to implement new
-  Applications for YARN.
-
-* Concepts and Flow
-
-  The general concept is that an <application submission client> submits an
-  <application> to the YARN <ResourceManager> (RM). This can be done through
-  setting up a <<<YarnClient>>> object. After <<<YarnClient>>> is started, the
-  client can then set up application context, prepare the very first container of
-  the application that contains the <ApplicationMaster> (AM), and then submit
-  the application. You need to provide information such as the details about the
-  local files/jars that need to be available for your application to run, the
-  actual command that needs to be executed (with the necessary command line
-  arguments), any OS environment settings (optional), etc. Effectively, you
-  need to describe the Unix process(es) that need to be launched for your
-  ApplicationMaster.
-
-  The YARN ResourceManager will then launch the ApplicationMaster (as
-  specified) on an allocated container. The ApplicationMaster communicates with
-  YARN cluster, and handles application execution. It performs operations in an
-  asynchronous fashion. During application launch time, the main tasks of the
-  ApplicationMaster are: a) communicating with the ResourceManager to negotiate
-  and allocate resources for future containers, and b) after container
-  allocation, communicating YARN <NodeManager>s (NMs) to launch application
-  containers on them. Task a) can be performed asynchronously through an
-  <<<AMRMClientAsync>>> object, with event handling methods specified in a
-  <<<AMRMClientAsync.CallbackHandler>>> type of event handler. The event handler
-  needs to be set to the client explicitly. Task b) can be performed by launching
-  a runnable object that then launches containers when there are containers
-  allocated. As part of launching this container, the AM has to
-  specify the <<<ContainerLaunchContext>>> that has the launch information such as
-  command line specification, environment, etc.
-
-  During the execution of an application, the ApplicationMaster communicates
-  with NodeManagers through the <<<NMClientAsync>>> object. All container
-  events are handled by <<<NMClientAsync.CallbackHandler>>>, associated with
-  <<<NMClientAsync>>>. A typical callback handler handles container start,
-  stop, status update and error events. The ApplicationMaster also reports
-  execution progress to the ResourceManager by handling the
-  <<<getProgress()>>> method of <<<AMRMClientAsync.CallbackHandler>>>.
-  
-  Other than asynchronous clients, there are synchronous versions for certain
-  workflows (<<<AMRMClient>>> and <<<NMClient>>>). The asynchronous clients are
-  recommended because of (subjectively) simpler usage, and this article
-  will mainly cover the asynchronous clients. Please refer to <<<AMRMClient>>>
-  and <<<NMClient>>> for more information on synchronous clients.
-
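The RM-side event handler mentioned above, instantiated later in this
document as RMCallbackHandler, has roughly the following shape. This is a
sketch against the Hadoop 2.x async client API; assumed imports are
java.util.List, org.apache.hadoop.yarn.api.records.Container,
ContainerStatus and NodeReport, and
org.apache.hadoop.yarn.client.api.async.AMRMClientAsync.

  // defined as an inner class of the ApplicationMaster
  private class RMCallbackHandler implements AMRMClientAsync.CallbackHandler {
    @Override
    public void onContainersAllocated(List<Container> allocated) {
      // launch work on the newly allocated containers
    }
    @Override
    public void onContainersCompleted(List<ContainerStatus> statuses) {
      // bookkeeping for finished containers; re-request on failure if needed
    }
    @Override
    public void onShutdownRequest() { /* stop the application */ }
    @Override
    public void onNodesUpdated(List<NodeReport> updated) { }
    @Override
    public void onError(Throwable e) { /* abort the AM */ }
    @Override
    public float getProgress() { return 0.0f; /* report real progress */ }
  }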
-* Interfaces
-
-  The interfaces you'd most likely be concerned with are:
-
-  * <<Client>>\<--\><<ResourceManager>>\
-    By using <<<YarnClient>>> objects.
-
-  * <<ApplicationMaster>>\<--\><<ResourceManager>>\
-    By using <<<AMRMClientAsync>>> objects, handling events asynchronously by
-    <<<AMRMClientAsync.CallbackHandler>>>
-
-  * <<ApplicationMaster>>\<--\><<NodeManager>>\
-    Launch containers. Communicate with NodeManagers
-    by using <<<NMClientAsync>>> objects, handling container events by
-    <<<NMClientAsync.CallbackHandler>>>
-
-  []
-
-  <<Note>>
-  
-    * The three main protocols for YARN application (ApplicationClientProtocol,
-      ApplicationMasterProtocol and ContainerManagementProtocol) are still
-      preserved. The 3 clients wrap these 3 protocols to provide a simpler
-      programming model for YARN applications.
-    
-    * Under very rare circumstances, a programmer may want to use the 3
-      protocols directly to implement an application. However, note that <such
-      behaviors are no longer encouraged for general use cases>.
-
-    []
-
-* Writing a Simple YARN Application
-
-** Writing a simple Client
-
-  * The first step that a client needs to do is to initialize and start a
-    YarnClient.
-
-+---+
-  YarnClient yarnClient = YarnClient.createYarnClient();
-  yarnClient.init(conf);
-  yarnClient.start();
-+---+
-
-  * Once a client is set up, the client needs to create an application, and get
-    its application id.
-
-+---+
-  YarnClientApplication app = yarnClient.createApplication();
-  GetNewApplicationResponse appResponse = app.getNewApplicationResponse();
-+---+
-
-  * The response from the <<<YarnClientApplication>>> for a new application also
-    contains information about the cluster such as the minimum/maximum resource
-    capabilities of the cluster. This is required to ensure that you can
-    correctly set the specifications of the container in which the
-    ApplicationMaster would be launched. Please refer to
-    <<<GetNewApplicationResponse>>> for more details.
-
-  * The main crux of a client is to setup the <<<ApplicationSubmissionContext>>>
-    which defines all the information needed by the RM to launch the AM. A client
-    needs to set the following into the context:
-
-    * Application info: id, name
-
-    * Queue, priority info: Queue to which the application will be submitted,
-      the priority to be assigned for the application.
-
-    * User: The user submitting the application
-
-    * <<<ContainerLaunchContext>>>: The information defining the container in
-      which the AM will be launched and run. The <<<ContainerLaunchContext>>>, as
-      mentioned previously, defines all the required information needed to run
-      the application such as the local <<R>>esources (binaries, jars, files
-      etc.), <<E>>nvironment settings (CLASSPATH etc.), the <<C>>ommand to be
-      executed and security <<T>>okens (<RECT>).
-
-    []
-
-+---+
-  // set the application submission context
-  ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
-  ApplicationId appId = appContext.getApplicationId();
-
-  appContext.setKeepContainersAcrossApplicationAttempts(keepContainers);
-  appContext.setApplicationName(appName);
-
-  // set local resources for the application master
-  // local files or archives as needed
-  // In this scenario, the jar file for the application master is part of the local resources
-  Map<String, LocalResource> localResources = new HashMap<String, LocalResource>();
-
-  LOG.info("Copy App Master jar from local filesystem and add to local environment");
-  // Copy the application master jar to the filesystem
-  // Create a local resource to point to the destination jar path
-  FileSystem fs = FileSystem.get(conf);
-  addToLocalResources(fs, appMasterJar, appMasterJarPath, appId.toString(),
-      localResources, null);
-
-  // Set the log4j properties if needed
-  if (!log4jPropFile.isEmpty()) {
-    addToLocalResources(fs, log4jPropFile, log4jPath, appId.toString(),
-        localResources, null);
-  }
-
-  // The shell script has to be made available on the final container(s)
-  // where it will be executed.
-  // To do this, we need to first copy into the filesystem that is visible
-  // to the yarn framework.
-  // We do not need to set this as a local resource for the application
-  // master as the application master does not need it.
-  String hdfsShellScriptLocation = "";
-  long hdfsShellScriptLen = 0;
-  long hdfsShellScriptTimestamp = 0;
-  if (!shellScriptPath.isEmpty()) {
-    Path shellSrc = new Path(shellScriptPath);
-    String shellPathSuffix =
-        appName + "/" + appId.toString() + "/" + SCRIPT_PATH;
-    Path shellDst =
-        new Path(fs.getHomeDirectory(), shellPathSuffix);
-    fs.copyFromLocalFile(false, true, shellSrc, shellDst);
-    hdfsShellScriptLocation = shellDst.toUri().toString();
-    FileStatus shellFileStatus = fs.getFileStatus(shellDst);
-    hdfsShellScriptLen = shellFileStatus.getLen();
-    hdfsShellScriptTimestamp = shellFileStatus.getModificationTime();
-  }
-
-  if (!shellCommand.isEmpty()) {
-    addToLocalResources(fs, null, shellCommandPath, appId.toString(),
-        localResources, shellCommand);
-  }
-
-  if (shellArgs.length > 0) {
-    addToLocalResources(fs, null, shellArgsPath, appId.toString(),
-        localResources, StringUtils.join(shellArgs, " "));
-  }
-
-  // Set the env variables to be setup in the env where the application master will be run
-  LOG.info("Set the environment for the application master");
-  Map<String, String> env = new HashMap<String, String>();
-
-  // put location of shell script into env
-  // using the env info, the application master will create the correct local resource for the
-  // eventual containers that will be launched to execute the shell scripts
-  env.put(DSConstants.DISTRIBUTEDSHELLSCRIPTLOCATION, hdfsShellScriptLocation);
-  env.put(DSConstants.DISTRIBUTEDSHELLSCRIPTTIMESTAMP, Long.toString(hdfsShellScriptTimestamp));
-  env.put(DSConstants.DISTRIBUTEDSHELLSCRIPTLEN, Long.toString(hdfsShellScriptLen));
-
-  // Add AppMaster.jar location to classpath
-  // At some point we should not be required to add
-  // the hadoop specific classpaths to the env.
-  // It should be provided out of the box.
-  // For now setting all required classpaths including
-  // the classpath to "." for the application jar
-  StringBuilder classPathEnv = new StringBuilder(Environment.CLASSPATH.$$())
-    .append(ApplicationConstants.CLASS_PATH_SEPARATOR).append("./*");
-  for (String c : conf.getStrings(
-      YarnConfiguration.YARN_APPLICATION_CLASSPATH,
-      YarnConfiguration.DEFAULT_YARN_CROSS_PLATFORM_APPLICATION_CLASSPATH)) {
-    classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR);
-    classPathEnv.append(c.trim());
-  }
-  classPathEnv.append(ApplicationConstants.CLASS_PATH_SEPARATOR).append(
-    "./log4j.properties");
-
-  // Set the necessary command to execute the application master
-  Vector<CharSequence> vargs = new Vector<CharSequence>(30);
-
-  // Set java executable command
-  LOG.info("Setting up app master command");
-  vargs.add(Environment.JAVA_HOME.$$() + "/bin/java");
-  // Set Xmx based on am memory size
-  vargs.add("-Xmx" + amMemory + "m");
-  // Set class name
-  vargs.add(appMasterMainClass);
-  // Set params for Application Master
-  vargs.add("--container_memory " + String.valueOf(containerMemory));
-  vargs.add("--container_vcores " + String.valueOf(containerVirtualCores));
-  vargs.add("--num_containers " + String.valueOf(numContainers));
-  vargs.add("--priority " + String.valueOf(shellCmdPriority));
-
-  for (Map.Entry<String, String> entry : shellEnv.entrySet()) {
-    vargs.add("--shell_env " + entry.getKey() + "=" + entry.getValue());
-  }
-  if (debugFlag) {
-    vargs.add("--debug");
-  }
-
-  vargs.add("1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/AppMaster.stdout");
-  vargs.add("2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/AppMaster.stderr");
-
-  // Get final command
-  StringBuilder command = new StringBuilder();
-  for (CharSequence str : vargs) {
-    command.append(str).append(" ");
-  }
-
-  LOG.info("Completed setting up app master command " + command.toString());
-  List<String> commands = new ArrayList<String>();
-  commands.add(command.toString());
-
-  // Set up the container launch context for the application master
-  ContainerLaunchContext amContainer = ContainerLaunchContext.newInstance(
-    localResources, env, commands, null, null, null);
-
-  // Set up resource type requirements
-  // For now, both memory and vcores are supported, so we set memory and
-  // vcores requirements
-  Resource capability = Resource.newInstance(amMemory, amVCores);
-  appContext.setResource(capability);
-
-  // Service data is a binary blob that can be passed to the application
-  // Not needed in this scenario
-  // amContainer.setServiceData(serviceData);
-
-  // Setup security tokens
-  if (UserGroupInformation.isSecurityEnabled()) {
-    // Note: Credentials class is marked as LimitedPrivate for HDFS and MapReduce
-    Credentials credentials = new Credentials();
-    String tokenRenewer = conf.get(YarnConfiguration.RM_PRINCIPAL);
-    if (tokenRenewer == null || tokenRenewer.length() == 0) {
-      throw new IOException(
-        "Can't get Master Kerberos principal for the RM to use as renewer");
-    }
-
-    // For now, only getting tokens for the default file-system.
-    final Token<?> tokens[] =
-        fs.addDelegationTokens(tokenRenewer, credentials);
-    if (tokens != null) {
-      for (Token<?> token : tokens) {
-        LOG.info("Got dt for " + fs.getUri() + "; " + token);
-      }
-    }
-    DataOutputBuffer dob = new DataOutputBuffer();
-    credentials.writeTokenStorageToStream(dob);
-    ByteBuffer fsTokens = ByteBuffer.wrap(dob.getData(), 0, dob.getLength());
-    amContainer.setTokens(fsTokens);
-  }
-
-  appContext.setAMContainerSpec(amContainer);
-+---+
-
-  * After the setup process is complete, the client is ready to submit
-    the application with specified priority and queue.
-
-+---+
-  // Set the priority for the application master
-  Priority pri = Priority.newInstance(amPriority);
-  appContext.setPriority(pri);
-
-  // Set the queue to which this application is to be submitted in the RM
-  appContext.setQueue(amQueue);
-
-  // Submit the application to the applications manager
-  // SubmitApplicationResponse submitResp = applicationsManager.submitApplication(appRequest);
-
-  yarnClient.submitApplication(appContext);
-+---+
-
-  * At this point, the RM will have accepted the application and in the
-    background, will go through the process of allocating a container with the
-    required specifications and then eventually setting up and launching the AM
-    on the allocated container.
-
-  * There are multiple ways a client can track progress of the actual task.
-
-    * It can communicate with the RM and request for a report of the application
-      via the <<<getApplicationReport()>>> method of <<<YarnClient>>>.
-
-+-----+
-  // Get application report for the appId we are interested in
-  ApplicationReport report = yarnClient.getApplicationReport(appId);
-+-----+
-
-      The <<<ApplicationReport>>> received from the RM consists of the following:
-
-        * General application information: Application id, queue to which the
-          application was submitted, user who submitted the application and the
-          start time for the application.
-
-        * ApplicationMaster details: the host on which the AM is running, the
-          rpc port (if any) on which it is listening for requests from clients
-          and a token that the client needs to communicate with the AM.
-
-        * Application tracking information: If the application supports some form
-          of progress tracking, it can set a tracking url which is available via
-          <<<ApplicationReport>>>'s <<<getTrackingUrl()>>> method that a client
-          can look at to monitor progress.
-
-        * Application status: The state of the application as seen by the
-          ResourceManager is available via
-          <<<ApplicationReport#getYarnApplicationState>>>. If the
-          <<<YarnApplicationState>>> is set to <<<FINISHED>>>, the client should
-          refer to <<<ApplicationReport#getFinalApplicationStatus>>> to check for
-          the actual success/failure of the application task itself. In case of
-          failures, <<<ApplicationReport#getDiagnostics>>> may be useful to shed
-          some more light on the failure.
-
-    * If the ApplicationMaster supports it, a client can directly query the AM
-      itself for progress updates via the host:rpcport information obtained from
-      the application report. It can also use the tracking url obtained from the
-      report if available.
-
-  * In certain situations, such as when the application is taking too long or
-    for other reasons, the client may wish to kill the application.
-    <<<YarnClient>>> supports the <<<killApplication>>> call that allows a
-    client to send a kill signal to the AM via the ResourceManager. An
-    ApplicationMaster, if so designed, may also support an abort call via its
-    rpc layer that a client may be able to leverage.
-
-+---+
-  yarnClient.killApplication(appId);
-+---+
-
-** Writing an ApplicationMaster (AM)
-
-  * The AM is the actual owner of the job. It will be launched
-    by the RM and via the client will be provided all the
-    necessary information and resources about the job that it has been tasked
-    with to oversee and complete.
-
-  * As the AM is launched within a container that may (likely
-    will) be sharing a physical host with other containers, given the
-    multi-tenancy nature, amongst other issues, it cannot make any assumptions
-    of things like pre-configured ports that it can listen on.
-
-  * When the AM starts up, several parameters are made available
-    to it via the environment. These include the <<<ContainerId>>> for the
-    AM container, the application submission time and details
-    about the NM (NodeManager) host running the ApplicationMaster.
-    Ref <<<ApplicationConstants>>> for parameter names.
-
-  * All interactions with the RM require an <<<ApplicationAttemptId>>> (there can
-    be multiple attempts per application in case of failures). The
-    <<<ApplicationAttemptId>>> can be obtained from the AM's container id. There
-    are helper APIs to convert the value obtained from the environment into
-    objects.
-
-+---+
-  Map<String, String> envs = System.getenv();
-  String containerIdString =
-      envs.get(ApplicationConstants.AM_CONTAINER_ID_ENV);
-  if (containerIdString == null) {
-    // container id should always be set in the env by the framework
-    throw new IllegalArgumentException(
-        "ContainerId not set in the environment");
-  }
-  ContainerId containerId = ConverterUtils.toContainerId(containerIdString);
-  ApplicationAttemptId appAttemptID = containerId.getApplicationAttemptId();
-+---+
-
-  * After an AM has initialized itself completely, we can start the two clients:
-    one to ResourceManager, and one to NodeManagers. We set them up with our
-    customized event handler, and we will talk about those event handlers in
-    detail later in this article.
-
-+---+
-  AMRMClientAsync.CallbackHandler allocListener = new RMCallbackHandler();
-  amRMClient = AMRMClientAsync.createAMRMClientAsync(1000, allocListener);
-  amRMClient.init(conf);
-  amRMClient.start();
-
-  containerListener = createNMCallbackHandler();
-  nmClientAsync = new NMClientAsyncImpl(containerListener);
-  nmClientAsync.init(conf);
-  nmClientAsync.start();
-+---+
-
-  * The AM has to emit heartbeats to the RM to keep it informed that the AM is
-    alive and still running. The timeout expiry interval at the RM is defined by
-    a config setting accessible via
-    <<<YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS>>> with the default being
-    defined by <<<YarnConfiguration.DEFAULT_RM_AM_EXPIRY_INTERVAL_MS>>>. The
-    ApplicationMaster needs to register itself with the ResourceManager to
-    start heartbeating.
-
-+---+
-  // Register self with ResourceManager
-  // This will start heartbeating to the RM
-  appMasterHostname = NetUtils.getHostname();
-  RegisterApplicationMasterResponse response = amRMClient
-      .registerApplicationMaster(appMasterHostname, appMasterRpcPort,
-          appMasterTrackingUrl);
-+---+
-
-  * The response of the registration includes the maximum resource capability
-    of the cluster. You may want to use this to validate the application's
-    resource requests.
-
-+---+
-  // Dump out information about cluster capability as seen by the
-  // resource manager
-  int maxMem = response.getMaximumResourceCapability().getMemory();
-  LOG.info("Max mem capability of resources in this cluster " + maxMem);
-
-  int maxVCores = response.getMaximumResourceCapability().getVirtualCores();
-  LOG.info("Max vcores capability of resources in this cluster " + maxVCores);
-
-  // A resource ask cannot exceed the max.
-  if (containerMemory > maxMem) {
-    LOG.info("Container memory specified above max threshold of cluster."
-        + " Using max value." + ", specified=" + containerMemory + ", max="
-        + maxMem);
-    containerMemory = maxMem;
-  }
-
-  if (containerVirtualCores > maxVCores) {
-    LOG.info("Container virtual cores specified above max threshold of  cluster."
-      + " Using max value." + ", specified=" + containerVirtualCores + ", max="
-      + maxVCores);
-    containerVirtualCores = maxVCores;
-  }
-  List<Container> previousAMRunningContainers =
-      response.getContainersFromPreviousAttempts();
-  LOG.info("Received " + previousAMRunningContainers.size()
-          + " previous AM's running containers on AM registration.");
-+---+
-
-  * Based on the task requirements, the AM can ask for a set of containers to run
-    its tasks on. We can now calculate how many containers we need, and request
-    that many containers.
-
-+---+
-  List<Container> previousAMRunningContainers =
-      response.getContainersFromPreviousAttempts();
-  LOG.info("Received " + previousAMRunningContainers.size()
-      + " previous AM's running containers on AM registration.");
-
-  int numTotalContainersToRequest =
-      numTotalContainers - previousAMRunningContainers.size();
-  // Setup ask for containers from RM
-  // Send request for containers to RM
-  // Until we get our fully allocated quota, we keep on polling RM for
-  // containers
-  // Keep looping until all the containers are launched and shell script
-  // executed on them ( regardless of success/failure).
-  for (int i = 0; i < numTotalContainersToRequest; ++i) {
-    ContainerRequest containerAsk = setupContainerAskForRM();
-    amRMClient.addContainerRequest(containerAsk);
-  }
-+---+
-
-  * In <<<setupContainerAskForRM()>>>, the following two things need to be set up:
-
-    * Resource capability: Currently, YARN supports memory based resource
-      requirements so the request should define how much memory is needed. The
-      value is defined in MB and has to be less than the max capability of the
-      cluster and an exact multiple of the min capability. Memory resources
-      correspond to physical memory limits imposed on the task containers. YARN
-      also supports computation based resources (vCores), as shown in the code.
-
-    * Priority: When asking for sets of containers, an AM may define different
-      priorities to each set. For example, the Map-Reduce AM may assign a higher
-      priority to containers needed for the Map tasks and a lower priority for
-      the Reduce tasks' containers.
-
-    []
-
-+---+
-  private ContainerRequest setupContainerAskForRM() {
-    // setup requirements for hosts
-    // using * as any host will do for the distributed shell app
-    // set the priority for the request
-    Priority pri = Priority.newInstance(requestPriority);
-
-    // Set up resource type requirements
-    // For now, memory and CPU are supported so we set memory and cpu requirements
-    Resource capability = Resource.newInstance(containerMemory,
-      containerVirtualCores);
-
-    ContainerRequest request = new ContainerRequest(capability, null, null,
-        pri);
-    LOG.info("Requested container ask: " + request.toString());
-    return request;
-  }
-+---+
-
-  * After container allocation requests have been sent by the
-    ApplicationMaster, containers will be launched asynchronously by the event
-    handler of the <<<AMRMClientAsync>>> client. The handler should implement
-    the <<<AMRMClientAsync.CallbackHandler>>> interface.
-
-    * When there are containers allocated, the handler sets up a thread that runs
-      the code to launch containers. Here we use the name
-      <<<LaunchContainerRunnable>>> to demonstrate. We will talk about the
-      <<<LaunchContainerRunnable>>> class in the following part of this article.
-
-+---+
-  @Override
-  public void onContainersAllocated(List<Container> allocatedContainers) {
-    LOG.info("Got response from RM for container ask, allocatedCnt="
-        + allocatedContainers.size());
-    numAllocatedContainers.addAndGet(allocatedContainers.size());
-    for (Container allocatedContainer : allocatedContainers) {
-      LaunchContainerRunnable runnableLaunchContainer =
-          new LaunchContainerRunnable(allocatedContainer, containerListener);
-      Thread launchThread = new Thread(runnableLaunchContainer);
-
-      // launch and start the container on a separate thread to keep
-      // the main thread unblocked
-      // as all containers may not be allocated at one go.
-      launchThreads.add(launchThread);
-      launchThread.start();
-    }
-  }
-+---+
-
-    * On each heartbeat, the event handler reports the progress of the application.
-
-+---+
-  @Override
-  public float getProgress() {
-    // set progress to deliver to RM on next heartbeat
-    float progress = (float) numCompletedContainers.get()
-        / numTotalContainers;
-    return progress;
-  }
-+---+
-
-    []
-
-  * The container launch thread actually launches the containers on NMs. After a
-    container has been allocated to the AM, it needs to follow a process
-    similar to the one the client followed in setting up the
-    <<<ContainerLaunchContext>>> for
-    the eventual task that is going to be running on the allocated Container.
-    Once the <<<ContainerLaunchContext>>> is defined, the AM can start it through
-    the <<<NMClientAsync>>>.
-
-+---+
-  // Set the necessary command to execute on the allocated container
-  Vector<CharSequence> vargs = new Vector<CharSequence>(5);
-
-  // Set executable command
-  vargs.add(shellCommand);
-  // Set shell script path
-  if (!scriptPath.isEmpty()) {
-    vargs.add(Shell.WINDOWS ? ExecBatScripStringtPath
-      : ExecShellStringPath);
-  }
-
-  // Set args for the shell command if any
-  vargs.add(shellArgs);
-  // Add log redirect params
-  vargs.add("1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout");
-  vargs.add("2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr");
-
-  // Get final command
-  StringBuilder command = new StringBuilder();
-  for (CharSequence str : vargs) {
-    command.append(str).append(" ");
-  }
-
-  List<String> commands = new ArrayList<String>();
-  commands.add(command.toString());
-
-  // Set up ContainerLaunchContext, setting local resource, environment,
-  // command and token for constructor.
-
-  // Note for tokens: Set up tokens for the container too. Today, for normal
-  // shell commands, the container in distribute-shell doesn't need any
-  // tokens. We are populating them mainly for NodeManagers to be able to
-  // download any files in the distributed file-system. The tokens are
-  // otherwise also useful in cases, e.g., when one is running a
-  // "hadoop dfs" command inside the distributed shell.
-  ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
-    localResources, shellEnv, commands, null, allTokens.duplicate(), null);
-  containerListener.addContainer(container.getId(), container);
-  nmClientAsync.startContainerAsync(container, ctx);
-+---+
-
-  * The <<<NMClientAsync>>> object, together with its event handler, handles
-    container events, including container start, stop, status updates, and
-    errors.
-  
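-    A minimal sketch of such a handler is shown below, matching the
-    <<<createNMCallbackHandler()>>> and <<<containerListener.addContainer()>>>
-    calls used elsewhere in this article; the <<<LOG>>> field and the exact
-    bookkeeping are illustrative assumptions, not the only possible design.
-
-+---+
-  private class NMCallbackHandler implements NMClientAsync.CallbackHandler {
-
-    // Containers the AM has asked the NMs to start, keyed by container id
-    private final ConcurrentMap<ContainerId, Container> containers =
-        new ConcurrentHashMap<ContainerId, Container>();
-
-    public void addContainer(ContainerId containerId, Container container) {
-      containers.putIfAbsent(containerId, container);
-    }
-
-    @Override
-    public void onContainerStarted(ContainerId containerId,
-        Map<String, ByteBuffer> allServiceResponse) {
-      LOG.info("Container " + containerId + " started");
-    }
-
-    @Override
-    public void onContainerStatusReceived(ContainerId containerId,
-        ContainerStatus containerStatus) {
-      LOG.info("Status of container " + containerId + ": " + containerStatus);
-    }
-
-    @Override
-    public void onContainerStopped(ContainerId containerId) {
-      containers.remove(containerId);
-    }
-
-    @Override
-    public void onStartContainerError(ContainerId containerId, Throwable t) {
-      containers.remove(containerId);
-      LOG.error("Failed to start container " + containerId, t);
-    }
-
-    @Override
-    public void onGetContainerStatusError(ContainerId containerId,
-        Throwable t) {
-      LOG.error("Failed to query container " + containerId, t);
-    }
-
-    @Override
-    public void onStopContainerError(ContainerId containerId, Throwable t) {
-      containers.remove(containerId);
-      LOG.error("Failed to stop container " + containerId, t);
-    }
-  }
-+---+
-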
-  * After the ApplicationMaster determines that the work is done, it needs to unregister itself through the AM-RM client, and then stop the client.
-
-+---+
-  try {
-    amRMClient.unregisterApplicationMaster(appStatus, appMessage, null);
-  } catch (YarnException ex) {
-    LOG.error("Failed to unregister application", ex);
-  } catch (IOException e) {
-    LOG.error("Failed to unregister application", e);
-  }
-  
-  amRMClient.stop();
-+---+
-
-~~** Defining the context in which your code runs
-
-~~*** Container Resource Requests
-
-~~*** Local Resources
-
-~~*** Environment
-
-~~**** Managing the CLASSPATH
-
-~~** Security
-
-* FAQ
-
-** How can I distribute my application's jars to all of the nodes in the YARN
-   cluster that need it?
-
-  * You can use the LocalResource to add resources to your application request.
-    This will cause YARN to distribute the resource to the ApplicationMaster
-    node. If the resource is a tgz, zip, or jar - you can have YARN unzip it.
-    Then, all you need to do is add the unzipped folder to your classpath. For
-    example, when creating your application request:
-
-+---+
-  File packageFile = new File(packagePath);
-  URL packageUrl = ConverterUtils.getYarnUrlFromPath(
-      FileContext.getFileContext().makeQualified(new Path(packagePath)));
-
-  packageResource.setResource(packageUrl);
-  packageResource.setSize(packageFile.length());
-  packageResource.setTimestamp(packageFile.lastModified());
-  packageResource.setType(LocalResourceType.ARCHIVE);
-  packageResource.setVisibility(LocalResourceVisibility.APPLICATION);
-
-  resource.setMemory(memory);
-  containerCtx.setResource(resource);
-  containerCtx.setCommands(ImmutableList.of(
-      "java -cp './package/*' some.class.to.Run "
-      + "1>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stdout "
-      + "2>" + ApplicationConstants.LOG_DIR_EXPANSION_VAR + "/stderr"));
-  containerCtx.setLocalResources(
-      Collections.singletonMap("package", packageResource));
-  appCtx.setApplicationId(appId);
-  appCtx.setUser(user.getShortUserName());
-  appCtx.setAMContainerSpec(containerCtx);
-  yarnClient.submitApplication(appCtx);
-+---+
-
-  As you can see, the <<<setLocalResources>>> command takes a map of names to
-  resources. The name becomes a symlink in your application's cwd, so you can
-  just refer to the artifacts inside by using ./package/*.
-
-  Note: Java's classpath (cp) argument is VERY sensitive.
-  Make sure you get the syntax EXACTLY correct.
-
-  Once your package is distributed to your AM, you'll need to follow the same
-  process whenever your AM starts a new container (assuming you want the
-  resources to be sent to your container). The code for this is the same. You
-  just need to make sure that you give your AM the package path (either HDFS, or
-  local), so that it can send the resource URL along with the container ctx.
-
-** How do I get the ApplicationMaster's <<<ApplicationAttemptId>>>?
-
-  * The <<<ApplicationAttemptId>>> will be passed to the AM via the environment
-    and the value from the environment can be converted into an
-    <<<ApplicationAttemptId>>> object via the ConverterUtils helper function.
-
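-  For example, reusing the container-id route shown earlier in this article
-  (the constant and helper are the same ones used in the AM startup snippet
-  above):
-
-+---+
-  ContainerId containerId = ConverterUtils.toContainerId(
-      System.getenv(ApplicationConstants.AM_CONTAINER_ID_ENV));
-  ApplicationAttemptId appAttemptId =
-      containerId.getApplicationAttemptId();
-+---+
-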
-** Why is my container killed by the NodeManager?
-
-  * This is likely due to high memory usage exceeding your requested container
-    memory size. There are a number of reasons that can cause this. First, look
-    at the process tree that the NodeManager dumps when it kills your container.
-    The two things you're interested in are physical memory and virtual memory.
-    If you have exceeded physical memory limits, your app is using too much
-    physical memory. If you're running a Java app, you can use hprof
-    (e.g. <<<-agentlib:hprof>>>) to look at what is taking up space in the
-    heap. If you have exceeded virtual memory, you may need to increase the
-    value of the cluster-wide configuration variable
-    <<<yarn.nodemanager.vmem-pmem-ratio>>>.
-
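-  For reference, a sketch of the relevant setting in <<<yarn-site.xml>>>; the
-  value shown is YARN's default ratio of 2.1, so any increase should be chosen
-  based on your application's behavior:
-
-+---+
-<property>
-  <name>yarn.nodemanager.vmem-pmem-ratio</name>
-  <value>2.1</value>
-</property>
-+---+
-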
-** How do I include native libraries?
-
-  * Setting <<<-Djava.library.path>>> on the command line while launching a
-    container can cause native libraries used by Hadoop to not be loaded
-    correctly and can result in errors. It is cleaner to use
-    <<<LD_LIBRARY_PATH>>> instead.
-
-* Useful Links
-
-  * {{{http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html}YARN Architecture}}
-
-  * {{{http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html}YARN Capacity Scheduler}}
-
-  * {{{http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html}YARN Fair Scheduler}}
-
-* Sample code
-
-  * Yarn distributed shell: in <<<hadoop-yarn-applications-distributedshell>>>
-    project after you set up your development environment.
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YARN.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YARN.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YARN.apt.vm
deleted file mode 100644
index 465c5d1..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YARN.apt.vm
+++ /dev/null
@@ -1,77 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  YARN
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Apache Hadoop NextGen MapReduce (YARN)
-
-  MapReduce has undergone a complete overhaul in hadoop-0.23 and we now have, 
-  what we call, MapReduce 2.0 (MRv2) or YARN.
-
-  The fundamental idea of MRv2 is to split up the two major functionalities of 
-  the JobTracker, resource management and job scheduling/monitoring, into 
-  separate daemons. The idea is to have a global ResourceManager (<RM>) and 
-  per-application ApplicationMaster (<AM>).  An application is either a single 
-  job in the classical sense of Map-Reduce jobs or a DAG of jobs. 
-
-  The ResourceManager and per-node slave, the NodeManager (<NM>), form the 
-  data-computation framework. The ResourceManager is the ultimate authority that 
-  arbitrates resources among all the applications in the system. 
-
-  The per-application ApplicationMaster is, in effect, a framework specific 
-  library and is tasked with negotiating resources from the ResourceManager and 
-  working with the NodeManager(s) to execute and monitor the tasks.
-
-[./yarn_architecture.gif] MapReduce NextGen Architecture
-
-  The ResourceManager has two main components: Scheduler and 
-  ApplicationsManager.
-
-  The Scheduler is responsible for allocating resources to the various running 
-  applications subject to familiar constraints of capacities, queues etc. The 
-  Scheduler is a pure scheduler in the sense that it performs no monitoring or
-  tracking of status for the application. Also, it offers no guarantees about
-  restarting failed tasks either due to application failure or hardware
-  failures. The Scheduler performs its scheduling function based on the resource
-  requirements of the applications; it does so based on the abstract notion of 
-  a resource <Container> which incorporates elements such as memory, cpu, disk, 
-  network etc. In the first version, only <<<memory>>> is supported. 
-
-  The Scheduler has a pluggable policy plug-in, which is responsible for 
-  partitioning the cluster resources among the various queues, applications etc. 
-  The current Map-Reduce schedulers such as the CapacityScheduler and the 
-  FairScheduler would be some examples of the plug-in. 
-
-  The CapacityScheduler supports <<<hierarchical queues>>> to allow for more 
-  predictable sharing of cluster resources.
-
-  The ApplicationsManager is responsible for accepting job-submissions, 
-  negotiating the first container for executing the application specific 
-  ApplicationMaster and providing the service for restarting the
-  ApplicationMaster container on failure.
-
-  The NodeManager is the per-machine framework agent that is responsible for
-  containers, monitoring their resource usage (cpu, memory, disk, network) and 
-  reporting the same to the ResourceManager/Scheduler.
-
-  The per-application ApplicationMaster has the responsibility of negotiating 
-  appropriate resource containers from the Scheduler, tracking their status and 
-  monitoring for progress.
-
-  MRv2 maintains <<API compatibility>> with the previous stable release
-  (hadoop-1.x).  This means that all Map-Reduce jobs should still run
-  unchanged on top of MRv2 with just a recompile.
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YarnCommands.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YarnCommands.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YarnCommands.apt.vm
deleted file mode 100644
index 67f8a58..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/YarnCommands.apt.vm
+++ /dev/null
@@ -1,369 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  YARN Commands
-  ---
-  ---
-  ${maven.build.timestamp}
-
-YARN Commands
-
-%{toc|section=1|fromDepth=0}
-
-* Overview
-
-  YARN commands are invoked by the bin/yarn script. Running the yarn script
-  without any arguments prints the description for all commands.
-
- Usage: <<<yarn [SHELL_OPTIONS] COMMAND [GENERIC_OPTIONS] [COMMAND_OPTIONS]>>>
-
-  YARN has an option parsing framework that parses generic options as well as
-  running classes.
-
-*-------------------------+--------------+
-|| COMMAND_OPTIONS        || Description |
-*-------------------------+--------------+
-| SHELL_OPTIONS | The common set of shell options. These are documented on the {{{../../hadoop-project-dist/hadoop-common/CommandsManual.html#Shell Options}Commands Manual}} page.
-*-------------------------+--------------+
-| GENERIC_OPTIONS | The common set of options supported by multiple commands. See the Hadoop {{{../../hadoop-project-dist/hadoop-common/CommandsManual.html#Generic Options}Commands Manual}} for more information.
-*-------------------------+--------------+
-| COMMAND COMMAND_OPTIONS | Various commands with their options are described
-|                         | in the following sections. The commands have been
-|                         | grouped into {{User Commands}} and
-|                         | {{Administration Commands}}.
-*-------------------------+--------------+
-
-* {User Commands}
-
-  Commands useful for users of a Hadoop cluster.
-
-** <<<application>>>
-
-  Usage: <<<yarn application [options] >>>
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -appStates States | Works with -list to filter applications based on input
-|                   | comma-separated list of application states. The valid
-|                   | application state can be one of the following: \
-|                   | ALL, NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING,
-|                   | FINISHED, FAILED, KILLED
-*---------------+--------------+
-| -appTypes Types | Works with -list to filter applications based on input
-|                 | comma-separated list of application types.
-*---------------+--------------+
-| -list | Lists applications from the RM. Supports optional use of -appTypes
-|       | to filter applications based on application type, and -appStates to
-|       | filter applications based on application state.
-*---------------+--------------+
-| -kill ApplicationId | Kills the application.
-*---------------+--------------+
-| -status  ApplicationId | Prints the status of the application.
-*---------------+--------------+
-
-  Prints application report(s), or kills an application.
-
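-  For illustration, a typical invocation (the state names are from the table
-  above):
-
------
-  Example: $ bin/yarn application -list -appStates RUNNING,ACCEPTED
------
-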
-** <<<applicationattempt>>>
-
-  Usage: <<<yarn applicationattempt [options] >>>
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -help | Help
-*---------------+--------------+
-| -list ApplicationId | Lists application attempts from the RM
-*---------------+--------------+
-| -status  Application Attempt Id | Prints the status of the application attempt.
-*---------------+--------------+
-
-  Prints application attempt report(s)
-
-** <<<classpath>>>
-
-  Usage: <<<yarn classpath>>>
-
-  Prints the class path needed to get the Hadoop jar and the required libraries
-
-
-** <<<container>>>
-
-  Usage: <<<yarn container [options] >>>
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -help | Help
-*---------------+--------------+
-| -list ApplicationId | Lists containers for the application attempt.
-*---------------+--------------+
-| -status  ContainerId | Prints the status of the container.
-*---------------+--------------+
-
-  Prints container report(s)
-
-** <<<jar>>>
-
-  Usage: <<<yarn jar <jar> [mainClass] args... >>>
-
-  Runs a jar file. Users can bundle their YARN code in a jar file and execute
-  it using this command.
-
-** <<<logs>>>
-
-  Usage: <<<yarn logs -applicationId <application ID> [options] >>>
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -applicationId \<application ID\> | Specifies an application id |
-*---------------+--------------+
-| -appOwner AppOwner | AppOwner (assumed to be current user if not
-|                    | specified)
-*---------------+--------------+
-| -containerId ContainerId | ContainerId (must be specified if node address is
-|                          | specified)
-*---------------+--------------+
-| -help | Help
-*---------------+--------------+
-| -nodeAddress NodeAddress | NodeAddress in the format nodename:port (must be
-|                          | specified if container id is specified)
-*---------------+--------------+
-
-  Dumps the container logs
-
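-  For illustration, a typical invocation (the application id is hypothetical):
-
------
-  Example: $ bin/yarn logs -applicationId application_1424470028000_0001
------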
-
-** <<<node>>>
-
-  Usage: <<<yarn node [options] >>>
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -all | Works with -list to list all nodes.
-*---------------+--------------+
-| -list | Lists all running nodes. Supports optional use of -states to filter
-|       | nodes based on node state, and -all to list all nodes.
-*---------------+--------------+
-| -states States | Works with -list to filter nodes based on input
-|                | comma-separated list of node states.
-*---------------+--------------+
-| -status NodeId | Prints the status report of the node.
-*---------------+--------------+
-
-  Prints node report(s)
-
-** <<<queue>>>
-
-  Usage: <<<yarn queue [options] >>>
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -help | Help
-*---------------+--------------+
-| -status  QueueName | Prints the status of the queue.
-*---------------+--------------+
-
-  Prints queue information
-
-
-** <<<version>>>
-
-  Usage: <<<yarn version>>>
-
-  Prints the Hadoop version.
-
-* {Administration Commands}
-
-  Commands useful for administrators of a Hadoop cluster.
-
-** <<<daemonlog>>>
-
-  Usage:
-
----------------------------------
-   yarn daemonlog -getlevel <host:httpport> <classname>
-   yarn daemonlog -setlevel <host:httpport> <classname> <level>
----------------------------------
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -getlevel \<host:httpport\> \<classname\> | Prints the log level of the log identified  
-| | by a qualified \<classname\>, in the daemon running at \<host:httpport\>. This 
-| | command internally connects to http://\<host:httpport\>/logLevel?log=\<classname\>
-*---------------+--------------+
-| -setlevel \<host:httpport\> \<classname\> \<level\> | Sets the log level of the log 
-| | identified by a qualified \<classname\> in the daemon running at \<host:httpport\>. 
-| | This command internally connects to http://\<host:httpport\>/logLevel?log=\<classname\>&level=\<level\>
-*---------------+--------------+
-
-  Get/Set the log level for a Log identified by a qualified class name in the daemon.
-
-----
-  Example: $ bin/yarn daemonlog -setlevel 127.0.0.1:8088 org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl DEBUG
-----
-
-** <<<nodemanager>>>
-
-  Usage: <<<yarn nodemanager>>>
-
-  Start the NodeManager
-
-** <<<proxyserver>>>
-
-  Usage: <<<yarn proxyserver>>>
-
-  Start the web proxy server
-
-** <<<resourcemanager>>>
-
-  Usage: <<<yarn resourcemanager [-format-state-store]>>>
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -format-state-store | Formats the RMStateStore. This will clear the
-|                     | RMStateStore and is useful if past applications are no
-|                     | longer needed. This should be run only when the
-|                     | ResourceManager is not running.
-*---------------+--------------+
-
-  Start the ResourceManager
-
-
-** <<<rmadmin>>>
-
-  Usage:
-
-----
-  yarn rmadmin [-refreshQueues]
-               [-refreshNodes]
-               [-refreshUserToGroupsMappings]
-               [-refreshSuperUserGroupsConfiguration]
-               [-refreshAdminAcls] 
-               [-refreshServiceAcl]
-               [-getGroups [username]]
-               [-transitionToActive [--forceactive] [--forcemanual] <serviceId>]
-               [-transitionToStandby [--forcemanual] <serviceId>]
-               [-failover [--forcefence] [--forceactive] <serviceId1> <serviceId2>]
-               [-getServiceState <serviceId>]
-               [-checkHealth <serviceId>]
-               [-help [cmd]]
-----
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -refreshQueues | Reload the queues' acls, states and scheduler specific
-|                | properties. ResourceManager will reload the mapred-queues
-|                | configuration file.
-*---------------+--------------+
-| -refreshNodes | Refresh the hosts information at the ResourceManager. |
-*---------------+--------------+
-| -refreshUserToGroupsMappings| Refresh user-to-groups mappings. |
-*---------------+--------------+
-| -refreshSuperUserGroupsConfiguration | Refresh superuser proxy groups
-|                                      | mappings.
-*---------------+--------------+
-| -refreshAdminAcls | Refresh acls for administration of ResourceManager |
-*---------------+--------------+
-| -refreshServiceAcl | Reload the service-level authorization policy file.
-|                    | ResourceManager will reload the authorization policy
-|                    | file.
-*---------------+--------------+
-| -getGroups [username] | Get groups the specified user belongs to.
-*---------------+--------------+
-| -transitionToActive [--forceactive] [--forcemanual] \<serviceId\> |
-|               | Transitions the service into Active state.
-|               | Try to make the target active
-|               | without checking that there is no active node
-|               | if the --forceactive option is used.
-|               | This command can not be used if automatic failover is enabled.
-|               | Though you can override this by --forcemanual option,
-|               | you need caution.
-*---------------+--------------+
-| -transitionToStandby [--forcemanual] \<serviceId\> |
-|               | Transitions the service into Standby state.
-|               | This command can not be used if automatic failover is enabled.
-|               | Though you can override this by --forcemanual option,
-|               | you need caution.
-*---------------+--------------+
-| -failover [--forcefence] [--forceactive] \<serviceId1\> \<serviceId2\> |
-|               | Initiate a failover from serviceId1 to serviceId2.
-|               | Try to failover to the target service even if it is not ready
-|               | if the --forceactive option is used.
-|               | This command can not be used if automatic failover is enabled.
-*---------------+--------------+
-| -getServiceState \<serviceId\> | Returns the state of the service.
-*---------------+--------------+
-| -checkHealth \<serviceId\> | Requests that the service perform a health
-|                            | check. The RMAdmin tool will exit with a
-|                            | non-zero exit code if the check fails.
-*---------------+--------------+
-| -help [cmd] | Displays help for the given command or all commands if none is
-|             | specified.
-*---------------+--------------+
-
-
-  Runs ResourceManager admin client
-
-** <<<scmadmin>>>
-
-  Usage: <<<yarn scmadmin [options] >>>
-
-*---------------+--------------+
-|| COMMAND_OPTIONS || Description                   |
-*---------------+--------------+
-| -help | Help
-*---------------+--------------+
-| -runCleanerTask | Runs the cleaner task
-*---------------+--------------+
-
-  Runs Shared Cache Manager admin client
-
-
-** sharedcachemanager
-
-  Usage: <<<yarn sharedcachemanager>>>
-
-  Start the Shared Cache Manager
-
-** timelineserver
-
-  Usage: <<<yarn timelineserver>>>
-
-  Start the TimeLineServer
-
-
-* Files
-
-** <<etc/hadoop/hadoop-env.sh>>
-
-    This file stores the global settings used by all Hadoop shell commands.
-
-** <<etc/hadoop/yarn-env.sh>>
-
-    This file stores overrides used by all YARN shell commands.
-
-** <<etc/hadoop/hadoop-user-functions.sh>>
-
-    This file allows for advanced users to override some shell functionality.
-
-** <<~/.hadooprc>>
-
-    This stores the personal environment for an individual user.  It is
-    processed after the <<<hadoop-env.sh>>>, <<<hadoop-user-functions.sh>>>, and <<<yarn-env.sh>>> files
-    and can contain the same settings.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/index.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/index.apt.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/index.apt.vm
deleted file mode 100644
index 43e5b02..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/index.apt.vm
+++ /dev/null
@@ -1,82 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Apache Hadoop NextGen MapReduce
-  ---
-  ---
-  ${maven.build.timestamp}
-  
-MapReduce NextGen aka YARN aka MRv2
-
-  The new architecture, introduced in hadoop-0.23, divides the two major
-  functions of the JobTracker, resource management and job life-cycle
-  management, into separate components.
-
-  The new ResourceManager manages the global assignment of compute resources to 
-  applications and the per-application ApplicationMaster manages the 
-  application’s scheduling and coordination. 
-  
-  An application is either a single job in the sense of classic MapReduce jobs 
-  or a DAG of such jobs. 
-  
-  The ResourceManager and per-machine NodeManager daemon, which manages the 
-  user processes on that machine, form the computation fabric. 
-  
-  The per-application ApplicationMaster is, in effect, a framework specific 
-  library and is tasked with negotiating resources from the ResourceManager and 
-  working with the NodeManager(s) to execute and monitor the tasks.
-
-  More details are available in the {{{./YARN.html}Architecture}} document.
-
-
-Documentation Index
-
-* YARN
-
-  * {{{./YARN.html}YARN Architecture}}
- 
-  * {{{./CapacityScheduler.html}Capacity Scheduler}}
- 
-  * {{{./FairScheduler.html}Fair Scheduler}}
- 
-  * {{{./ResourceManagerRestart.html}ResourceManager Restart}}
- 
-  * {{{./ResourceManagerHA.html}ResourceManager HA}}
- 
-  * {{{./WebApplicationProxy.html}Web Application Proxy}}
- 
-  * {{{./TimelineServer.html}YARN Timeline Server}}
- 
-  * {{{./WritingYarnApplications.html}Writing YARN Applications}}
- 
-  * {{{./YarnCommands.html}YARN Commands}}
- 
-  * {{{hadoop-sls/SchedulerLoadSimulator.html}Scheduler Load Simulator}}
- 
-  * {{{./NodeManagerRestart.html}NodeManager Restart}}
- 
-  * {{{./DockerContainerExecutor.html}DockerContainerExecutor}}
- 
-  * {{{./NodeManagerCGroups.html}Using CGroups}}
- 
-  * {{{./SecureContainer.html}Secure Containers}}
- 
-  * {{{./registry/index.html}Registry}}
-
-* YARN REST APIs
-
-  * {{{./WebServicesIntro.html}Introduction}}
-
-  * {{{./ResourceManagerRest.html}Resource Manager}}
-
-  * {{{./NodeManagerRest.html}Node Manager}}

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
new file mode 100644
index 0000000..3c32cdd
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/CapacityScheduler.md
@@ -0,0 +1,186 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop: Capacity Scheduler
+==========================
+
+* [Purpose](#Purpose)
+* [Overview](#Overview)
+* [Features](#Features)
+* [Configuration](#Configuration)
+    * [Setting up `ResourceManager` to use `CapacityScheduler`](#Setting_up_ResourceManager_to_use_CapacityScheduler`)
+    * [Setting up queues](#Setting_up_queues)
+    * [Queue Properties](#Queue_Properties)
+    * [Other Properties](#Other_Properties)
+    * [Reviewing the configuration of the CapacityScheduler](#Reviewing_the_configuration_of_the_CapacityScheduler)
+* [Changing Queue Configuration](#Changing_Queue_Configuration)
+
+Purpose
+-------
+
+This document describes the `CapacityScheduler`, a pluggable scheduler for Hadoop which allows multiple tenants to securely share a large cluster such that their applications are allocated resources in a timely manner under constraints of allocated capacities.
+
+Overview
+--------
+
+The `CapacityScheduler` is designed to run Hadoop applications as a shared, multi-tenant cluster in an operator-friendly manner while maximizing the throughput and the utilization of the cluster.
+
+Traditionally each organization has its own private set of compute resources that have sufficient capacity to meet the organization's SLA under peak or near-peak conditions. This generally leads to poor average utilization and the overhead of managing multiple independent clusters, one per organization. Sharing clusters between organizations is a cost-effective manner of running large Hadoop installations since it allows them to reap the benefits of economies of scale without creating private clusters. However, organizations are concerned about sharing a cluster because they are worried about others using the resources that are critical for their SLAs.
+
+The `CapacityScheduler` is designed to allow sharing a large cluster while giving each organization capacity guarantees. The central idea is that the available resources in the Hadoop cluster are shared among multiple organizations who collectively fund the cluster based on their computing needs. There is an added benefit that an organization can access any excess capacity not being used by others. This provides elasticity for the organizations in a cost-effective manner.
+
+Sharing clusters across organizations necessitates strong support for multi-tenancy since each organization must be guaranteed capacity and safe-guards to ensure the shared cluster is impervious to a single rogue application or user, or sets thereof. The `CapacityScheduler` provides a stringent set of limits to ensure that a single application, user, or queue cannot consume a disproportionate amount of resources in the cluster. Also, the `CapacityScheduler` provides limits on initialized and pending applications from a single user and queue to ensure fairness and stability of the cluster.
+
+The primary abstraction provided by the `CapacityScheduler` is the concept of *queues*. These queues are typically set up by administrators to reflect the economics of the shared cluster.
+
+To provide further control and predictability on sharing of resources, the `CapacityScheduler` supports *hierarchical queues* to ensure resources are shared among the sub-queues of an organization before other queues are allowed to use free resources, thereby providing *affinity* for sharing free resources among applications of a given organization.
+
+Features
+--------
+
+The `CapacityScheduler` supports the following features:
+
+* **Hierarchical Queues** - Hierarchy of queues is supported to ensure resources are shared among the sub-queues of an organization before other queues are allowed to use free resources, thereby providing more control and predictability.
+
+* **Capacity Guarantees** - Queues are allocated a fraction of the capacity of the grid in the sense that a certain capacity of resources will be at their disposal. All applications submitted to a queue will have access to the capacity allocated to the queue. Administrators can configure soft limits and optional hard limits on the capacity allocated to each queue.
+
+* **Security** - Each queue has strict ACLs which controls which users can submit applications to individual queues. Also, there are safe-guards to ensure that users cannot view and/or modify applications from other users. Also, per-queue and system administrator roles are supported.
+
+* **Elasticity** - Free resources can be allocated to any queue beyond its capacity. When there is demand for these resources from queues running below capacity at a future point in time, as tasks scheduled on these resources complete, they will be assigned to applications on queues running below the capacity (pre-emption is not supported). This ensures that resources are available in a predictable and elastic manner to queues, thus preventing artificial silos of resources in the cluster which helps utilization.
+
+* **Multi-tenancy** - Comprehensive set of limits are provided to prevent a single application, user and queue from monopolizing resources of the queue or the cluster as a whole to ensure that the cluster isn't overwhelmed.
+
+* **Operability**
+
+    * Runtime Configuration - The queue definitions and properties such as capacity, ACLs can be changed, at runtime, by administrators in a secure manner to minimize disruption to users. Also, a console is provided for users and administrators to view current allocation of resources to various queues in the system. Administrators can *add additional queues* at runtime, but queues cannot be *deleted* at runtime.
+
+    * Drain applications - Administrators can *stop* queues at runtime to ensure that while existing applications run to completion, no new applications can be submitted. If a queue is in `STOPPED` state, new applications cannot be submitted to *itself* or *any of its child queues*. Existing applications continue to completion, thus the queue can be *drained* gracefully. Administrators can also *start* the stopped queues.
+
+* **Resource-based Scheduling** - Support for resource-intensive applications, wherein an application can optionally specify higher resource requirements than the default, thereby accommodating applications with differing resource requirements. Currently, *memory* is the only resource requirement supported.
+
+Configuration
+-------------
+
+###Setting up `ResourceManager` to use `CapacityScheduler`
+
+  To configure the `ResourceManager` to use the `CapacityScheduler`, set the following property in the **conf/yarn-site.xml**:
+
+| Property | Value |
+|:---- |:---- |
+| `yarn.resourcemanager.scheduler.class` | `org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler` |
+
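+  For example, a sketch of the corresponding entry in **conf/yarn-site.xml**:
+
+```xml
+<property>
+  <name>yarn.resourcemanager.scheduler.class</name>
+  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
+</property>
+```
+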
+###Setting up queues
+
+  `etc/hadoop/capacity-scheduler.xml` is the configuration file for the `CapacityScheduler`.
+
+  The `CapacityScheduler` has a pre-defined queue called *root*. All queues in the system are children of the root queue.
+
+  Further queues can be set up by configuring `yarn.scheduler.capacity.root.queues` with a list of comma-separated child queues.
+
+  The configuration for `CapacityScheduler` uses a concept called *queue path* to configure the hierarchy of queues. The *queue path* is the full path of the queue's hierarchy, starting at *root*, with . (dot) as the delimiter.
+
+  A given queue's children can be defined with the configuration knob: `yarn.scheduler.capacity.<queue-path>.queues`. Children do not inherit properties directly from the parent unless otherwise noted.
+
+  Here is an example with three top-level child-queues `a`, `b` and `c` and some sub-queues for `a` and `b`:
+    
+```xml
+<property>
+  <name>yarn.scheduler.capacity.root.queues</name>
+  <value>a,b,c</value>
+  <description>The queues at this level (root is the root queue).
+  </description>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.a.queues</name>
+  <value>a1,a2</value>
+  <description>The queues at this level (root is the root queue).
+  </description>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.b.queues</name>
+  <value>b1,b2,b3</value>
+  <description>The queues at this level (root is the root queue).
+  </description>
+</property>
+```
+
+###Queue Properties
+
+  * Resource Allocation
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.scheduler.capacity.<queue-path>.capacity` | Queue *capacity* in percentage (%) as a float (e.g. 12.5). The sum of capacities for all queues, at each level, must be equal to 100. Applications in the queue may consume more resources than the queue's capacity if there are free resources, providing elasticity. |
+| `yarn.scheduler.capacity.<queue-path>.maximum-capacity` | Maximum queue capacity in percentage (%) as a float. This limits the *elasticity* for applications in the queue. Defaults to -1 which disables it. |
+| `yarn.scheduler.capacity.<queue-path>.minimum-user-limit-percent` | Each queue enforces a limit on the percentage of resources allocated to a user at any given time, if there is demand for resources. The user limit can vary between a minimum and maximum value. The former (the minimum value) is set to this property value and the latter (the maximum value) depends on the number of users who have submitted applications. For example, suppose the value of this property is 25. If two users have submitted applications to a queue, no single user can use more than 50% of the queue resources. If a third user submits an application, no single user can use more than 33% of the queue resources. With 4 or more users, no user can use more than 25% of the queue's resources. A value of 100 implies no user limits are imposed. The default is 100. Value is specified as an integer. |
+| `yarn.scheduler.capacity.<queue-path>.user-limit-factor` | The multiple of the queue capacity which can be configured to allow a single user to acquire more resources. By default this is set to 1, which ensures that a single user can never take more than the queue's configured capacity irrespective of how idle the cluster is. Value is specified as a float. |
+| `yarn.scheduler.capacity.<queue-path>.maximum-allocation-mb` | The per queue maximum limit of memory to allocate to each container request at the Resource Manager. This setting overrides the cluster configuration `yarn.scheduler.maximum-allocation-mb`. This value must be smaller than or equal to the cluster maximum. |
+| `yarn.scheduler.capacity.<queue-path>.maximum-allocation-vcores` | The per queue maximum limit of virtual cores to allocate to each container request at the Resource Manager. This setting overrides the cluster configuration `yarn.scheduler.maximum-allocation-vcores`. This value must be smaller than or equal to the cluster maximum. |
+
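+  For illustration, a sketch assigning capacities to the child queues from the earlier example; the percentages are arbitrary, but the capacities of the children of each parent must sum to 100:
+
+```xml
+<property>
+  <name>yarn.scheduler.capacity.root.a.capacity</name>
+  <value>40</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.b.capacity</name>
+  <value>40</value>
+</property>
+
+<property>
+  <name>yarn.scheduler.capacity.root.c.capacity</name>
+  <value>20</value>
+</property>
+```
+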
+  * Running and Pending Application Limits
+  
+  The `CapacityScheduler` supports the following parameters to control the running and pending applications:
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.scheduler.capacity.maximum-applications` / `yarn.scheduler.capacity.<queue-path>.maximum-applications` | Maximum number of applications in the system which can be concurrently active, both running and pending. Limits on each queue are directly proportional to their queue capacities and user limits. This is a hard limit and any applications submitted when this limit is reached will be rejected. Default is 10000. This can be set for all queues with `yarn.scheduler.capacity.maximum-applications` and can also be overridden on a per queue basis by setting `yarn.scheduler.capacity.<queue-path>.maximum-applications`. Integer value expected. |
+| `yarn.scheduler.capacity.maximum-am-resource-percent` / `yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent` | Maximum percent of resources in the cluster which can be used to run application masters - controls the number of concurrent active applications. Limits on each queue are directly proportional to their queue capacities and user limits. Specified as a float, i.e. 0.5 = 50%. Default is 10%. This can be set for all queues with `yarn.scheduler.capacity.maximum-am-resource-percent` and can also be overridden on a per queue basis by setting `yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent`. |
+
+  * Queue Administration & Permissions
+  
+  The `CapacityScheduler` supports the following parameters to administer the queues:
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.scheduler.capacity.<queue-path>.state` | The *state* of the queue. Can be one of `RUNNING` or `STOPPED`. If a queue is in `STOPPED` state, new applications cannot be submitted to *itself* or *any of its child queues*. Thus, if the *root* queue is `STOPPED` no applications can be submitted to the entire cluster. Existing applications continue to completion, thus the queue can be *drained* gracefully. Value is specified as an enumeration. |
+| `yarn.scheduler.capacity.root.<queue-path>.acl_submit_applications` | The *ACL* which controls who can *submit* applications to the given queue. If the given user/group has necessary ACLs on the given queue or *one of the parent queues in the hierarchy* they can submit applications. *ACLs* for this property *are* inherited from the parent queue if not specified. |
+| `yarn.scheduler.capacity.root.<queue-path>.acl_administer_queue` | The *ACL* which controls who can *administer* applications on the given queue. If the given user/group has necessary ACLs on the given queue or *one of the parent queues in the hierarchy* they can administer applications. *ACLs* for this property *are* inherited from the parent queue if not specified. |
+
+**Note:** An *ACL* is of the form *user1,user2 group1,group2*, i.e. a comma-separated list of users, followed by a space, followed by a comma-separated list of groups. The special value of * implies *anyone*. The special value of *space* implies *no one*. The default is * for the root queue if not specified.
+
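+  For illustration, a sketch of a submit ACL on queue `a` from the earlier example; the user and group names are hypothetical, and the format is users, a space, then groups:
+
+```xml
+<property>
+  <name>yarn.scheduler.capacity.root.a.acl_submit_applications</name>
+  <value>alice,bob developers</value>
+</property>
+```
+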
+###Other Properties
+
+  * Resource Calculator
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.scheduler.capacity.resource-calculator` | The ResourceCalculator implementation to be used to compare Resources in the scheduler. The default, `org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator`, only uses Memory, while `DominantResourceCalculator` uses the dominant resource to compare multi-dimensional resources such as Memory, CPU etc. A Java ResourceCalculator class name is expected. |
+
+  * Data Locality
+
+| Property | Description |
+|:---- |:---- |
+| `yarn.scheduler.capacity.node-locality-delay` | Number of missed scheduling opportunities after which the CapacityScheduler attempts to schedule rack-local containers. Typically this should be set to the number of nodes in the cluster. The default is 40, which approximates the number of nodes in one rack. A positive integer value is expected. |
+
+###Reviewing the configuration of the CapacityScheduler
+
+  Once the installation and configuration are complete, you can review the setup after starting the YARN cluster from the web UI.
+
+  * Start the YARN cluster in the normal manner.
+
+  * Open the `ResourceManager` web UI.
+
+  * The */scheduler* web-page should show the resource usages of individual queues.
+
+Changing Queue Configuration
+----------------------------
+
+Changing queue properties and adding new queues is very simple. You need to edit **conf/capacity-scheduler.xml** and run *yarn rmadmin -refreshQueues*.
+
+    $ vi $HADOOP_CONF_DIR/capacity-scheduler.xml
+    $ $HADOOP_YARN_HOME/bin/yarn rmadmin -refreshQueues
+
+**Note:** Queues cannot be *deleted*; only the addition of new queues is supported. The updated queue configuration should be valid, i.e. the queue capacities at each *level* should sum to 100%.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/2e44b75f/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainerExecutor.md.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainerExecutor.md.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainerExecutor.md.vm
new file mode 100644
index 0000000..fbfe04b
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainerExecutor.md.vm
@@ -0,0 +1,154 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Docker Container Executor
+=========================
+
+* [Overview](#Overview)
+* [Cluster Configuration](#Cluster_Configuration)
+* [Tips for connecting to a secure docker repository](#Tips_for_connecting_to_a_secure_docker_repository)
+* [Job Configuration](#Job_Configuration)
+* [Docker Image Requirements](#Docker_Image_Requirements)
+* [Working example of yarn launched docker containers](#Working_example_of_yarn_launched_docker_containers)
+
+Overview
+--------
+
+[Docker](https://www.docker.io/) combines an easy-to-use interface to Linux containers with easy-to-construct image files for those containers. In short, Docker launches very lightweight virtual machines.
+
+The Docker Container Executor (DCE) allows the YARN NodeManager to launch YARN containers into Docker containers. Users can specify the Docker images they want for their YARN containers. These containers provide a custom software environment in which the user's code runs, isolated from the software environment of the NodeManager. These containers can include special libraries needed by the application, and they can have different versions of Perl, Python, and even Java than what is installed on the NodeManager. Indeed, these containers can run a different flavor of Linux than what is running on the NodeManager -- although the YARN container must define all the environments and libraries needed to run the job, nothing will be shared with the NodeManager.
+
+Docker for YARN provides both consistency (all YARN containers will have the same software environment) and isolation (no interference with whatever is installed on the physical machine).
+
+Cluster Configuration
+---------------------
+
+Docker Container Executor runs in non-secure mode of HDFS and YARN. It will not run in secure mode, and will exit if it detects secure mode.
+
+The DockerContainerExecutor requires the Docker daemon to be running on the NodeManagers, and the Docker client to be installed and able to start Docker containers. To prevent timeouts while starting jobs, the Docker images to be used by a job should already be downloaded in the NodeManagers. Here's an example of how this can be done:
+
+    sudo docker pull sequenceiq/hadoop-docker:2.4.1
+
+This should be done as part of the NodeManager startup.
+
+The following properties must be set in yarn-site.xml:
+
+```xml
+<property>
+ <name>yarn.nodemanager.docker-container-executor.exec-name</name>
+  <value>/usr/bin/docker</value>
+  <description>
+     Name or path to the Docker client. This is a required parameter. If this is empty,
+     the user must pass an image name as part of the job invocation (see below).
+  </description>
+</property>
+
+<property>
+  <name>yarn.nodemanager.container-executor.class</name>
+  <value>org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor</value>
+  <description>
+     This is the container executor setting that ensures that all
+     jobs are started with the DockerContainerExecutor.
+  </description>
+</property>
+```
+
+Administrators should be aware that DCE doesn't currently provide user namespace isolation. This means, in particular, that software running as root in the YARN container will have root privileges in the underlying NodeManager. Put differently, DCE currently provides no better security guarantees than YARN's Default Container Executor. In fact, DockerContainerExecutor will exit if it detects secure YARN.
+
+Tips for connecting to a secure docker repository
+-------------------------------------------------
+
+By default, Docker images are pulled from the Docker public repository. The format of a Docker image URL is: *username*/*image\_name*. For example, sequenceiq/hadoop-docker:2.4.1 is an image in the Docker public repository that contains Java and Hadoop.
+
+If you want your own private repository, you provide the repository URL instead of your username. Therefore, the image URL becomes: *private\_repo\_url*/*image\_name*. For example, if your repository is on localhost:8080, your image URL would look like: localhost:8080/hadoop-docker
+
+To connect to a secure docker repository, you can use the following invocation:
+
+```
+    docker login [OPTIONS] [SERVER]
+
+    Register or log in to a Docker registry server, if no server is specified
+    "https://index.docker.io/v1/" is the default.
+
+  -e, --email=""       Email
+  -p, --password=""    Password
+  -u, --username=""    Username
+```
+
+If you want to log in to a self-hosted registry, you can specify this by adding the server name.
+
+    docker login <private_repo_url>
+
+This needs to be run as part of the NodeManager startup, or as a cron job if the login session expires periodically. You can login to multiple docker repositories from the same NodeManager, but all your users will have access to all your repositories, as at present the DockerContainerExecutor does not support per-job docker login.
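+
+For instance, a minimal cron entry might re-run the login periodically. The schedule, user, registry URL, and credentials below are illustrative assumptions (and real deployments should avoid plaintext passwords):
+
+```bash
+# Hypothetical /etc/cron.d/docker-registry-login: refresh the login every
+# 12 hours, running as the account the NodeManager runs as (assumed 'yarn').
+REGISTRY=myregistry.example.com:5000
+0 */12 * * * yarn docker login -u myuser -p mypassword -e me@example.com $REGISTRY
+```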
+
+Job Configuration
+-----------------
+
+Currently you cannot configure any of the Docker settings with the job configuration. You can provide Mapper, Reducer, and ApplicationMaster environment overrides for the Docker images, using the following three JVM properties respectively (MapReduce jobs only); a combined invocation is sketched after this list:
+
+* `mapreduce.map.env`: You can override the mapper's image by passing `yarn.nodemanager.docker-container-executor.image-name`=*your_image_name* to this JVM property.
+
+* `mapreduce.reduce.env`: You can override the reducer's image by passing `yarn.nodemanager.docker-container-executor.image-name`=*your_image_name* to this JVM property.
+
+* `yarn.app.mapreduce.am.env`: You can override the ApplicationMaster's image by passing `yarn.nodemanager.docker-container-executor.image-name`=*your_image_name* to this JVM property.
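+
+As a combined sketch (referenced from the list above), a single MR job submission overriding all three images might look like the following. The jar, class, and image names are placeholders, and the job is assumed to use ToolRunner so that -D options are parsed:
+
+```bash
+# Hypothetical invocation: per-job mapper, reducer, and AM image overrides.
+# The images are assumed to be already pulled on the NodeManagers.
+hadoop jar my-job.jar MyJob \
+  -Dmapreduce.map.env="yarn.nodemanager.docker-container-executor.image-name=repo/mapper-image:1.0" \
+  -Dmapreduce.reduce.env="yarn.nodemanager.docker-container-executor.image-name=repo/reducer-image:1.0" \
+  -Dyarn.app.mapreduce.am.env="yarn.nodemanager.docker-container-executor.image-name=repo/am-image:1.0" \
+  input_dir output_dir
+```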
+
+Docker Image Requirements
+-------------------------
+
+The Docker Images used for YARN containers must meet the following requirements:
+
+The distro and version of Linux in your Docker image can be quite different from that of your NodeManager. (Docker does have a few limitations in this regard, but you're not likely to hit them.) However, if you're using the MapReduce framework, then your image will need to be configured for running Hadoop. Java must be installed in the container, and the following environment variables must be defined in the image: JAVA_HOME, HADOOP_COMMON_PATH, HADOOP_HDFS_HOME, HADOOP_MAPRED_HOME, HADOOP_YARN_HOME, and HADOOP_CONF_DIR.
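+
+As an illustration only, an image meeting these requirements could be built with a script like the one below. The base image, install locations, and tag are assumptions -- DCE requires only that the variables themselves are defined in the image:
+
+```bash
+# Hypothetical build script: writes a Dockerfile defining the variables
+# listed above, then builds the image. All paths and names are examples.
+cat > Dockerfile <<'EOF'
+FROM centos:6
+ENV JAVA_HOME /usr/java/default
+ENV HADOOP_COMMON_PATH /usr/local/hadoop
+ENV HADOOP_HDFS_HOME /usr/local/hadoop
+ENV HADOOP_MAPRED_HOME /usr/local/hadoop
+ENV HADOOP_YARN_HOME /usr/local/hadoop
+ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop
+EOF
+sudo docker build -t myrepo/hadoop-yarn-image:1.0 .
+```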
+
+Working example of yarn launched docker containers
+--------------------------------------------------
+
+The following example shows how to run teragen using DockerContainerExecutor.
+
+Step 1. First ensure that YARN is properly configured with DockerContainerExecutor (see above).
+
+```xml
+<property>
+  <name>yarn.nodemanager.docker-container-executor.exec-name</name>
+  <value>docker -H=tcp://0.0.0.0:4243</value>
+  <description>
+     Name or path to the Docker client. The TCP socket must be
+     where the Docker daemon is listening.
+  </description>
+</property>
+
+<property>
+  <name>yarn.nodemanager.container-executor.class</name>
+  <value>org.apache.hadoop.yarn.server.nodemanager.DockerContainerExecutor</value>
+  <description>
+     This is the container executor setting that ensures that all
+     jobs are started with the DockerContainerExecutor.
+  </description>
+</property>
+```
+
+Step 2. Pick a custom Docker image if you want. In this example, we'll use sequenceiq/hadoop-docker:2.4.1 from the Docker Hub repository. It has a JDK, Hadoop, and all the previously mentioned environment variables configured.
+
+Step 3. Run.
+
+```bash
+hadoop jar $HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples-${project.version}.jar \
+  teragen \
+  -Dmapreduce.map.env="yarn.nodemanager.docker-container-executor.image-name=sequenceiq/hadoop-docker:2.4.1" \
+  -Dyarn.app.mapreduce.am.env="yarn.nodemanager.docker-container-executor.image-name=sequenceiq/hadoop-docker:2.4.1" \
+  1000 \
+  teragen_out_dir
+```
+
+Once it succeeds, you can check the YARN debug logs to verify that Docker has indeed launched containers.
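+
+One hypothetical way to spot-check, assuming default log locations and shell access to a NodeManager host:
+
+```bash
+# Paths depend on your log configuration; this greps the NodeManager log
+# for the docker invocations DCE issues.
+grep -i docker "$HADOOP_PREFIX"/logs/yarn-*-nodemanager-*.log
+
+# Or list running containers directly while the job is in flight.
+sudo docker ps
+```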
+


[16/43] hadoop git commit: HADOOP-10774. Update KerberosTestUtils for hadoop-auth tests when using IBM Java (sangamesh via aw)

Posted by zj...@apache.org.
HADOOP-10774. Update KerberosTestUtils for hadoop-auth tests when using IBM Java (sangamesh via aw)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/b01d3433
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/b01d3433
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/b01d3433

Branch: refs/heads/YARN-2928
Commit: b01d3433aefb68a0f66a48ac9cae7d32463ab95e
Parents: 039366e
Author: Allen Wittenauer <aw...@apache.org>
Authored: Sat Feb 28 23:22:06 2015 -0800
Committer: Allen Wittenauer <aw...@apache.org>
Committed: Sat Feb 28 23:22:06 2015 -0800

----------------------------------------------------------------------
 .../authentication/KerberosTestUtils.java       | 40 ++++++++++++++------
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 ++
 2 files changed, 32 insertions(+), 11 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/b01d3433/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/KerberosTestUtils.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/KerberosTestUtils.java b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/KerberosTestUtils.java
index 7629a30..8fc08e2 100644
--- a/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/KerberosTestUtils.java
+++ b/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/KerberosTestUtils.java
@@ -32,12 +32,14 @@ import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.Callable;
 
+import static org.apache.hadoop.util.PlatformName.IBM_JAVA;
+
 /**
  * Test helper class for Java Kerberos setup.
  */
 public class KerberosTestUtils {
   private static String keytabFile = new File(System.getProperty("test.dir", "target"),
-          UUID.randomUUID().toString()).toString();
+          UUID.randomUUID().toString()).getAbsolutePath();
 
   public static String getRealm() {
     return "EXAMPLE.COM";
@@ -65,18 +67,34 @@ public class KerberosTestUtils {
     @Override
     public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
       Map<String, String> options = new HashMap<String, String>();
-      options.put("keyTab", KerberosTestUtils.getKeytabFile());
-      options.put("principal", principal);
-      options.put("useKeyTab", "true");
-      options.put("storeKey", "true");
-      options.put("doNotPrompt", "true");
-      options.put("useTicketCache", "true");
-      options.put("renewTGT", "true");
-      options.put("refreshKrb5Config", "true");
-      options.put("isInitiator", "true");
+      if (IBM_JAVA) {
+        options.put("useKeytab", KerberosTestUtils.getKeytabFile().startsWith("file://") ?   
+                    KerberosTestUtils.getKeytabFile() : "file://" +  KerberosTestUtils.getKeytabFile());
+        options.put("principal", principal);
+        options.put("refreshKrb5Config", "true");
+        options.put("credsType", "both");
+      } else {
+        options.put("keyTab", KerberosTestUtils.getKeytabFile());
+        options.put("principal", principal);
+        options.put("useKeyTab", "true");
+        options.put("storeKey", "true");
+        options.put("doNotPrompt", "true");
+        options.put("useTicketCache", "true");
+        options.put("renewTGT", "true");
+        options.put("refreshKrb5Config", "true");
+        options.put("isInitiator", "true");
+      } 
       String ticketCache = System.getenv("KRB5CCNAME");
       if (ticketCache != null) {
-        options.put("ticketCache", ticketCache);
+        if (IBM_JAVA) {
+          // IBM JAVA only respects the system property and not the env variable.
+          // The first value searched when "useDefaultCcache" is used.
+          System.setProperty("KRB5CCNAME", ticketCache);
+          options.put("useDefaultCcache", "true");
+          options.put("renewTGT", "true");
+        } else {
+          options.put("ticketCache", ticketCache);
+        }
       }
       options.put("debug", "true");
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/b01d3433/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index 74bf558..3c4dc99 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -403,6 +403,9 @@ Trunk (Unreleased)
 
     HADOOP-11637. bash location hard-coded in shell scripts (aw)
 
+    HADOOP-10774. Update KerberosTestUtils for hadoop-auth tests when using
+    IBM Java (sangamesh via aw)
+
   OPTIMIZATIONS
 
     HADOOP-7761. Improve the performance of raw comparisons. (todd)


[24/43] hadoop git commit: HADOOP-11658. Externalize io.compression.codecs property. Contributed by Kai Zheng.

Posted by zj...@apache.org.
HADOOP-11658. Externalize io.compression.codecs property. Contributed by Kai Zheng.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/ca1c00bf
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/ca1c00bf
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/ca1c00bf

Branch: refs/heads/YARN-2928
Commit: ca1c00bf814a8b8290a81d06b1f4918c36c7d9e0
Parents: cbb4925
Author: Akira Ajisaka <aa...@apache.org>
Authored: Mon Mar 2 01:09:54 2015 -0800
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Mon Mar 2 01:12:44 2015 -0800

----------------------------------------------------------------------
 hadoop-common-project/hadoop-common/CHANGES.txt |  3 +++
 .../hadoop/fs/CommonConfigurationKeys.java      | 17 +++++++++++-----
 .../io/compress/CompressionCodecFactory.java    | 21 +++++++++++++-------
 .../hadoop/io/compress/TestCodecFactory.java    |  3 ++-
 4 files changed, 31 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca1c00bf/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index 4c0c375..b8ed286 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -642,6 +642,9 @@ Release 2.7.0 - UNRELEASED
     HADOOP-10976. moving the source code of hadoop-tools docs to the
     directory under hadoop-tools (Masatake Iwasaki via aw)
 
+    HADOOP-11658. Externalize io.compression.codecs property.
+    (Kai Zheng via aajisaka)
+
   OPTIMIZATIONS
 
     HADOOP-11323. WritableComparator#compare keeps reference to byte array.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca1c00bf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
index 442dc7d..7575496 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
@@ -91,17 +91,24 @@ public class CommonConfigurationKeys extends CommonConfigurationKeysPublic {
   public static final String IPC_CALLQUEUE_IMPL_KEY = "callqueue.impl";
   public static final String IPC_CALLQUEUE_IDENTITY_PROVIDER_KEY = "identity-provider.impl";
 
+  /** This is for specifying the implementation for the mappings from
+   * hostnames to the racks they belong to
+   */
+  public static final String  NET_TOPOLOGY_CONFIGURED_NODE_MAPPING_KEY =
+      "net.topology.configured.node.mapping";
+
+  /**
+   * Supported compression codec classes
+   */
+  public static final String IO_COMPRESSION_CODECS_KEY = "io.compression.codecs";
+
   /** Internal buffer size for Lzo compressor/decompressors */
   public static final String  IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY =
     "io.compression.codec.lzo.buffersize";
+
   /** Default value for IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_KEY */
   public static final int     IO_COMPRESSION_CODEC_LZO_BUFFERSIZE_DEFAULT =
     64*1024;
-  /** This is for specifying the implementation for the mappings from
-   * hostnames to the racks they belong to
-   */
-  public static final String  NET_TOPOLOGY_CONFIGURED_NODE_MAPPING_KEY =
-    "net.topology.configured.node.mapping";
 
   /** Internal buffer size for Snappy compressor/decompressors */
   public static final String IO_COMPRESSION_CODEC_SNAPPY_BUFFERSIZE_KEY =

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca1c00bf/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
index eb35759..7476a15 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
@@ -24,6 +24,7 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.util.ReflectionUtils;
 
@@ -106,7 +107,8 @@ public class CompressionCodecFactory {
    * @param conf the configuration to look in
    * @return a list of the {@link CompressionCodec} classes
    */
-  public static List<Class<? extends CompressionCodec>> getCodecClasses(Configuration conf) {
+  public static List<Class<? extends CompressionCodec>> getCodecClasses(
+      Configuration conf) {
     List<Class<? extends CompressionCodec>> result
       = new ArrayList<Class<? extends CompressionCodec>>();
     // Add codec classes discovered via service loading
@@ -118,7 +120,8 @@ public class CompressionCodecFactory {
       }
     }
     // Add codec classes from configuration
-    String codecsString = conf.get("io.compression.codecs");
+    String codecsString = conf.get(
+        CommonConfigurationKeys.IO_COMPRESSION_CODECS_KEY);
     if (codecsString != null) {
       StringTokenizer codecSplit = new StringTokenizer(codecsString, ",");
       while (codecSplit.hasMoreElements()) {
@@ -161,7 +164,7 @@ public class CompressionCodecFactory {
         buf.append(itr.next().getName());
       }
     }
-    conf.set("io.compression.codecs", buf.toString());   
+    conf.set(CommonConfigurationKeys.IO_COMPRESSION_CODECS_KEY, buf.toString());
   }
   
   /**
@@ -172,7 +175,8 @@ public class CompressionCodecFactory {
     codecs = new TreeMap<String, CompressionCodec>();
     codecsByClassName = new HashMap<String, CompressionCodec>();
     codecsByName = new HashMap<String, CompressionCodec>();
-    List<Class<? extends CompressionCodec>> codecClasses = getCodecClasses(conf);
+    List<Class<? extends CompressionCodec>> codecClasses =
+        getCodecClasses(conf);
     if (codecClasses == null || codecClasses.isEmpty()) {
       addCodec(new GzipCodec());
       addCodec(new DefaultCodec());      
@@ -193,7 +197,8 @@ public class CompressionCodecFactory {
     CompressionCodec result = null;
     if (codecs != null) {
       String filename = file.getName();
-      String reversedFilename = new StringBuilder(filename).reverse().toString();
+      String reversedFilename =
+          new StringBuilder(filename).reverse().toString();
       SortedMap<String, CompressionCodec> subMap = 
         codecs.headMap(reversedFilename);
       if (!subMap.isEmpty()) {
@@ -239,7 +244,8 @@ public class CompressionCodecFactory {
       }
       CompressionCodec codec = getCodecByClassName(codecName);
       if (codec == null) {
-        // trying to get the codec by name in case the name was specified instead a class
+        // trying to get the codec by name in case the name was specified
+        // instead of a class
         codec = codecsByName.get(codecName.toLowerCase());
       }
       return codec;
@@ -260,7 +266,8 @@ public class CompressionCodecFactory {
      * @param codecName the canonical class name of the codec
      * @return the codec class
      */
-    public Class<? extends CompressionCodec> getCodecClassByName(String codecName) {
+    public Class<? extends CompressionCodec> getCodecClassByName(
+        String codecName) {
       CompressionCodec codec = getCodecByName(codecName);
       if (codec == null) {
         return null;

http://git-wip-us.apache.org/repos/asf/hadoop/blob/ca1c00bf/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecFactory.java
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecFactory.java b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecFactory.java
index 7601211..3b81a3f 100644
--- a/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecFactory.java
+++ b/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecFactory.java
@@ -24,6 +24,7 @@ import java.io.OutputStream;
 import java.util.*;
 
 import junit.framework.TestCase;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.conf.Configuration;
 
@@ -258,7 +259,7 @@ public class TestCodecFactory extends TestCase {
     checkCodec("overridden factory for gzip codec", NewGzipCodec.class, codec);
     
     Configuration conf = new Configuration();
-    conf.set("io.compression.codecs", 
+    conf.set(CommonConfigurationKeys.IO_COMPRESSION_CODECS_KEY,
         "   org.apache.hadoop.io.compress.GzipCodec   , " +
         "    org.apache.hadoop.io.compress.DefaultCodec  , " +
         " org.apache.hadoop.io.compress.BZip2Codec   ");


[21/43] hadoop git commit: HADOOP-11615. Update ServiceLevelAuth.md for YARN. Contributed by Brahma Reddy Battula.

Posted by zj...@apache.org.
HADOOP-11615. Update ServiceLevelAuth.md for YARN. Contributed by Brahma Reddy Battula.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/dd9cd079
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/dd9cd079
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/dd9cd079

Branch: refs/heads/YARN-2928
Commit: dd9cd0797c265edfa7c3f18d2efce7c8f2801a6d
Parents: 30e73eb
Author: Akira Ajisaka <aa...@apache.org>
Authored: Sun Mar 1 22:16:06 2015 -0800
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Sun Mar 1 22:16:06 2015 -0800

----------------------------------------------------------------------
 hadoop-common-project/hadoop-common/CHANGES.txt    |  3 +++
 .../src/site/markdown/ServiceLevelAuth.md          | 17 ++++++++---------
 2 files changed, 11 insertions(+), 9 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd9cd079/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index b1a7a7d..4c0c375 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -1025,6 +1025,9 @@ Release 2.7.0 - UNRELEASED
     HADOOP-11634. Description of webhdfs' principal/keytab should switch places
     each other. (Brahma Reddy Battula via ozawa)
 
+    HADOOP-11615. Update ServiceLevelAuth.md for YARN.
+    (Brahma Reddy Battula via aajisaka)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/dd9cd079/hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md b/hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md
index ae41b47..e0017d4 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md
@@ -68,10 +68,9 @@ This section lists the various Hadoop services and their configuration knobs:
 | security.datanode.protocol.acl | ACL for DatanodeProtocol, which is used by datanodes to communicate with the namenode. |
 | security.inter.datanode.protocol.acl | ACL for InterDatanodeProtocol, the inter-datanode protocol for updating generation timestamp. |
 | security.namenode.protocol.acl | ACL for NamenodeProtocol, the protocol used by the secondary namenode to communicate with the namenode. |
-| security.inter.tracker.protocol.acl | ACL for InterTrackerProtocol, used by the tasktrackers to communicate with the jobtracker. |
-| security.job.submission.protocol.acl | ACL for JobSubmissionProtocol, used by job clients to communciate with the jobtracker for job submission, querying job status etc. |
-| security.task.umbilical.protocol.acl | ACL for TaskUmbilicalProtocol, used by the map and reduce tasks to communicate with the parent tasktracker. |
-| security.refresh.policy.protocol.acl | ACL for RefreshAuthorizationPolicyProtocol, used by the dfsadmin and mradmin commands to refresh the security policy in-effect. |
+| security.job.client.protocol.acl | ACL for JobSubmissionProtocol, used by job clients to communicate with the resourcemanager for job submission, querying job status etc. |
+| security.job.task.protocol.acl | ACL for TaskUmbilicalProtocol, used by the map and reduce tasks to communicate with the parent nodemanager. |
+| security.refresh.policy.protocol.acl | ACL for RefreshAuthorizationPolicyProtocol, used by the dfsadmin and rmadmin commands to refresh the security policy in-effect. |
 | security.ha.service.protocol.acl | ACL for HAService protocol used by HAAdmin to manage the active and stand-by states of namenode. |
 
 ### Access Control Lists
@@ -98,15 +97,15 @@ If access control list is not defined for a service, the value of `security.serv
 
 ### Refreshing Service Level Authorization Configuration
 
-The service-level authorization configuration for the NameNode and JobTracker can be changed without restarting either of the Hadoop master daemons. The cluster administrator can change `$HADOOP_CONF_DIR/hadoop-policy.xml` on the master nodes and instruct the NameNode and JobTracker to reload their respective configurations via the `-refreshServiceAcl` switch to `dfsadmin` and `mradmin` commands respectively.
+The service-level authorization configuration for the NameNode and ResourceManager can be changed without restarting either of the Hadoop master daemons. The cluster administrator can change `$HADOOP_CONF_DIR/hadoop-policy.xml` on the master nodes and instruct the NameNode and ResourceManager to reload their respective configurations via the `-refreshServiceAcl` switch to `dfsadmin` and `rmadmin` commands respectively.
 
 Refresh the service-level authorization configuration for the NameNode:
 
-       $ bin/hadoop dfsadmin -refreshServiceAcl
+       $ bin/hdfs dfsadmin -refreshServiceAcl
 
-Refresh the service-level authorization configuration for the JobTracker:
+Refresh the service-level authorization configuration for the ResourceManager:
 
-       $ bin/hadoop mradmin -refreshServiceAcl
+       $ bin/yarn rmadmin -refreshServiceAcl
 
 Of course, one can use the `security.refresh.policy.protocol.acl` property in `$HADOOP_CONF_DIR/hadoop-policy.xml` to restrict access to the ability to refresh the service-level authorization configuration to certain users/groups.
 
@@ -125,7 +124,7 @@ Of course, one can use the `security.refresh.policy.protocol.acl` property in `$
 Allow only users `alice`, `bob` and users in the `mapreduce` group to submit jobs to the MapReduce cluster:
 
     <property>
-         <name>security.job.submission.protocol.acl</name>
+         <name>security.job.client.protocol.acl</name>
          <value>alice,bob mapreduce</value>
     </property>
 


[36/43] hadoop git commit: MAPREDUCE-6268. Fix typo in Task Attempt API's URL. Contributed by Ryu Kobayashi.

Posted by zj...@apache.org.
MAPREDUCE-6268. Fix typo in Task Attempt API's URL. Contributed by Ryu Kobayashi.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/742f9d90
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/742f9d90
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/742f9d90

Branch: refs/heads/YARN-2928
Commit: 742f9d90c00f823ad7fea7e79702fcf238fa5721
Parents: d1c6acc
Author: Tsuyoshi Ozawa <oz...@apache.org>
Authored: Tue Mar 3 16:21:16 2015 +0900
Committer: Tsuyoshi Ozawa <oz...@apache.org>
Committed: Tue Mar 3 16:21:16 2015 +0900

----------------------------------------------------------------------
 hadoop-mapreduce-project/CHANGES.txt                              | 3 +++
 .../src/site/markdown/HistoryServerRest.md                        | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/742f9d90/hadoop-mapreduce-project/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/CHANGES.txt b/hadoop-mapreduce-project/CHANGES.txt
index ccd24a6..5fd7d30 100644
--- a/hadoop-mapreduce-project/CHANGES.txt
+++ b/hadoop-mapreduce-project/CHANGES.txt
@@ -399,6 +399,9 @@ Release 2.7.0 - UNRELEASED
     MAPREDUCE-6223. TestJobConf#testNegativeValueForTaskVmem failures. 
     (Varun Saxena via kasha)
 
+    MAPREDUCE-6268. Fix typo in Task Attempt API's URL. (Ryu Kobayashi
+    via ozawa)
+
 Release 2.6.1 - UNRELEASED
 
   INCOMPATIBLE CHANGES

http://git-wip-us.apache.org/repos/asf/hadoop/blob/742f9d90/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/site/markdown/HistoryServerRest.md
----------------------------------------------------------------------
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/site/markdown/HistoryServerRest.md b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/site/markdown/HistoryServerRest.md
index 8a78754..b4ce00a 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/site/markdown/HistoryServerRest.md
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/site/markdown/HistoryServerRest.md
@@ -1889,7 +1889,7 @@ A Task Attempt resource contains information about a particular task attempt wit
 
 Use the following URI to obtain an Task Attempt Object, from a task identified by the attemptid value.
 
-      * http://<history server http address:port>/ws/v1/history/mapreduce/jobs/{jobid}/tasks/{taskid}/attempt/{attemptid}
+      * http://<history server http address:port>/ws/v1/history/mapreduce/jobs/{jobid}/tasks/{taskid}/attempts/{attemptid}
 
 #### HTTP Operations Supported
 


[28/43] hadoop git commit: YARN-3265. Fixed a deadlock in CapacityScheduler by always passing a queue's available resource-limit from the parent queue. Contributed by Wangda Tan.

Posted by zj...@apache.org.
http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
index ead5719..a5a2e5f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestLeafQueue.java
@@ -73,6 +73,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEven
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.QueueMetrics;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.AppAddedSchedulerEvent;
@@ -294,11 +295,13 @@ public class TestLeafQueue {
 	  //Verify the value for getAMResourceLimit for queues with < .1 maxcap
 	  Resource clusterResource = Resource.newInstance(50 * GB, 50);
 	  
-	  a.updateClusterResource(clusterResource);
+    a.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 	  assertEquals(Resource.newInstance(1 * GB, 1), 
 	    a.getAMResourceLimit());
     
-	  b.updateClusterResource(clusterResource);
+	  b.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 	  assertEquals(Resource.newInstance(5 * GB, 1), 
 	    b.getAMResourceLimit());
   }
@@ -347,7 +350,8 @@ public class TestLeafQueue {
     // Start testing...
     
     // Only 1 container
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(
         (int)(node_0.getTotalResource().getMemory() * a.getCapacity()) - (1*GB),
         a.getMetrics().getAvailableMB());
@@ -482,7 +486,8 @@ public class TestLeafQueue {
     // Start testing...
     
     // Only 1 container
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(1*GB, a.getUsedResources().getMemory());
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -492,7 +497,8 @@ public class TestLeafQueue {
 
     // Also 2nd -> minCapacity = 1024 since (.1 * 8G) < minAlloc, also
     // you can get one container more than user-limit
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -500,7 +506,8 @@ public class TestLeafQueue {
     assertEquals(2*GB, a.getMetrics().getAllocatedMB());
     
     // Can't allocate 3rd due to user-limit
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -509,7 +516,8 @@ public class TestLeafQueue {
     
     // Bump up user-limit-factor, now allocate should work
     a.setUserLimitFactor(10);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(3*GB, a.getUsedResources().getMemory());
     assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -517,7 +525,8 @@ public class TestLeafQueue {
     assertEquals(3*GB, a.getMetrics().getAllocatedMB());
 
     // One more should work, for app_1, due to user-limit-factor
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(4*GB, a.getUsedResources().getMemory());
     assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
@@ -527,7 +536,8 @@ public class TestLeafQueue {
     // Test max-capacity
     // Now - no more allocs since we are at max-cap
     a.setMaxCapacity(0.5f);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(4*GB, a.getUsedResources().getMemory());
     assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
@@ -642,19 +652,22 @@ public class TestLeafQueue {
 //            recordFactory)));
 
     // 1 container to user_0
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
 
     // Again one to user_0 since he hasn't exceeded user limit yet
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(3*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
 
     // One more to user_0 since he is the only active user
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(4*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(2*GB, app_1.getCurrentConsumption().getMemory());
@@ -705,7 +718,8 @@ public class TestLeafQueue {
     assertEquals("There should only be 1 active user!",
         1, qb.getActiveUsersManager().getNumActiveUsers());
     //get headroom
-    qb.assignContainers(clusterResource, node_0, false);
+    qb.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     qb.computeUserLimitAndSetHeadroom(app_0, clusterResource, app_0
         .getResourceRequest(u0Priority, ResourceRequest.ANY).getCapability(),
         null);
@@ -724,7 +738,8 @@ public class TestLeafQueue {
         TestUtils.createResourceRequest(ResourceRequest.ANY, 4*GB, 1, true,
             u1Priority, recordFactory)));
     qb.submitApplicationAttempt(app_2, user_1);
-    qb.assignContainers(clusterResource, node_1, false);
+    qb.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     qb.computeUserLimitAndSetHeadroom(app_0, clusterResource, app_0
         .getResourceRequest(u0Priority, ResourceRequest.ANY).getCapability(),
         null);
@@ -766,8 +781,10 @@ public class TestLeafQueue {
              u1Priority, recordFactory)));
     qb.submitApplicationAttempt(app_1, user_0);
     qb.submitApplicationAttempt(app_3, user_1);
-    qb.assignContainers(clusterResource, node_0, false);
-    qb.assignContainers(clusterResource, node_0, false);
+    qb.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
+    qb.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     qb.computeUserLimitAndSetHeadroom(app_3, clusterResource, app_3
         .getResourceRequest(u1Priority, ResourceRequest.ANY).getCapability(),
         null);
@@ -785,7 +802,8 @@ public class TestLeafQueue {
     app_4.updateResourceRequests(Collections.singletonList(
               TestUtils.createResourceRequest(ResourceRequest.ANY, 6*GB, 1, true,
                       u0Priority, recordFactory)));
-    qb.assignContainers(clusterResource, node_1, false);
+    qb.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     qb.computeUserLimitAndSetHeadroom(app_4, clusterResource, app_4
         .getResourceRequest(u0Priority, ResourceRequest.ANY).getCapability(),
         null);
@@ -857,7 +875,8 @@ public class TestLeafQueue {
             TestUtils.createResourceRequest(ResourceRequest.ANY, 1*GB, 1, true,
                 priority, recordFactory)));
 
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(1*GB, a.getUsedResources().getMemory());
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -873,7 +892,8 @@ public class TestLeafQueue {
         TestUtils.createResourceRequest(ResourceRequest.ANY, 1*GB, 2, true,
             priority, recordFactory)));
 
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
@@ -961,7 +981,8 @@ public class TestLeafQueue {
         1, a.getActiveUsersManager().getNumActiveUsers());
 
     // 1 container to user_0
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -971,7 +992,8 @@ public class TestLeafQueue {
       // the application is not yet active
 
     // Again one to user_0 since he hasn't exceeded user limit yet
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(3*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
@@ -987,7 +1009,8 @@ public class TestLeafQueue {
 
     // No more to user_0 since he is already over user-limit
     // and no more containers to queue since it's already at max-cap
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(3*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(1*GB, app_1.getCurrentConsumption().getMemory());
@@ -1000,7 +1023,8 @@ public class TestLeafQueue {
         TestUtils.createResourceRequest(ResourceRequest.ANY, 1*GB, 0, true,
             priority, recordFactory)));
     assertEquals(1, a.getActiveUsersManager().getNumActiveUsers());
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(0*GB, app_2.getHeadroom().getMemory());   // hit queue max-cap 
   }
 
@@ -1070,21 +1094,24 @@ public class TestLeafQueue {
      */
     
     // Only 1 container
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(1*GB, a.getUsedResources().getMemory());
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
 
     // Also 2nd -> minCapacity = 1024 since (.1 * 8G) < minAlloc, also
     // you can get one container more than user-limit
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
     
     // Can't allocate 3rd due to user-limit
     a.setUserLimit(25);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1102,7 +1129,8 @@ public class TestLeafQueue {
     // Now allocations should goto app_2 since 
     // user_0 is at limit inspite of high user-limit-factor
     a.setUserLimitFactor(10);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1111,7 +1139,8 @@ public class TestLeafQueue {
 
     // Now allocations should goto app_0 since 
     // user_0 is at user-limit not above it
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(6*GB, a.getUsedResources().getMemory());
     assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1121,7 +1150,8 @@ public class TestLeafQueue {
     // Test max-capacity
     // Now - no more allocs since we are at max-cap
     a.setMaxCapacity(0.5f);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(6*GB, a.getUsedResources().getMemory());
     assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1132,7 +1162,8 @@ public class TestLeafQueue {
     // Now, allocations should goto app_3 since it's under user-limit 
     a.setMaxCapacity(1.0f);
     a.setUserLimitFactor(1);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(7*GB, a.getUsedResources().getMemory()); 
     assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1140,7 +1171,8 @@ public class TestLeafQueue {
     assertEquals(1*GB, app_3.getCurrentConsumption().getMemory());
 
     // Now we should assign to app_3 again since user_2 is under user-limit
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(8*GB, a.getUsedResources().getMemory()); 
     assertEquals(3*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1239,7 +1271,8 @@ public class TestLeafQueue {
     // Start testing...
     
     // Only 1 container
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(1*GB, a.getUsedResources().getMemory());
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1249,7 +1282,8 @@ public class TestLeafQueue {
 
     // Also 2nd -> minCapacity = 1024 since (.1 * 8G) < minAlloc, also
     // you can get one container more than user-limit
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1257,7 +1291,8 @@ public class TestLeafQueue {
     assertEquals(2*GB, a.getMetrics().getAllocatedMB());
     
     // Now, reservation should kick in for app_1
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(6*GB, a.getUsedResources().getMemory()); 
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1273,7 +1308,8 @@ public class TestLeafQueue {
             ContainerState.COMPLETE, "",
             ContainerExitStatus.KILLED_BY_RESOURCEMANAGER),
         RMContainerEventType.KILL, null, true);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5*GB, a.getUsedResources().getMemory()); 
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1289,7 +1325,8 @@ public class TestLeafQueue {
             ContainerState.COMPLETE, "",
             ContainerExitStatus.KILLED_BY_RESOURCEMANAGER),
         RMContainerEventType.KILL, null, true);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(4*GB, a.getUsedResources().getMemory());
     assertEquals(0*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(4*GB, app_1.getCurrentConsumption().getMemory());
@@ -1356,7 +1393,8 @@ public class TestLeafQueue {
 
     // Start testing...
 
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1365,7 +1403,8 @@ public class TestLeafQueue {
     assertEquals(0*GB, a.getMetrics().getAvailableMB());
 
     // Now, reservation should kick in for app_1
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(6*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1378,7 +1417,8 @@ public class TestLeafQueue {
     // We do not need locality delay here
     doReturn(-1).when(a).getNodeLocalityDelay();
     
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(10*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(4*GB, app_1.getCurrentConsumption().getMemory());
@@ -1394,7 +1434,8 @@ public class TestLeafQueue {
             ContainerState.COMPLETE, "",
             ContainerExitStatus.KILLED_BY_RESOURCEMANAGER),
         RMContainerEventType.KILL, null, true);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(8*GB, a.getUsedResources().getMemory());
     assertEquals(0*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(8*GB, app_1.getCurrentConsumption().getMemory());
@@ -1462,20 +1503,23 @@ public class TestLeafQueue {
     // Start testing...
     
     // Only 1 container
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(1*GB, a.getUsedResources().getMemory());
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
 
     // Also 2nd -> minCapacity = 1024 since (.1 * 8G) < minAlloc, also
     // you can get one container more than user-limit
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2*GB, a.getUsedResources().getMemory());
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
     
     // Now, reservation should kick in for app_1
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(6*GB, a.getUsedResources().getMemory()); 
     assertEquals(2*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1489,7 +1533,8 @@ public class TestLeafQueue {
             ContainerState.COMPLETE, "",
             ContainerExitStatus.KILLED_BY_RESOURCEMANAGER),
         RMContainerEventType.KILL, null, true);
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5*GB, a.getUsedResources().getMemory()); 
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1498,7 +1543,8 @@ public class TestLeafQueue {
     assertEquals(1, app_1.getReReservations(priority));
 
     // Re-reserve
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5*GB, a.getUsedResources().getMemory()); 
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0*GB, app_1.getCurrentConsumption().getMemory());
@@ -1507,7 +1553,8 @@ public class TestLeafQueue {
     assertEquals(2, app_1.getReReservations(priority));
     
     // Try to schedule on node_1 now, should *move* the reservation
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(9*GB, a.getUsedResources().getMemory()); 
     assertEquals(1*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(4*GB, app_1.getCurrentConsumption().getMemory());
@@ -1524,7 +1571,8 @@ public class TestLeafQueue {
             ContainerState.COMPLETE, "",
             ContainerExitStatus.KILLED_BY_RESOURCEMANAGER),
         RMContainerEventType.KILL, null, true);
-    CSAssignment assignment = a.assignContainers(clusterResource, node_0, false);
+    CSAssignment assignment = a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(8*GB, a.getUsedResources().getMemory());
     assertEquals(0*GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(4*GB, app_1.getCurrentConsumption().getMemory());
@@ -1595,7 +1643,8 @@ public class TestLeafQueue {
     CSAssignment assignment = null;
     
     // Start with off switch, shouldn't allocate due to delay scheduling
-    assignment = a.assignContainers(clusterResource, node_2, false);
+    assignment = a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_2), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(1, app_0.getSchedulingOpportunities(priority));
@@ -1603,7 +1652,8 @@ public class TestLeafQueue {
     assertEquals(NodeType.NODE_LOCAL, assignment.getType()); // None->NODE_LOCAL
 
     // Another off switch, shouldn't allocate due to delay scheduling
-    assignment = a.assignContainers(clusterResource, node_2, false);
+    assignment = a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_2), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(2, app_0.getSchedulingOpportunities(priority));
@@ -1611,7 +1661,8 @@ public class TestLeafQueue {
     assertEquals(NodeType.NODE_LOCAL, assignment.getType()); // None->NODE_LOCAL
     
     // Another off switch, shouldn't allocate due to delay scheduling
-    assignment = a.assignContainers(clusterResource, node_2, false);
+    assignment = a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_2), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(3, app_0.getSchedulingOpportunities(priority));
@@ -1620,7 +1671,8 @@ public class TestLeafQueue {
     
     // Another off switch, now we should allocate 
     // since missedOpportunities=3 and reqdContainers=3
-    assignment = a.assignContainers(clusterResource, node_2, false);
+    assignment = a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     verify(app_0).allocate(eq(NodeType.OFF_SWITCH), eq(node_2), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(4, app_0.getSchedulingOpportunities(priority)); // should NOT reset
@@ -1628,7 +1680,8 @@ public class TestLeafQueue {
     assertEquals(NodeType.OFF_SWITCH, assignment.getType());
     
     // NODE_LOCAL - node_0
-    assignment = a.assignContainers(clusterResource, node_0, false);
+    assignment = a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     verify(app_0).allocate(eq(NodeType.NODE_LOCAL), eq(node_0), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // should reset
@@ -1636,7 +1689,8 @@ public class TestLeafQueue {
     assertEquals(NodeType.NODE_LOCAL, assignment.getType());
     
     // NODE_LOCAL - node_1
-    assignment = a.assignContainers(clusterResource, node_1, false);
+    assignment = a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     verify(app_0).allocate(eq(NodeType.NODE_LOCAL), eq(node_1), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // should reset
@@ -1664,13 +1718,15 @@ public class TestLeafQueue {
     doReturn(1).when(a).getNodeLocalityDelay();
     
     // Shouldn't assign RACK_LOCAL yet
-    assignment = a.assignContainers(clusterResource, node_3, false);
+    assignment = a.assignContainers(clusterResource, node_3, false,
+        new ResourceLimits(clusterResource));
     assertEquals(1, app_0.getSchedulingOpportunities(priority));
     assertEquals(2, app_0.getTotalRequiredResources(priority));
     assertEquals(NodeType.NODE_LOCAL, assignment.getType()); // None->NODE_LOCAL
 
     // Should assign RACK_LOCAL now
-    assignment = a.assignContainers(clusterResource, node_3, false);
+    assignment = a.assignContainers(clusterResource, node_3, false,
+        new ResourceLimits(clusterResource));
     verify(app_0).allocate(eq(NodeType.RACK_LOCAL), eq(node_3), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // should reset
@@ -1751,7 +1807,8 @@ public class TestLeafQueue {
     
     // Start with off switch, shouldn't allocate P1 due to delay scheduling
     // thus, no P2 either!
-    a.assignContainers(clusterResource, node_2, false);
+    a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_2), 
         eq(priority_1), any(ResourceRequest.class), any(Container.class));
     assertEquals(1, app_0.getSchedulingOpportunities(priority_1));
@@ -1763,7 +1820,8 @@ public class TestLeafQueue {
 
     // Another off-switch, shouldn't allocate P1 due to delay scheduling
     // thus, no P2 either!
-    a.assignContainers(clusterResource, node_2, false);
+    a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_2), 
         eq(priority_1), any(ResourceRequest.class), any(Container.class));
     assertEquals(2, app_0.getSchedulingOpportunities(priority_1));
@@ -1774,7 +1832,8 @@ public class TestLeafQueue {
     assertEquals(1, app_0.getTotalRequiredResources(priority_2));
 
     // Another off-switch, now we should allocate OFF_SWITCH P1
-    a.assignContainers(clusterResource, node_2, false);
+    a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     verify(app_0).allocate(eq(NodeType.OFF_SWITCH), eq(node_2), 
         eq(priority_1), any(ResourceRequest.class), any(Container.class));
     assertEquals(3, app_0.getSchedulingOpportunities(priority_1));
@@ -1785,7 +1844,8 @@ public class TestLeafQueue {
     assertEquals(1, app_0.getTotalRequiredResources(priority_2));
 
     // Now, NODE_LOCAL for P1
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     verify(app_0).allocate(eq(NodeType.NODE_LOCAL), eq(node_0), 
         eq(priority_1), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority_1));
@@ -1796,7 +1856,8 @@ public class TestLeafQueue {
     assertEquals(1, app_0.getTotalRequiredResources(priority_2));
 
     // Now, OFF_SWITCH for P2
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_1), 
         eq(priority_1), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority_1));
@@ -1872,7 +1933,8 @@ public class TestLeafQueue {
     app_0.updateResourceRequests(app_0_requests_0);
     
     // NODE_LOCAL - node_0_0
-    a.assignContainers(clusterResource, node_0_0, false);
+    a.assignContainers(clusterResource, node_0_0, false,
+        new ResourceLimits(clusterResource));
     verify(app_0).allocate(eq(NodeType.NODE_LOCAL), eq(node_0_0), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // should reset
@@ -1880,7 +1942,8 @@ public class TestLeafQueue {
 
     // No allocation on node_1_0 even though it's node/rack local since
     // required(ANY) == 0
-    a.assignContainers(clusterResource, node_1_0, false);
+    a.assignContainers(clusterResource, node_1_0, false,
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_1_0), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // Still zero
@@ -1896,14 +1959,16 @@ public class TestLeafQueue {
 
     // No allocation on node_0_1 even though it's node/rack local since
     // required(rack_1) == 0
-    a.assignContainers(clusterResource, node_0_1, false);
+    a.assignContainers(clusterResource, node_0_1, false,
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_1_0), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(1, app_0.getSchedulingOpportunities(priority)); 
     assertEquals(1, app_0.getTotalRequiredResources(priority));
     
     // NODE_LOCAL - node_1_0
-    a.assignContainers(clusterResource, node_1_0, false);
+    a.assignContainers(clusterResource, node_1_0, false,
+        new ResourceLimits(clusterResource));
     verify(app_0).allocate(eq(NodeType.NODE_LOCAL), eq(node_1_0), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // should reset
@@ -2030,7 +2095,9 @@ public class TestLeafQueue {
     assertEquals(2, e.activeApplications.size());
     assertEquals(1, e.pendingApplications.size());
 
-    e.updateClusterResource(Resources.createResource(200 * 16 * GB, 100 * 32));
+    Resource clusterResource = Resources.createResource(200 * 16 * GB, 100 * 32); 
+    e.updateClusterResource(clusterResource,
+        new ResourceLimits(clusterResource));
 
     // after updating cluster resource
     assertEquals(3, e.activeApplications.size());
@@ -2153,7 +2220,8 @@ public class TestLeafQueue {
     
     // node_0_1  
     // Shouldn't allocate since RR(rack_0) = null && RR(ANY) = relax: false
-    a.assignContainers(clusterResource, node_0_1, false);
+    a.assignContainers(clusterResource, node_0_1, false, 
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_0_1), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // should be 0
@@ -2175,7 +2243,8 @@ public class TestLeafQueue {
 
     // node_1_1  
     // Shouldn't allocate since RR(rack_1) = relax: false
-    a.assignContainers(clusterResource, node_1_1, false);
+    a.assignContainers(clusterResource, node_1_1, false, 
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_0_1), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // should be 0
@@ -2205,7 +2274,8 @@ public class TestLeafQueue {
 
     // node_1_1  
     // Shouldn't allocate since node_1_1 is blacklisted
-    a.assignContainers(clusterResource, node_1_1, false);
+    a.assignContainers(clusterResource, node_1_1, false, 
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_1_1), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // should be 0
@@ -2233,7 +2303,8 @@ public class TestLeafQueue {
 
     // node_1_1  
     // Shouldn't allocate since rack_1 is blacklisted
-    a.assignContainers(clusterResource, node_1_1, false);
+    a.assignContainers(clusterResource, node_1_1, false, 
+        new ResourceLimits(clusterResource));
     verify(app_0, never()).allocate(any(NodeType.class), eq(node_1_1), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); // should be 0
@@ -2259,7 +2330,8 @@ public class TestLeafQueue {
     // Blacklist: < host_0_0 >       <----
 
     // Now, should allocate since RR(rack_1) = relax: true
-    a.assignContainers(clusterResource, node_1_1, false);
+    a.assignContainers(clusterResource, node_1_1, false, 
+        new ResourceLimits(clusterResource));
     verify(app_0,never()).allocate(eq(NodeType.RACK_LOCAL), eq(node_1_1), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); 
@@ -2289,7 +2361,8 @@ public class TestLeafQueue {
     // host_1_0: 8G
     // host_1_1: 7G
 
-    a.assignContainers(clusterResource, node_1_0, false);
+    a.assignContainers(clusterResource, node_1_0, false, 
+        new ResourceLimits(clusterResource));
     verify(app_0).allocate(eq(NodeType.NODE_LOCAL), eq(node_1_0), 
         any(Priority.class), any(ResourceRequest.class), any(Container.class));
     assertEquals(0, app_0.getSchedulingOpportunities(priority)); 
@@ -2323,7 +2396,8 @@ public class TestLeafQueue {
 
     Resource newClusterResource = Resources.createResource(100 * 20 * GB,
         100 * 32);
-    a.updateClusterResource(newClusterResource);
+    a.updateClusterResource(newClusterResource, 
+        new ResourceLimits(newClusterResource));
     //  100 * 20 * 0.2 = 400
     assertEquals(a.getAMResourceLimit(), Resources.createResource(400 * GB, 1));
   }
@@ -2370,7 +2444,8 @@ public class TestLeafQueue {
             recordFactory)));
 
     try {
-      a.assignContainers(clusterResource, node_0, false);
+      a.assignContainers(clusterResource, node_0, false, 
+          new ResourceLimits(clusterResource));
     } catch (NullPointerException e) {
       Assert.fail("NPE when allocating container on node but "
           + "forget to set off-switch request should be handled");

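The hunks above append the same trailing argument, a ResourceLimits built
from the cluster resource, to every assignContainers call in TestLeafQueue.
A small helper could keep that boilerplate in one place; the sketch below
assumes only the four-argument signature visible in the diff, and the
helper name assignWithClusterLimit is hypothetical, not part of the patch.

    // Hypothetical helper for a test class such as TestLeafQueue. Assumes
    // the signature introduced by this change:
    //   assignContainers(Resource, FiCaSchedulerNode, boolean, ResourceLimits)
    private CSAssignment assignWithClusterLimit(LeafQueue queue,
        Resource clusterResource, FiCaSchedulerNode node,
        boolean needToUnreserve) {
      // The tests always pass the whole cluster as the limit, i.e. no
      // extra cap beyond the cluster resource itself.
      return queue.assignContainers(clusterResource, node, needToUnreserve,
          new ResourceLimits(clusterResource));
    }

With it, a call such as a.assignContainers(clusterResource, node_0, false,
new ResourceLimits(clusterResource)) collapses to
assignWithClusterLimit(a, clusterResource, node_0, false).
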
http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
index 696ad7a..4f89386 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
@@ -36,7 +36,6 @@ import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
 
-import org.junit.Assert;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -47,12 +46,14 @@ import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.security.YarnAuthorizationProvider;
 import org.apache.hadoop.yarn.server.resourcemanager.RMContext;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.NodeType;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
 import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
 import org.apache.hadoop.yarn.util.resource.Resources;
 import org.junit.After;
+import org.junit.Assert;
 import org.junit.Before;
 import org.junit.Test;
 import org.mockito.InOrder;
@@ -154,8 +155,9 @@ public class TestParentQueue {
         
         // Next call - nothing
         if (allocation > 0) {
-          doReturn(new CSAssignment(Resources.none(), type)).
-            when(queue).assignContainers(eq(clusterResource), eq(node), eq(false));
+          doReturn(new CSAssignment(Resources.none(), type)).when(queue)
+              .assignContainers(eq(clusterResource), eq(node), eq(false),
+                  any(ResourceLimits.class));
 
           // Mock the node's resource availability
           Resource available = node.getAvailableResource();
@@ -166,7 +168,8 @@ public class TestParentQueue {
         return new CSAssignment(allocatedResource, type);
       }
     }).
-    when(queue).assignContainers(eq(clusterResource), eq(node), eq(false));
+    when(queue).assignContainers(eq(clusterResource), eq(node), eq(false),
+        any(ResourceLimits.class));
   }
   
   private float computeQueueAbsoluteUsedCapacity(CSQueue queue, 
@@ -229,19 +232,21 @@ public class TestParentQueue {
     // Simulate B returning a container on node_0
     stubQueueAllocation(a, clusterResource, node_0, 0*GB);
     stubQueueAllocation(b, clusterResource, node_0, 1*GB);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, 
+        new ResourceLimits(clusterResource));
     verifyQueueMetrics(a, 0*GB, clusterResource);
     verifyQueueMetrics(b, 1*GB, clusterResource);
     
     // Now, A should get the scheduling opportunity since A=0G/6G, B=1G/14G
     stubQueueAllocation(a, clusterResource, node_1, 2*GB);
     stubQueueAllocation(b, clusterResource, node_1, 1*GB);
-    root.assignContainers(clusterResource, node_1, false);
+    root.assignContainers(clusterResource, node_1, false, 
+        new ResourceLimits(clusterResource));
     InOrder allocationOrder = inOrder(a, b);
     allocationOrder.verify(a).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(b).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(a, 2*GB, clusterResource);
     verifyQueueMetrics(b, 2*GB, clusterResource);
 
@@ -249,12 +254,13 @@ public class TestParentQueue {
     // since A has 2/6G while B has 2/14G
     stubQueueAllocation(a, clusterResource, node_0, 1*GB);
     stubQueueAllocation(b, clusterResource, node_0, 2*GB);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, 
+        new ResourceLimits(clusterResource));
     allocationOrder = inOrder(b, a);
     allocationOrder.verify(b).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(a).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(a, 3*GB, clusterResource);
     verifyQueueMetrics(b, 4*GB, clusterResource);
 
@@ -262,12 +268,13 @@ public class TestParentQueue {
     // since A has 3/6G while B has 4/14G
     stubQueueAllocation(a, clusterResource, node_0, 0*GB);
     stubQueueAllocation(b, clusterResource, node_0, 4*GB);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, 
+        new ResourceLimits(clusterResource));
     allocationOrder = inOrder(b, a);
     allocationOrder.verify(b).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(a).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(a, 3*GB, clusterResource);
     verifyQueueMetrics(b, 8*GB, clusterResource);
 
@@ -275,12 +282,13 @@ public class TestParentQueue {
     // since A has 3/6G while B has 8/14G
     stubQueueAllocation(a, clusterResource, node_1, 1*GB);
     stubQueueAllocation(b, clusterResource, node_1, 1*GB);
-    root.assignContainers(clusterResource, node_1, false);
+    root.assignContainers(clusterResource, node_1, false, 
+        new ResourceLimits(clusterResource));
     allocationOrder = inOrder(a, b);
     allocationOrder.verify(b).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(a).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(a, 4*GB, clusterResource);
     verifyQueueMetrics(b, 9*GB, clusterResource);
   }
@@ -441,7 +449,8 @@ public class TestParentQueue {
     stubQueueAllocation(b, clusterResource, node_0, 0*GB);
     stubQueueAllocation(c, clusterResource, node_0, 1*GB);
     stubQueueAllocation(d, clusterResource, node_0, 0*GB);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, 
+        new ResourceLimits(clusterResource));
     verifyQueueMetrics(a, 0*GB, clusterResource);
     verifyQueueMetrics(b, 0*GB, clusterResource);
     verifyQueueMetrics(c, 1*GB, clusterResource);
@@ -453,7 +462,8 @@ public class TestParentQueue {
     stubQueueAllocation(a, clusterResource, node_1, 0*GB);
     stubQueueAllocation(b2, clusterResource, node_1, 4*GB);
     stubQueueAllocation(c, clusterResource, node_1, 0*GB);
-    root.assignContainers(clusterResource, node_1, false);
+    root.assignContainers(clusterResource, node_1, false, 
+        new ResourceLimits(clusterResource));
     verifyQueueMetrics(a, 0*GB, clusterResource);
     verifyQueueMetrics(b, 4*GB, clusterResource);
     verifyQueueMetrics(c, 1*GB, clusterResource);
@@ -464,14 +474,15 @@ public class TestParentQueue {
     stubQueueAllocation(a1, clusterResource, node_0, 1*GB);
     stubQueueAllocation(b3, clusterResource, node_0, 2*GB);
     stubQueueAllocation(c, clusterResource, node_0, 2*GB);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, 
+        new ResourceLimits(clusterResource));
     InOrder allocationOrder = inOrder(a, c, b);
     allocationOrder.verify(a).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(c).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(b).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(a, 1*GB, clusterResource);
     verifyQueueMetrics(b, 6*GB, clusterResource);
     verifyQueueMetrics(c, 3*GB, clusterResource);
@@ -490,16 +501,17 @@ public class TestParentQueue {
     stubQueueAllocation(b3, clusterResource, node_2, 1*GB);
     stubQueueAllocation(b1, clusterResource, node_2, 1*GB);
     stubQueueAllocation(c, clusterResource, node_2, 1*GB);
-    root.assignContainers(clusterResource, node_2, false);
+    root.assignContainers(clusterResource, node_2, false, 
+        new ResourceLimits(clusterResource));
     allocationOrder = inOrder(a, a2, a1, b, c);
     allocationOrder.verify(a).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(a2).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(b).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(c).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(a, 3*GB, clusterResource);
     verifyQueueMetrics(b, 8*GB, clusterResource);
     verifyQueueMetrics(c, 4*GB, clusterResource);
@@ -599,7 +611,8 @@ public class TestParentQueue {
     // Simulate B returning a container on node_0
     stubQueueAllocation(a, clusterResource, node_0, 0*GB, NodeType.OFF_SWITCH);
     stubQueueAllocation(b, clusterResource, node_0, 1*GB, NodeType.OFF_SWITCH);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, 
+        new ResourceLimits(clusterResource));
     verifyQueueMetrics(a, 0*GB, clusterResource);
     verifyQueueMetrics(b, 1*GB, clusterResource);
     
@@ -607,12 +620,13 @@ public class TestParentQueue {
     // also, B gets a scheduling opportunity since A allocates RACK_LOCAL
     stubQueueAllocation(a, clusterResource, node_1, 2*GB, NodeType.RACK_LOCAL);
     stubQueueAllocation(b, clusterResource, node_1, 1*GB, NodeType.OFF_SWITCH);
-    root.assignContainers(clusterResource, node_1, false);
+    root.assignContainers(clusterResource, node_1, false, 
+        new ResourceLimits(clusterResource));
     InOrder allocationOrder = inOrder(a, b);
     allocationOrder.verify(a).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(b).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(a, 2*GB, clusterResource);
     verifyQueueMetrics(b, 2*GB, clusterResource);
     
@@ -621,12 +635,13 @@ public class TestParentQueue {
     // However, since B returns off-switch, A won't get an opportunity
     stubQueueAllocation(a, clusterResource, node_0, 1*GB, NodeType.NODE_LOCAL);
     stubQueueAllocation(b, clusterResource, node_0, 2*GB, NodeType.OFF_SWITCH);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, 
+        new ResourceLimits(clusterResource));
     allocationOrder = inOrder(b, a);
     allocationOrder.verify(b).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(a).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(a, 2*GB, clusterResource);
     verifyQueueMetrics(b, 4*GB, clusterResource);
 
@@ -665,7 +680,8 @@ public class TestParentQueue {
     // Simulate B3 returning a container on node_0
     stubQueueAllocation(b2, clusterResource, node_0, 0*GB, NodeType.OFF_SWITCH);
     stubQueueAllocation(b3, clusterResource, node_0, 1*GB, NodeType.OFF_SWITCH);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, 
+        new ResourceLimits(clusterResource));
     verifyQueueMetrics(b2, 0*GB, clusterResource);
     verifyQueueMetrics(b3, 1*GB, clusterResource);
     
@@ -673,12 +689,13 @@ public class TestParentQueue {
     // also, B3 gets a scheduling opportunity since B2 allocates RACK_LOCAL
     stubQueueAllocation(b2, clusterResource, node_1, 1*GB, NodeType.RACK_LOCAL);
     stubQueueAllocation(b3, clusterResource, node_1, 1*GB, NodeType.OFF_SWITCH);
-    root.assignContainers(clusterResource, node_1, false);
+    root.assignContainers(clusterResource, node_1, false, 
+        new ResourceLimits(clusterResource));
     InOrder allocationOrder = inOrder(b2, b3);
     allocationOrder.verify(b2).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(b3).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(b2, 1*GB, clusterResource);
     verifyQueueMetrics(b3, 2*GB, clusterResource);
     
@@ -687,12 +704,13 @@ public class TestParentQueue {
     // However, since B3 returns off-switch, B2 won't get an opportunity
     stubQueueAllocation(b2, clusterResource, node_0, 1*GB, NodeType.NODE_LOCAL);
     stubQueueAllocation(b3, clusterResource, node_0, 1*GB, NodeType.OFF_SWITCH);
-    root.assignContainers(clusterResource, node_0, false);
+    root.assignContainers(clusterResource, node_0, false, 
+        new ResourceLimits(clusterResource));
     allocationOrder = inOrder(b3, b2);
     allocationOrder.verify(b3).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     allocationOrder.verify(b2).assignContainers(eq(clusterResource), 
-        any(FiCaSchedulerNode.class), anyBoolean());
+        any(FiCaSchedulerNode.class), anyBoolean(), anyResourceLimits());
     verifyQueueMetrics(b2, 1*GB, clusterResource);
     verifyQueueMetrics(b3, 3*GB, clusterResource);
 
@@ -774,4 +792,8 @@ public class TestParentQueue {
   @After
   public void tearDown() throws Exception {
   }
+  
+  private ResourceLimits anyResourceLimits() {
+    return any(ResourceLimits.class);
+  }
 }

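The updated stubQueueAllocation keeps TestParentQueue's stub-once trick:
the Answer re-stubs the same assignContainers call with an empty
CSAssignment before handing back the real allocation, so a queue
"allocates" exactly once per stubbing. A minimal self-contained sketch of
that Mockito pattern follows; the Queue interface here is a hypothetical
stand-in for CSQueue, not anything from the patch.

    import static org.mockito.Mockito.*;

    import org.mockito.invocation.InvocationOnMock;
    import org.mockito.stubbing.Answer;

    public class StubOnceSketch {
      // Hypothetical stand-in for CSQueue, just enough for the sketch.
      interface Queue {
        String assign(String node);
      }

      public static void main(String[] args) {
        final Queue queue = mock(Queue.class);
        // First call returns an allocation; before returning, the Answer
        // re-stubs the mock so every later call yields "none" -- the same
        // move as the CSAssignment(Resources.none(), type) re-stub above.
        doAnswer(new Answer<String>() {
          @Override
          public String answer(InvocationOnMock invocation) {
            doReturn("none").when(queue).assign(anyString());
            return "4GB";
          }
        }).when(queue).assign(anyString());

        System.out.println(queue.assign("node_0"));  // 4GB
        System.out.println(queue.assign("node_0"));  // none
      }
    }

The new anyResourceLimits() at the bottom of the file is the complementary
readability move on the verification side: a named one-line wrapper over
any(ResourceLimits.class) that keeps the long verify(...) lines short.
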
http://git-wip-us.apache.org/repos/asf/hadoop/blob/14dd647c/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
index 985609e..4c6b25f 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestReservations.java
@@ -57,6 +57,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerEventType;
 import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainerImpl;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ActiveUsersManager;
+import org.apache.hadoop.yarn.server.resourcemanager.scheduler.ResourceLimits;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp;
 import org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode;
 import org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager;
@@ -262,7 +263,8 @@ public class TestReservations {
 
     // Start testing...
     // Only AM
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2 * GB, a.getUsedResources().getMemory());
     assertEquals(2 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -273,7 +275,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_2.getUsedResource().getMemory());
 
     // Only 1 map - simulating reduce
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5 * GB, a.getUsedResources().getMemory());
     assertEquals(5 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -284,7 +287,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_2.getUsedResource().getMemory());
 
     // Only 1 map to other node - simulating reduce
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(8 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -298,7 +302,8 @@ public class TestReservations {
     assertEquals(2, app_0.getTotalRequiredResources(priorityReduce));
 
     // try to assign reducer (5G on node 0 and should reserve)
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(13 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(5 * GB, a.getMetrics().getReservedMB());
@@ -313,7 +318,8 @@ public class TestReservations {
     assertEquals(2, app_0.getTotalRequiredResources(priorityReduce));
 
     // assign reducer to node 2
-    a.assignContainers(clusterResource, node_2, false);
+    a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     assertEquals(18 * GB, a.getUsedResources().getMemory());
     assertEquals(13 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(5 * GB, a.getMetrics().getReservedMB());
@@ -329,7 +335,8 @@ public class TestReservations {
 
     // node_1 heartbeat and unreserves from node_0 in order to allocate
     // on node_1
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(18 * GB, a.getUsedResources().getMemory());
     assertEquals(18 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -411,7 +418,8 @@ public class TestReservations {
 
     // Start testing...
     // Only AM
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2 * GB, a.getUsedResources().getMemory());
     assertEquals(2 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -422,7 +430,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_2.getUsedResource().getMemory());
 
     // Only 1 map - simulating reduce
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5 * GB, a.getUsedResources().getMemory());
     assertEquals(5 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -433,7 +442,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_2.getUsedResource().getMemory());
 
     // Only 1 map to other node - simulating reduce
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(8 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -447,7 +457,8 @@ public class TestReservations {
     assertEquals(2, app_0.getTotalRequiredResources(priorityReduce));
 
     // try to assign reducer (5G on node 0 and should reserve)
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(13 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(5 * GB, a.getMetrics().getReservedMB());
@@ -462,7 +473,8 @@ public class TestReservations {
     assertEquals(2, app_0.getTotalRequiredResources(priorityReduce));
 
     // assign reducer to node 2
-    a.assignContainers(clusterResource, node_2, false);
+    a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     assertEquals(18 * GB, a.getUsedResources().getMemory());
     assertEquals(13 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(5 * GB, a.getMetrics().getReservedMB());
@@ -478,7 +490,8 @@ public class TestReservations {
 
     // node_1 heartbeat and won't unreserve from node_0, potentially stuck
     // if AM doesn't handle
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(18 * GB, a.getUsedResources().getMemory());
     assertEquals(13 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(5 * GB, a.getMetrics().getReservedMB());
@@ -552,7 +565,8 @@ public class TestReservations {
 
     // Start testing...
     // Only AM
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2 * GB, a.getUsedResources().getMemory());
     assertEquals(2 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -562,7 +576,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_1.getUsedResource().getMemory());
 
     // Only 1 map - simulating reduce
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5 * GB, a.getUsedResources().getMemory());
     assertEquals(5 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -572,7 +587,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_1.getUsedResource().getMemory());
 
     // Only 1 map to other node - simulating reduce
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(8 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -585,7 +601,8 @@ public class TestReservations {
     assertEquals(2, app_0.getTotalRequiredResources(priorityReduce));
 
     // try to assign reducer (5G on node 0 and should reserve)
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(13 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(5 * GB, a.getMetrics().getReservedMB());
@@ -599,7 +616,8 @@ public class TestReservations {
     assertEquals(2, app_0.getTotalRequiredResources(priorityReduce));
 
     // could allocate but told need to unreserve first
-    a.assignContainers(clusterResource, node_1, true);
+    a.assignContainers(clusterResource, node_1, true,
+        new ResourceLimits(clusterResource));
     assertEquals(13 * GB, a.getUsedResources().getMemory());
     assertEquals(13 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -792,7 +810,8 @@ public class TestReservations {
 
     // Start testing...
     // Only AM
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2 * GB, a.getUsedResources().getMemory());
     assertEquals(2 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -802,7 +821,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_1.getUsedResource().getMemory());
 
     // Only 1 map - simulating reduce
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5 * GB, a.getUsedResources().getMemory());
     assertEquals(5 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -812,7 +832,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_1.getUsedResource().getMemory());
 
     // Only 1 map to other node - simulating reduce
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(8 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -833,7 +854,8 @@ public class TestReservations {
     // now add in reservations and make sure it continues if config set
     // allocate to queue so that the potential new capacity is greater than
     // absoluteMaxCapacity
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(13 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(5 * GB, a.getMetrics().getReservedMB());
@@ -966,7 +988,8 @@ public class TestReservations {
 
     // Start testing...
     // Only AM
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2 * GB, a.getUsedResources().getMemory());
     assertEquals(2 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -976,7 +999,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_1.getUsedResource().getMemory());
 
     // Only 1 map - simulating reduce
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5 * GB, a.getUsedResources().getMemory());
     assertEquals(5 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -986,7 +1010,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_1.getUsedResource().getMemory());
 
     // Only 1 map to other node - simulating reduce
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(8 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -999,7 +1024,8 @@ public class TestReservations {
     // now add in reservations and make sure it continues if config set
     // allocate to queue so that the potential new capacity is greater than
     // absoluteMaxCapacity
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(13 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(5 * GB, app_0.getCurrentReservation().getMemory());
@@ -1096,7 +1122,8 @@ public class TestReservations {
 
     // Start testing...
     // Only AM
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(2 * GB, a.getUsedResources().getMemory());
     assertEquals(2 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -1107,7 +1134,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_2.getUsedResource().getMemory());
 
     // Only 1 map - simulating reduce
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(5 * GB, a.getUsedResources().getMemory());
     assertEquals(5 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -1118,7 +1146,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_2.getUsedResource().getMemory());
 
     // Only 1 map to other node - simulating reduce
-    a.assignContainers(clusterResource, node_1, false);
+    a.assignContainers(clusterResource, node_1, false,
+        new ResourceLimits(clusterResource));
     assertEquals(8 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -1132,7 +1161,8 @@ public class TestReservations {
     // try to assign reducer (5G on node 0), but tell it
     // it has to unreserve. No room to allocate and shouldn't reserve
     // since nothing currently reserved.
-    a.assignContainers(clusterResource, node_0, true);
+    a.assignContainers(clusterResource, node_0, true,
+        new ResourceLimits(clusterResource));
     assertEquals(8 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -1146,7 +1176,8 @@ public class TestReservations {
     // try to assign reducer (5G on node 2), but tell it
     // it has to unreserve. Has room but shouldn't reserve
     // since nothing currently reserved.
-    a.assignContainers(clusterResource, node_2, true);
+    a.assignContainers(clusterResource, node_2, true,
+        new ResourceLimits(clusterResource));
     assertEquals(8 * GB, a.getUsedResources().getMemory());
     assertEquals(8 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -1158,7 +1189,8 @@ public class TestReservations {
     assertEquals(0 * GB, node_2.getUsedResource().getMemory());
 
     // let it assign 5G to node_2
-    a.assignContainers(clusterResource, node_2, false);
+    a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     assertEquals(13 * GB, a.getUsedResources().getMemory());
     assertEquals(13 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(0 * GB, a.getMetrics().getReservedMB());
@@ -1170,7 +1202,8 @@ public class TestReservations {
     assertEquals(5 * GB, node_2.getUsedResource().getMemory());
 
     // reserve 8G node_0
-    a.assignContainers(clusterResource, node_0, false);
+    a.assignContainers(clusterResource, node_0, false,
+        new ResourceLimits(clusterResource));
     assertEquals(21 * GB, a.getUsedResources().getMemory());
     assertEquals(13 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(8 * GB, a.getMetrics().getReservedMB());
@@ -1184,7 +1217,8 @@ public class TestReservations {
     // try to assign (8G on node 2). No room to allocate,
     // continued to try due to having reservation above,
     // but hits queue limits so can't reserve anymore.
-    a.assignContainers(clusterResource, node_2, false);
+    a.assignContainers(clusterResource, node_2, false,
+        new ResourceLimits(clusterResource));
     assertEquals(21 * GB, a.getUsedResources().getMemory());
     assertEquals(13 * GB, app_0.getCurrentConsumption().getMemory());
     assertEquals(8 * GB, a.getMetrics().getReservedMB());

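Throughout TestReservations the new argument is always
new ResourceLimits(clusterResource): the queue is allowed the entire
cluster, so the reserve/unreserve assertions above are unchanged by the
extra parameter. The call sites imply a very small class; the sketch below
is an assumed shape inferred from that usage (the real class body is not
part of this diff), namely a mutable holder for the headroom a parent
queue grants a child.

    import org.apache.hadoop.yarn.api.records.Resource;

    // Assumed shape of ResourceLimits, inferred from its use in these
    // tests; not copied from the patch.
    public class ResourceLimitsSketch {
      private Resource limit;

      public ResourceLimitsSketch(Resource limit) {
        this.limit = limit;   // tests pass the full cluster resource here
      }

      public Resource getLimit() {
        return limit;
      }

      public void setLimit(Resource limit) {
        this.limit = limit;   // lets a parent queue tighten the cap later
      }
    }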

[14/43] hadoop git commit: move HADOOP-10976 to 2.7

Posted by zj...@apache.org.
move HADOOP-10976 to 2.7


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/915bec3e
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/915bec3e
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/915bec3e

Branch: refs/heads/YARN-2928
Commit: 915bec3e84f4da913dd7b7ad0f389eb69fc064c6
Parents: 8472d72
Author: Akira Ajisaka <aa...@apache.org>
Authored: Sat Feb 28 17:15:13 2015 -0800
Committer: Akira Ajisaka <aa...@apache.org>
Committed: Sat Feb 28 17:15:13 2015 -0800

----------------------------------------------------------------------
 hadoop-common-project/hadoop-common/CHANGES.txt | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/915bec3e/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index 6d4da77..74bf558 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -163,9 +163,6 @@ Trunk (Unreleased)
     HADOOP-11346. Rewrite sls/rumen to use new shell framework (John Smith
     via aw)
 
-    HADOOP-10976. moving the source code of hadoop-tools docs to the
-    directory under hadoop-tools (Masatake Iwasaki via aw)
-
     HADOOP-7713. dfs -count -q should label output column (Jonathan Allen
     via aw)
 
@@ -636,6 +633,9 @@ Release 2.7.0 - UNRELEASED
     HADOOP-11632. Cleanup Find.java to remove SupressWarnings annotations.
     (Akira Ajisaka via ozawa)
 
+    HADOOP-10976. moving the source code of hadoop-tools docs to the
+    directory under hadoop-tools (Masatake Iwasaki via aw)
+
   OPTIMIZATIONS
 
     HADOOP-11323. WritableComparator#compare keeps reference to byte array.