Posted to mapreduce-commits@hadoop.apache.org by ma...@apache.org on 2011/11/14 23:02:02 UTC

svn commit: r1201927 - in /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project: CHANGES.txt hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm

Author: mahadev
Date: Mon Nov 14 22:02:02 2011
New Revision: 1201927

URL: http://svn.apache.org/viewvc?rev=1201927&view=rev
Log:
MAPREDUCE-3325. Improvements to CapacityScheduler doc. (Thomas Graves via mahadev) - Merging r1201925 from trunk

Modified:
    hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
    hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm

Modified: hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt?rev=1201927&r1=1201926&r2=1201927&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt (original)
+++ hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt Mon Nov 14 22:02:02 2011
@@ -24,6 +24,9 @@ Release 0.23.1 - Unreleased
     MAPREDUCE-3370. Fixed MiniMRYarnCluster and related tests to not use
     a hard-coded path for the mr-app jar. (Ahmed Radwan via vinodkv)
 
+    MAPREDUCE-3325. Improvements to CapacityScheduler doc. (Thomas Graves 
+    via mahadev)
+
   OPTIMIZATIONS
 
   BUG FIXES

Modified: hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm?rev=1201927&r1=1201926&r2=1201927&view=diff
==============================================================================
--- hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm (original)
+++ hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/CapacityScheduler.apt.vm Mon Nov 14 22:02:02 2011
@@ -50,7 +50,7 @@ Hadoop MapReduce Next Generation - Capac
   that the available resources in the Hadoop cluster are shared among multiple 
   organizations who collectively fund the cluster based on their computing 
   needs. There is an added benefit that an organization can access 
-  any excess capacity no being used by others. This provides elasticity for 
+  any excess capacity not being used by others. This provides elasticity for 
   the organizations in a cost-effective manner.
    
   Sharing clusters across organizations necessitates strong support for
@@ -58,7 +58,7 @@ Hadoop MapReduce Next Generation - Capac
   safe-guards to ensure the shared cluster is impervious to single rouge 
   application or user or sets thereof. The <<<CapacityScheduler>>> provides a 
   stringent set of limits to ensure that a single application or user or queue 
-  cannot consume dispropotionate amount of resources in the cluster. Also, the 
+  cannot consume a disproportionate amount of resources in the cluster. Also, the 
   <<<CapacityScheduler>>> provides limits on initialized/pending applications 
   from a single user and queue to ensure fairness and stability of the cluster.
    
@@ -67,7 +67,7 @@ Hadoop MapReduce Next Generation - Capac
   economics of the shared cluster. 
   
   To provide further control and predictability on sharing of resources, the 
-  <<<CapacityScheduler>>> supports <heirarchical queues> to ensure 
+  <<<CapacityScheduler>>> supports <hierarchical queues> to ensure 
   resources are shared among the sub-queues of an organization before other 
   queues are allowed to use free resources, there-by providing <affinity> 
   for sharing free resources among applications of a given organization.
@@ -76,7 +76,7 @@ Hadoop MapReduce Next Generation - Capac
 
   The <<<CapacityScheduler>>> supports the following features:
   
-  * Heirarchical Queues - Heirarchy of queues is supported to ensure resources 
+  * Hierarchical Queues - Hierarchy of queues is supported to ensure resources 
     are shared among the sub-queues of an organization before other 
     queues are allowed to use free resources, there-by providing more control
     and predictability.
@@ -96,12 +96,12 @@ Hadoop MapReduce Next Generation - Capac
     capacity. When there is demand for these resources from queues running below 
     capacity at a future point in time, as tasks scheduled on these resources 
     complete, they will be assigned to applications on queues running below the
-    capacity. This ensures that resources are available in a predictable and 
-    elastic manner to queues, thus preventing artifical silos of resources in 
-    the cluster which helps utilization.
+    capacity (pre-emption is not supported). This ensures that resources are available 
+    in a predictable and elastic manner to queues, thus preventing artificial silos 
+    of resources in the cluster which helps utilization.
     
   * Multi-tenancy - Comprehensive set of limits are provided to prevent a 
-    single application, user and queue from monpolizing resources of the queue 
+    single application, user and queue from monopolizing resources of the queue 
     or the cluster as a whole to ensure that the cluster isn't overwhelmed.
     
   * Operability
@@ -110,8 +110,8 @@ Hadoop MapReduce Next Generation - Capac
       capacity, ACLs can be changed, at runtime, by administrators in a secure 
       manner to minimize disruption to users. Also, a console is provided for 
       users and administrators to view current allocation of resources to 
-      various queues in the system. Administrators can also 
-      <add additional queues> at runtime.
+      various queues in the system. Administrators can <add additional queues> 
+      at runtime, but queues cannot be <deleted> at runtime.
       
     * Drain applications - Administrators can <stop> queues
       at runtime to ensure that while existing applications run to completion,
@@ -139,7 +139,7 @@ Hadoop MapReduce Next Generation - Capac
 || Property                            || Value                                |
 *--------------------------------------+--------------------------------------+
 | <<<yarn.resourcemanager.scheduler.class>>> | |
-| | <<<org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.CapacityScheduler>>> |
+| | <<<org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler>>> |
 *--------------------------------------+--------------------------------------+
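For reference, a minimal sketch of how the corrected scheduler class could be wired into yarn-site.xml (property name and value taken from the table above; the rest of the site file is omitted):

----
<!-- yarn-site.xml (sketch): select the CapacityScheduler; other properties omitted -->
<configuration>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
</configuration>
----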
 
   * Setting up <queues>
@@ -155,13 +155,14 @@ Hadoop MapReduce Next Generation - Capac
     child queues.
     
     The configuration for <<<CapacityScheduler>>> uses a concept called
-    <queue path> to configure the heirarchy of queues. The <queue path> is the
-    full path of the queue's heirarcy, starting at <root>, with . (dot) as the 
+    <queue path> to configure the hierarchy of queues. The <queue path> is the
+    full path of the queue's hierarchy, starting at <root>, with . (dot) as the 
     delimiter.
     
     A given queue's children can be defined with the configuration knob:
-    <<<yarn.scheduler.capacity.<queue-path>.queues>>>
-     
+    <<<yarn.scheduler.capacity.<queue-path>.queues>>>. Children do not 
+    inherit properties directly from the parent.
+
     Here is an example with three top-level child-queues <<<a>>>, <<<b>>> and 
     <<<c>>> and some sub-queues for <<<a>>> and <<<b>>>:
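The example configuration itself lies beyond this hunk; as a hedged stand-in, such a hierarchy could be expressed in capacity-scheduler.xml roughly as follows (the sub-queue names a1, a2, b1, b2 and b3 are illustrative, defined under the pre-defined root queue):

----
<!-- capacity-scheduler.xml (sketch): queue hierarchy; sub-queue names are illustrative -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>a,b,c</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.queues</name>
  <value>a1,a2</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.b.queues</name>
  <value>b1,b2,b3</value>
</property>
----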
      
@@ -197,52 +198,59 @@ Hadoop MapReduce Next Generation - Capac
 *--------------------------------------+--------------------------------------+
 | <<<yarn.scheduler.capacity.<queue-path>.capacity>>> | |
 | | Queue <capacity> in percentage (%). | 
-| | The sum of capacities for all queues, at each level, should be less than | 
-| | or equal to 100. | 
+| | The sum of capacities for all queues, at each level, must be equal |
+| | to 100. | 
 | | Applications in the queue may consume more resources than the queue's | 
 | | capacity if there are free resources, providing elasticity. |
 *--------------------------------------+--------------------------------------+
 | <<<yarn.scheduler.capacity.<queue-path>.maximum-capacity>>> |   | 
 | | Maximum queue capacity in percentage (%). |
 | | This limits the <elasticity> for applications in the queue. |
+| | Defaults to -1 which disables it. |
 *--------------------------------------+--------------------------------------+
 | <<<yarn.scheduler.capacity.<queue-path>.minimum-user-limit-percent>>> |   | 
 | | Each queue enforces a limit on the percentage of resources allocated to a | 
 | | user at any given time, if there is demand for resources. The user limit | 
-| | can vary between a minimum and maximum value. The former depends on the | 
-| | number of users who have submitted applications, and the latter is set to | 
-| | this property value. For e.g., suppose the value of this property is 25. | 
+| | can vary between a minimum and maximum value. The former |
+| | (the minimum value) is set to this property value and the latter |
+| | (the maximum value) depends on the number of users who have submitted |
+| | applications. For e.g., suppose the value of this property is 25. | 
 | | If two users have submitted applications to a queue, no single user can |
 | | use more than 50% of the queue resources. If a third user submits an | 
 | | application, no single user can use more than 33% of the queue resources. |
 | | With 4 or more users, no user can use more than 25% of the queues |
-| | resources. A value of 100 implies no user limits are imposed. |
+| | resources. A value of 100 implies no user limits are imposed. The default |
+| | is 100.|
 *--------------------------------------+--------------------------------------+
 | <<<yarn.scheduler.capacity.<queue-path>.user-limit-factor>>> |   | 
 | | The multiple of the queue capacity which can be configured to allow a | 
 | | single user to acquire more resources. By default this is set to 1 which | 
 | | ensures that a single user can never take more than the queue's configured | 
-| | capacity irrespective of how idle th cluster is. |
+| | capacity irrespective of how idle the cluster is. Value is specified as |
+| | a float.|
 *--------------------------------------+--------------------------------------+
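As a hedged illustration of the per-queue knobs documented above, a hypothetical queue root.a might be configured along these lines in capacity-scheduler.xml (all values are arbitrary examples):

----
<!-- capacity-scheduler.xml (sketch): resource allocation for a hypothetical queue root.a -->
<property>
  <name>yarn.scheduler.capacity.root.a.capacity</name>
  <!-- percentage of the parent's capacity; capacities of sibling queues must sum to 100 -->
  <value>30</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.maximum-capacity</name>
  <!-- caps elasticity at 50%; the default of -1 disables the cap -->
  <value>50</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.minimum-user-limit-percent</name>
  <!-- per the text above: 2 active users are limited to 50% each, 4 or more to 25% each -->
  <value>25</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.user-limit-factor</name>
  <!-- a float; 1 means a single user can never exceed the queue's configured capacity -->
  <value>1</value>
</property>
----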
 
     * Running and Pending Application Limits
     
     
     The <<<CapacityScheduler>>> supports the following parameters to control 
-    the runnign and pending applications:
+    the running and pending applications:
     
 
 *--------------------------------------+--------------------------------------+
 || Property                            || Description                         |
 *--------------------------------------+--------------------------------------+
 | <<<yarn.scheduler.capacity.maximum-applications>>> | |
-| | Maximum number of jobs in the system which can be concurently active |
-| | both running and pending. Limits on each queue are directly proportional |
-| | to their queue capacities. |
+| | Maximum number of applications in the system which can be concurrently |
+| | active, both running and pending. Limits on each queue are directly |
+| | proportional to their queue capacities and user limits. This is a |
+| | hard limit and any applications submitted when this limit is reached will |
+| | be rejected. Default is 10000. |
 *--------------------------------------+--------------------------------------+
 | yarn.scheduler.capacity.maximum-am-resource-percent | |
 | | Maximum percent of resources in the cluster which can be used to run |
 | | application masters - controls number of concurrent running applications. |
+| | Specified as a float - i.e. 0.5 = 50%. Default is 10%. |
 *--------------------------------------+--------------------------------------+
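A similar sketch for the cluster-level limits above, using the defaults stated in the table:

----
<!-- capacity-scheduler.xml (sketch): cluster-wide application limits, shown with documented defaults -->
<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <!-- hard limit on concurrently active (running plus pending) applications -->
  <value>10000</value>
</property>
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <!-- a float: 0.1 means 10% of cluster resources may run ApplicationMasters -->
  <value>0.1</value>
</property>
----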
 
     * Queue Administration & Permissions
@@ -257,7 +265,7 @@ Hadoop MapReduce Next Generation - Capac
 | <<<yarn.scheduler.capacity.<queue-path>.state>>> | |
 | | The <state> of the queue. Can be one of <<<RUNNING>>> or <<<STOPPED>>>. |
 | | If a queue is in <<<STOPPED>>> state, new applications cannot be |
-| | submitted to <itself> or <any of its child queueus>. | 
+| | submitted to <itself> or <any of its child queues>. | 
 | | Thus, if the <root> queue is <<<STOPPED>>> no applications can be | 
 | | submitted to the entire cluster. |
 | | Existing applications continue to completion, thus the queue can be 
@@ -276,7 +284,7 @@ Hadoop MapReduce Next Generation - Capac
     
     <Note:> An <ACL> is of the form <user1>, <user2><space><group1>, <group2>.
     The special value of <<*>> implies <anyone>. The special value of <space>
-    implies <no one>.
+    implies <no one>. The default is <<*>> if not specified.
      
     * Reviewing the configuration of the CapacityScheduler
 
@@ -295,14 +303,14 @@ Hadoop MapReduce Next Generation - Capac
 * {Changing Queue Configuration}
 
   Changing queue properties and adding new queues is very simple. You need to
-  edit <<conf/capacity-scheduler.xml>> and run <rmadmin -refreshQueues>.
+  edit <<conf/capacity-scheduler.xml>> and run <yarn rmadmin -refreshQueues>.
   
 ----
 $ vi $HADOOP_CONF_DIR/capacity-scheduler.xml
-$ $YARN_HOME/bin/rmadmin -refreshQueues
+$ $YARN_HOME/bin/yarn rmadmin -refreshQueues
 ----  
 
   <Note:> Queues cannot be <deleted>, only addition of new queues is supported -
   the updated queue configuration should be a valid one i.e. queue-capacity at
   each <level> should be equal to 100%.
-  
\ No newline at end of file
+