Posted to common-commits@hadoop.apache.org by wa...@apache.org on 2018/02/27 21:19:34 UTC

[1/2] hadoop git commit: YARN-7893. Document the FPGA isolation feature. (Zhankun Tang via wangda)

Repository: hadoop
Updated Branches:
  refs/heads/trunk ac42dfc88 -> eea0b069e


YARN-7893. Document the FPGA isolation feature. (Zhankun Tang via wangda)

Change-Id: I8723320377a4bd0a7db168670d183ef5543caa67
(cherry picked from commit c2af2f16718bd1d38ceb05e354f34d7bef89127d)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/5bbbbb6a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/5bbbbb6a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/5bbbbb6a

Branch: refs/heads/trunk
Commit: 5bbbbb6a1e92ca05489e179b38a9a04fe8429439
Parents: ac42dfc
Author: Wangda Tan <wa...@apache.org>
Authored: Tue Feb 27 11:14:40 2018 -0800
Committer: Wangda Tan <wa...@apache.org>
Committed: Tue Feb 27 13:19:16 2018 -0800

----------------------------------------------------------------------
 .../src/site/markdown/UsingFPGA.md              | 176 +++++++++++++++++++
 1 file changed, 176 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/5bbbbb6a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/UsingFPGA.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/UsingFPGA.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/UsingFPGA.md
new file mode 100644
index 0000000..d98bb15
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/UsingFPGA.md
@@ -0,0 +1,176 @@
+
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+
+# Using FPGA On YARN
+# Prerequisites
+
+- YARN supports the FPGA resource, but only one vendor plugin, "IntelFpgaOpenclPlugin", ships with it for now
+- YARN NodeManagers must have the vendor drivers pre-installed and the required environment variables configured
+- Running FPGA applications inside Docker containers is not supported yet
+
+# Configs
+
+## FPGA scheduling
+
+In `resource-types.xml`
+
+Add the following property:
+
+```
+<configuration>
+  <property>
+     <name>yarn.resource-types</name>
+     <value>yarn.io/fpga</value>
+  </property>
+</configuration>
+```
+
+For the `Capacity Scheduler`, `DominantResourceCalculator` MUST be configured to enable FPGA scheduling/isolation.
+Use the following property to configure `DominantResourceCalculator` (in `capacity-scheduler.xml`):
+
+| Property | Default value |
+| --- | --- |
+| yarn.scheduler.capacity.resource-calculator | org.apache.hadoop.yarn.util.resource.DominantResourceCalculator |
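+
+A minimal `capacity-scheduler.xml` sketch of this setting (shown only to illustrate the table entry above):
+
+```
+<configuration>
+  <property>
+     <name>yarn.scheduler.capacity.resource-calculator</name>
+     <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
+  </property>
+</configuration>
+```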
+
+
+## FPGA Isolation
+
+### In `yarn-site.xml`
+
+```
+  <property>
+    <name>yarn.nodemanager.resource-plugins</name>
+    <value>yarn.io/fpga</value>
+  </property>
+```
+
+This enables the FPGA isolation module on the NodeManager side.
+
+By default, YARN automatically detects and configures FPGAs once the above config is set. The following configs need to be set in `yarn-site.xml` only if the admin has specialized requirements.
+
+**1) Allowed FPGA Devices**
+
+| Property | Default value |
+| --- | --- |
+| yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices | auto |
+
+  Specifies the FPGA devices that can be managed by the YARN NodeManager, separated by commas.
+  The number of FPGA devices will be reported to the ResourceManager for scheduling decisions.
+  Set it to auto (the default) to let YARN automatically discover FPGA resources from the system.
+
+  Manually specify FPGA devices if the admin wants only a subset of the FPGA devices to be managed by YARN.
+  At present, since only one major number can be configured in container-executor.cfg, FPGA devices are
+  identified by their minor device numbers. For Intel devices, a common approach to get the minor device
+  number of an FPGA is to run "aocl diagnose" and check the uevent with the device name.
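+
+  For example, to let YARN manage only two FPGA devices whose minor numbers are (hypothetically) 0 and 1,
+  the following could be added to `yarn-site.xml`:
+
+```
+  <property>
+    <name>yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices</name>
+    <value>0,1</value>
+  </property>
+```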
+
+
+**2) Executable to discover FPGAs**
+
+| Property | Default value |
+| --- | --- |
+| yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables | |
+
+  When yarn.nodemanager.resource-plugins.fpga.allowed-fpga-devices=auto is specified, the YARN NodeManager needs to run an FPGA discovery binary (currently only supported by IntelFpgaOpenclPlugin) to get FPGA information.
+  When the value is empty (the default), the YARN NodeManager will try to locate the discovery executable based on the vendor plugin's preference. For instance, "IntelFpgaOpenclPlugin" will try to find "aocl" in the directory given by the environment variable "ALTERAOCLSDKROOT".
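+
+  If the automatic lookup is not desired, the discovery executable can be pointed to explicitly. The path
+  below is only a placeholder and must match wherever the Intel SDK's "aocl" binary is installed on the nodes:
+
+```
+  <property>
+    <name>yarn.nodemanager.resource-plugins.fpga.path-to-discovery-executables</name>
+    <value>/opt/intelFPGA_pro/17.0/hld/bin/aocl</value>
+  </property>
+```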
+
+**3) FPGA plugin to use**
+
+| Property | Default value |
+| --- | --- |
+| yarn.nodemanager.resource-plugins.fpga.vendor-plugin.class | org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin |
+
+  For now, only the Intel OpenCL SDK for FPGA is supported. The IP program (.aocx file) running on the FPGA should be written in OpenCL for the Intel platform.
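+
+  If the plugin class ever needs to be set explicitly, it can be configured in `yarn-site.xml` with the
+  default value from the table above:
+
+```
+  <property>
+    <name>yarn.nodemanager.resource-plugins.fpga.vendor-plugin.class</name>
+    <value>org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.fpga.IntelFpgaOpenclPlugin</value>
+  </property>
+```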
+
+**4) CGroups mount**
+
+FPGA isolation uses the CGroups [devices controller](https://www.kernel.org/doc/Documentation/cgroup-v1/devices.txt) to do per-FPGA device isolation. The following config should be added to `yarn-site.xml` to automatically mount the CGroups devices subsystem; otherwise, the admin has to manually create the devices subfolder in order to use this feature.
+
+| Property | Default value |
+| --- | --- |
+| yarn.nodemanager.linux-container-executor.cgroups.mount | true |
+
+For more details on YARN CGroups configuration, please refer to [Using CGroups with YARN](https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html).
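+
+A hedged `yarn-site.xml` sketch of the CGroups-related entries (the mount path is only an example and should
+match your environment and the CGroups root used in `container-executor.cfg` below):
+
+```
+  <property>
+    <name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
+    <value>true</value>
+  </property>
+  <property>
+    <name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
+    <value>/cgroup</value>
+  </property>
+```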
+
+### In `container-executor.cfg`
+
+In general, the following config needs to be added to `container-executor.cfg`. The fpga.major-device-number and fpga.allowed-device-minor-numbers entries are optional and restrict which devices are allowed.
+
+```
+[fpga]
+module.enabled=true
+fpga.major-device-number=## Major device number of the FPGA, 246 by default. Setting this explicitly is strongly recommended
+fpga.allowed-device-minor-numbers=## Comma-separated allowed minor device numbers; leave empty to let YARN manage all FPGA devices.
+```
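+
+For instance, on a node where the FPGA devices use major number 246 and minor numbers 0 and 1 (values shown
+purely as an illustration), the section might look like:
+
+```
+[fpga]
+module.enabled=true
+fpga.major-device-number=246
+fpga.allowed-device-minor-numbers=0,1
+```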
+
+When users need to run FPGA applications in a non-Docker environment, the following is also required:
+
+```
+[cgroups]
+# Root of system cgroups (Cannot be empty or "/")
+root=/cgroup
+# Parent folder of YARN's CGroups
+yarn-hierarchy=yarn
+```
+
+
+# Use it
+
+## Distributed-shell + FPGA
+
+The distributed shell currently supports specifying additional resource types other than memory and vcores.
+
+Run the distributed shell without using a Docker container (the .bashrc contains some SDK-related environment variables):
+
+```
+yarn jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
+  -jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
+  -shell_command "source /home/yarn/.bashrc && aocl diagnose" \
+  -container_resources memory-mb=2048,vcores=2,yarn.io/fpga=1 \
+  -num_containers 1
+```
+
+You should be able to see output like the following:
+
+```
+aocl diagnose: Running diagnose from /home/fpga/intelFPGA_pro/17.0/hld/board/nalla_pcie/linux64/libexec
+
+------------------------- acl0 -------------------------
+Vendor: Nallatech ltd
+
+Phys Dev Name  Status   Information
+
+aclnalla_pcie0Passed   nalla_pcie (aclnalla_pcie0)
+                       PCIe dev_id = 2494, bus:slot.func = 02:00.00, Gen3 x8
+                       FPGA temperature = 54.4 degrees C.
+                       Total Card Power Usage = 32.4 Watts.
+                       Device Power Usage = 0.0 Watts.
+
+DIAGNOSTIC_PASSED
+---------------------------------------------------------
+
+```
+
+**Specify an IP that YARN should configure before launching the container**
+
+  For the FPGA resource, the container can set the environment variable "REQUESTED_FPGA_IP_ID" to make YARN download and flash an IP for it before launch.
+  For instance, REQUESTED_FPGA_IP_ID="matrix_mul" triggers a search of the container's local directory for an IP file (an ".aocx" file) whose name contains "matrix_mul" (the application should distribute it first).
+  Only flashing one IP for all devices is supported for now. If the user does not set this environment variable, we assume that the user's application can find the IP file by itself.
+  Note that downloading and reprogramming the IP in advance in YARN is not strictly necessary, because
+  the OpenCL application may find the IP file and reprogram the device on the fly. Having YARN do this
+  for the containers, however, achieves the quickest re-programming path.
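+
+  As a rough sketch, assuming the application has already distributed a "matrix_mul" .aocx file and using the
+  distributed shell's -shell_env option to pass the variable (the host program name is only a placeholder):
+
+```
+yarn jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
+  -jar <path/to/hadoop-yarn-applications-distributedshell.jar> \
+  -shell_env REQUESTED_FPGA_IP_ID=matrix_mul \
+  -shell_command "source /home/yarn/.bashrc && <your OpenCL host program>" \
+  -container_resources memory-mb=2048,vcores=2,yarn.io/fpga=1 \
+  -num_containers 1
+```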
+
+
+


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[2/2] hadoop git commit: YARN-7959. Add .vm extension to PlacementConstraints.md to ensure proper filtering. (Weiwei Yang via wangda)

Posted by wa...@apache.org.
YARN-7959. Add .vm extension to PlacementConstraints.md to ensure proper filtering. (Weiwei Yang via wangda)

Change-Id: I34ec516994a5de7accaa5ef7044e443b340bd294
(cherry picked from commit 0d516b30769471d9114214b3a518e2146ff2f25e)


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/eea0b069
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/eea0b069
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/eea0b069

Branch: refs/heads/trunk
Commit: eea0b069e73713eb83ea138cc84f497bdfed6bdb
Parents: 5bbbbb6
Author: Wangda Tan <wa...@apache.org>
Authored: Tue Feb 27 11:15:17 2018 -0800
Committer: Wangda Tan <wa...@apache.org>
Committed: Tue Feb 27 13:19:24 2018 -0800

----------------------------------------------------------------------
 .../src/site/markdown/PlacementConstraints.md   | 136 -------------------
 .../site/markdown/PlacementConstraints.md.vm    | 136 +++++++++++++++++++
 2 files changed, 136 insertions(+), 136 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/eea0b069/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
deleted file mode 100644
index 6af62e7..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md
+++ /dev/null
@@ -1,136 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-Placement Constraints
-=====================
-
-<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
-
-
-Overview
---------
-
-YARN allows applications to specify placement constraints in the form of data locality (preference to specific nodes or racks) or (non-overlapping) node labels. This document focuses on more expressive placement constraints in YARN. Such constraints can be crucial for the performance and resilience of applications, especially those that include long-running containers, such as services, machine-learning and streaming workloads.
-
-For example, it may be beneficial to co-locate the allocations of a job on the same rack (*affinity* constraints) to reduce network costs, spread allocations across machines (*anti-affinity* constraints) to minimize resource interference, or allow up to a specific number of allocations in a node group (*cardinality* constraints) to strike a balance between the two. Placement decisions also affect resilience. For example, allocations placed within the same cluster upgrade domain would go offline simultaneously.
-
-Applications can specify constraints without requiring knowledge of the underlying topology of the cluster (e.g., one does not need to specify the specific node or rack where containers should be placed when using constraints) or of the other applications deployed. Currently **intra-application** constraints are supported, but the design is generic and support for constraints across applications will soon be added. Moreover, all constraints at the moment are **hard**, that is, if the constraints for a container cannot be satisfied due to the current cluster condition or conflicting constraints, the container request will remain pending or will get rejected.
-
-Note that in this document we use the notion of “allocation” to refer to a unit of resources (e.g., CPU and memory) that gets allocated on a node. In the current implementation of YARN, an allocation corresponds to a single container. However, in case an application uses an allocation to spawn more than one container, an allocation could correspond to multiple containers.
-
-
-Quick Guide
------------
-
-We first describe how to enable scheduling with placement constraints and then provide examples of how to experiment with this feature using the distributed shell, an application that allows running a given shell command on a set of containers.
-
-### Enabling placement constraints
-
-To enable placement constraints, the following property has to be set to `placement-processor` or `scheduler` in **conf/yarn-site.xml**:
-
-| Property | Description | Default value |
-|:-------- |:----------- |:------------- |
-| `yarn.resourcemanager.placement-constraints.handler` | Specify which handler will be used to process PlacementConstraints. Acceptable values are: `placement-processor`, `scheduler`, and `disabled`. | `disabled` |
-
-We now give more details about each of the three placement constraint handlers:
-
-* `placement-processor`: Using this handler, the placement of containers with constraints is determined as a pre-processing step before the capacity or the fair scheduler is called. Once the placement is decided, the capacity/fair scheduler is invoked to perform the actual allocation. The advantage of this handler is that it supports all constraint types (affinity, anti-affinity, cardinality). Moreover, it considers multiple containers at a time, which allows it to satisfy more constraints than a container-at-a-time approach can achieve. As it sits outside the main scheduler, it can be used by both the capacity and fair schedulers. Note that at the moment it does not account for task priorities within an application, given that such priorities might conflict with the placement constraints.
-* `scheduler`: Using this handler, containers with constraints will be placed by the main scheduler (as of now, only the capacity scheduler supports SchedulingRequests). It currently supports anti-affinity constraints (no affinity or cardinality). The advantage of this handler, when compared to the `placement-processor`, is that it follows the same ordering rules for queues (sorted by utilization, priority), apps (sorted by FIFO/fairness/priority) and tasks within the same app (priority) that are enforced by the existing main scheduler.
-* `disabled`: Using this handler, if an application issues a SchedulingRequest, the corresponding allocate call will be rejected.
-
-The `placement-processor` handler supports a wider range of constraints and can allow more containers to be placed, especially when applications have demanding constraints or the cluster is highly-utilized (due to considering multiple containers at a time). However, if respecting task priority within an application is important for the user and the capacity scheduler is used, then the `scheduler` handler should be used instead.
-
-### Experimenting with placement constraints using distributed shell
-
-Users can experiment with placement constraints by using the distributed shell application through the following command:
-
-```
-$ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-${project.version}.jar -shell_command sleep -shell_args 10 -placement_spec PlacementSpec
-```
-
-where **PlacementSpec** is of the form:
-
-```
-PlacementSpec => "" | KeyVal;PlacementSpec
-KeyVal        => SourceTag=Constraint
-SourceTag     => String
-Constraint    => NumContainers | NumContainers,"IN",Scope,TargetTag | NumContainers,"NOTIN",Scope,TargetTag | NumContainers,"CARDINALITY",Scope,TargetTag,MinCard,MaxCard
-NumContainers => int
-Scope         => "NODE" | "RACK"
-TargetTag     => String
-MinCard       => int
-MaxCard       => int
-```
-
-Note that when the `-placement_spec` argument is specified in the distributed shell command, the `-num-containers` argument should not be used. If the `-num-containers` argument is used in conjunction with `-placement_spec`, the former is ignored. This is because the PlacementSpec determines the number of containers per tag, making `-num-containers` redundant and possibly conflicting. Moreover, if `-placement_spec` is used, all containers will be requested with the GUARANTEED execution type.
-
-An example of PlacementSpec is the following:
-```
-zk=3,NOTIN,NODE,zk:hbase=5,IN,RACK,zk:spark=7,CARDINALITY,NODE,hbase,1,3
-```
-The above encodes three constraints:
-* place 3 containers with tag "zk" (standing for ZooKeeper) with node anti-affinity to each other, i.e., do not place more than one container per node (notice that in this first constraint, the SourceTag and the TargetTag of the constraint coincide);
-* place 5 containers with tag "hbase" with affinity to a rack on which containers with tag "zk" are running (i.e., an "hbase" container should be placed on a rack where a "zk" container is running, given that "zk" is the TargetTag of the second constraint);
-* place 7 containers with tag "spark" on nodes that have at least one, but no more than three, containers with tag "hbase".
-
-
-
-Defining Placement Constraints
-------------------------------
-
-### Allocation tags
-
-Allocation tags are string tags that an application can associate with (groups of) its containers. Tags are used to identify components of applications. For example, an HBase Master allocation can be tagged with "hbase-m", and Region Servers with "hbase-rs". Other examples are "latency-critical" to refer to the more general demands of the allocation, or "app_0041" to denote the job ID. Allocation tags play a key role in constraints, as they allow constraints to refer to multiple allocations that share a common tag.
-
-Note that instead of using the `ResourceRequest` object to define allocation tags, we use the new `SchedulingRequest` object. This has many similarities with the `ResourceRequest`, but better separates the sizing of the requested allocations (number and size of allocations, priority, execution type, etc.), and the constraints dictating how these allocations should be placed (resource name, relaxed locality). Applications can still use `ResourceRequest` objects, but in order to define allocation tags and constraints, they need to use the `SchedulingRequest` object. Within a single `AllocateRequest`, an application should use either the `ResourceRequest` or the `SchedulingRequest` objects, but not both of them.
-
-#### Differences between node labels, node attributes and allocation tags
-
-The difference between allocation tags and node labels or node attributes (YARN-3409) is that allocation tags are attached to allocations and not to nodes. When an allocation gets allocated to a node by the scheduler, the set of tags of that allocation is automatically added to the node for the duration of the allocation. Hence, a node inherits the tags of the allocations that are currently allocated to the node. Likewise, a rack inherits the tags of its nodes. Moreover, similar to node labels and unlike node attributes, allocation tags have no value attached to them. As we show below, our constraints can refer to allocation tags, as well as node labels and node attributes.
-
-
-### Placement constraints API
-
-Applications can use the public API in the `PlacementConstraints` class to construct placement constraints. Before describing the methods for building constraints, we describe the methods of the `PlacementTargets` class that are used to construct the target expressions that will then be used in constraints:
-
-| Method | Description |
-|:------ |:----------- |
-| `allocationTag(String... allocationTags)` | Constructs a target expression on an allocation tag. It is satisfied if there are allocations with one of the given tags. |
-| `allocationTagToIntraApp(String... allocationTags)` | Similar to `allocationTag(String...)`, but targeting only the containers of the application that will use this target (intra-application constraints). |
-| `nodePartition(String... nodePartitions)` | Constructs a target expression on a node partition. It is satisfied for nodes that belong to one of the `nodePartitions`. |
-| `nodeAttribute(String attributeKey, String... attributeValues)` | Constructs a target expression on a node attribute. It is satisfied if the specified node attribute has one of the specified values. |
-
-Note that the `nodeAttribute` method above is not yet functional, as it requires the ongoing node attributes feature.
-
-The methods of the `PlacementConstraints` class for building constraints are the following:
-
-| Method | Description |
-|:------ |:----------- |
-| `targetIn(String scope, TargetExpression... targetExpressions)` | Creates a constraint that requires allocations to be placed on nodes that satisfy all target expressions within the given scope (e.g., node or rack). For example, `targetIn(RACK, allocationTag("hbase-m"))`, allows allocations on nodes that belong to a rack that has at least one allocation with tag "hbase-m". |
-| `targetNotIn(String scope, TargetExpression... targetExpressions)` | Creates a constraint that requires allocations to be placed on nodes that belong to a scope (e.g., node or rack) that does not satisfy any of the target expressions. |
-| `cardinality(String scope, int minCardinality, int maxCardinality, String... allocationTags)` | Creates a constraint that restricts the number of allocations within a given scope (e.g., node or rack). For example, `cardinality(NODE, 3, 10, "zk")` is satisfied on nodes where there are no less than 3 allocations with tag "zk" and no more than 10. |
-| `minCardinality(String scope, int minCardinality, String... allocationTags)` | Similar to `cardinality(String, int, int, String...)`, but determines only the minimum cardinality (the maximum cardinality is unbound). |
-| `maxCardinality(String scope, int maxCardinality, String... allocationTags)` | Similar to `cardinality(String, int, int, String...)`, but determines only the maximum cardinality (the minimum cardinality is 0). |
-| `targetCardinality(String scope, int minCardinality, int maxCardinality, String... allocationTags)` | This constraint generalizes the cardinality and target constraints. Consider a set of nodes N that belongs to the scope specified in the constraint. If the target expressions are satisfied at least minCardinality times and at most maxCardinality times in the node set N, then the constraint is satisfied. For example, `targetCardinality(RACK, 2, 10, allocationTag("zk"))`, requires an allocation to be placed within a rack that has at least 2 and at most 10 other allocations with tag "zk". |
-
-The `PlacementConstraints` class also includes methods for building compound constraints (AND/OR expressions with multiple constraints). Adding support for compound constraints is work in progress.
-
-
-### Specifying constraints in applications
-
-Applications have to specify the containers for which each constraint will be enabled. To this end, applications can provide a mapping from a set of allocation tags (source tags) to a placement constraint. For example, an entry of this mapping could be "hbase"->constraint1, which means that constraint1 will be applied when scheduling each allocation with tag "hbase".
-
-When using the `placement-processor` handler (see [Enabling placement constraints](#Enabling_placement_constraints)), this constraint mapping is specified within the `RegisterApplicationMasterRequest`.
-
-When using the `scheduler` handler, the constraints can also be added at each `SchedulingRequest` object. Each such constraint is valid for the tag of that scheduling request. In case constraints are specified both at the `RegisterApplicationMasterRequest` and the scheduling requests, the latter override the former.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/eea0b069/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
new file mode 100644
index 0000000..6af62e7
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/PlacementConstraints.md.vm
@@ -0,0 +1,136 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Placement Constraints
+=====================
+
+<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
+
+
+Overview
+--------
+
+YARN allows applications to specify placement constraints in the form of data locality (preference to specific nodes or racks) or (non-overlapping) node labels. This document focuses on more expressive placement constraints in YARN. Such constraints can be crucial for the performance and resilience of applications, especially those that include long-running containers, such as services, machine-learning and streaming workloads.
+
+For example, it may be beneficial to co-locate the allocations of a job on the same rack (*affinity* constraints) to reduce network costs, spread allocations across machines (*anti-affinity* constraints) to minimize resource interference, or allow up to a specific number of allocations in a node group (*cardinality* constraints) to strike a balance between the two. Placement decisions also affect resilience. For example, allocations placed within the same cluster upgrade domain would go offline simultaneously.
+
+Applications can specify constraints without requiring knowledge of the underlying topology of the cluster (e.g., one does not need to specify the specific node or rack where containers should be placed when using constraints) or of the other applications deployed. Currently **intra-application** constraints are supported, but the design is generic and support for constraints across applications will soon be added. Moreover, all constraints at the moment are **hard**, that is, if the constraints for a container cannot be satisfied due to the current cluster condition or conflicting constraints, the container request will remain pending or will get rejected.
+
+Note that in this document we use the notion of “allocation” to refer to a unit of resources (e.g., CPU and memory) that gets allocated on a node. In the current implementation of YARN, an allocation corresponds to a single container. However, in case an application uses an allocation to spawn more than one container, an allocation could correspond to multiple containers.
+
+
+Quick Guide
+-----------
+
+We first describe how to enable scheduling with placement constraints and then provide examples of how to experiment with this feature using the distributed shell, an application that allows running a given shell command on a set of containers.
+
+### Enabling placement constraints
+
+To enable placement constraints, the following property has to be set to `placement-processor` or `scheduler` in **conf/yarn-site.xml**:
+
+| Property | Description | Default value |
+|:-------- |:----------- |:------------- |
+| `yarn.resourcemanager.placement-constraints.handler` | Specify which handler will be used to process PlacementConstraints. Acceptable values are: `placement-processor`, `scheduler`, and `disabled`. | `disabled` |
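+
+For example, a minimal `yarn-site.xml` sketch of this setting (with `placement-processor` chosen from the acceptable values above):
+
+```
+<property>
+  <name>yarn.resourcemanager.placement-constraints.handler</name>
+  <value>placement-processor</value>
+</property>
+```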
+
+We now give more details about each of the three placement constraint handlers:
+
+* `placement-processor`: Using this handler, the placement of containers with constraints is determined as a pre-processing step before the capacity or the fair scheduler is called. Once the placement is decided, the capacity/fair scheduler is invoked to perform the actual allocation. The advantage of this handler is that it supports all constraint types (affinity, anti-affinity, cardinality). Moreover, it considers multiple containers at a time, which allows it to satisfy more constraints than a container-at-a-time approach can achieve. As it sits outside the main scheduler, it can be used by both the capacity and fair schedulers. Note that at the moment it does not account for task priorities within an application, given that such priorities might conflict with the placement constraints.
+* `scheduler`: Using this handler, containers with constraints will be placed by the main scheduler (as of now, only the capacity scheduler supports SchedulingRequests). It currently supports anti-affinity constraints (no affinity or cardinality). The advantage of this handler, when compared to the `placement-processor`, is that it follows the same ordering rules for queues (sorted by utilization, priority), apps (sorted by FIFO/fairness/priority) and tasks within the same app (priority) that are enforced by the existing main scheduler.
+* `disabled`: Using this handler, if an application issues a SchedulingRequest, the corresponding allocate call will be rejected.
+
+The `placement-processor` handler supports a wider range of constraints and can allow more containers to be placed, especially when applications have demanding constraints or the cluster is highly-utilized (due to considering multiple containers at a time). However, if respecting task priority within an application is important for the user and the capacity scheduler is used, then the `scheduler` handler should be used instead.
+
+### Experimenting with placement constraints using distributed shell
+
+Users can experiment with placement constraints by using the distributed shell application through the following command:
+
+```
+$ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-${project.version}.jar -shell_command sleep -shell_args 10 -placement_spec PlacementSpec
+```
+
+where **PlacementSpec** is of the form:
+
+```
+PlacementSpec => "" | KeyVal;PlacementSpec
+KeyVal        => SourceTag=Constraint
+SourceTag     => String
+Constraint    => NumContainers | NumContainers,"IN",Scope,TargetTag | NumContainers,"NOTIN",Scope,TargetTag | NumContainers,"CARDINALITY",Scope,TargetTag,MinCard,MaxCard
+NumContainers => int
+Scope         => "NODE" | "RACK"
+TargetTag     => String
+MinCard       => int
+MaxCard       => int
+```
+
+Note that when the `-placement_spec` argument is specified in the distributed shell command, the `-num-containers` argument should not be used. If the `-num-containers` argument is used in conjunction with `-placement_spec`, the former is ignored. This is because the PlacementSpec determines the number of containers per tag, making `-num-containers` redundant and possibly conflicting. Moreover, if `-placement_spec` is used, all containers will be requested with the GUARANTEED execution type.
+
+An example of PlacementSpec is the following:
+```
+zk=3,NOTIN,NODE,zk:hbase=5,IN,RACK,zk:spark=7,CARDINALITY,NODE,hbase,1,3
+```
+The above encodes three constraints (a complete command using this spec is shown after the list):
+* place 3 containers with tag "zk" (standing for ZooKeeper) with node anti-affinity to each other, i.e., do not place more than one container per node (notice that in this first constraint, the SourceTag and the TargetTag of the constraint coincide);
+* place 5 containers with tag "hbase" with affinity to a rack on which containers with tag "zk" are running (i.e., an "hbase" container should be placed on a rack where a "zk" container is running, given that "zk" is the TargetTag of the second constraint);
+* place 7 containers with tag "spark" on nodes that have at least one, but no more than three, containers with tag "hbase".
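+
+Putting the pieces together, the example spec above can be passed to the distributed shell roughly as follows (the same command as in the quick guide, with the spec substituted in):
+
+```
+$ yarn org.apache.hadoop.yarn.applications.distributedshell.Client -jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-${project.version}.jar -shell_command sleep -shell_args 10 -placement_spec "zk=3,NOTIN,NODE,zk:hbase=5,IN,RACK,zk:spark=7,CARDINALITY,NODE,hbase,1,3"
+```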
+
+
+
+Defining Placement Constraints
+------------------------------
+
+### Allocation tags
+
+Allocation tags are string tags that an application can associate with (groups of) its containers. Tags are used to identify components of applications. For example, an HBase Master allocation can be tagged with "hbase-m", and Region Servers with "hbase-rs". Other examples are "latency-critical" to refer to the more general demands of the allocation, or "app_0041" to denote the job ID. Allocation tags play a key role in constraints, as they allow constraints to refer to multiple allocations that share a common tag.
+
+Note that instead of using the `ResourceRequest` object to define allocation tags, we use the new `SchedulingRequest` object. This has many similarities with the `ResourceRequest`, but better separates the sizing of the requested allocations (number and size of allocations, priority, execution type, etc.), and the constraints dictating how these allocations should be placed (resource name, relaxed locality). Applications can still use `ResourceRequest` objects, but in order to define allocation tags and constraints, they need to use the `SchedulingRequest` object. Within a single `AllocateRequest`, an application should use either the `ResourceRequest` or the `SchedulingRequest` objects, but not both of them.
+
+#### Differences between node labels, node attributes and allocation tags
+
+The difference between allocation tags and node labels or node attributes (YARN-3409) is that allocation tags are attached to allocations and not to nodes. When an allocation gets allocated to a node by the scheduler, the set of tags of that allocation is automatically added to the node for the duration of the allocation. Hence, a node inherits the tags of the allocations that are currently allocated to the node. Likewise, a rack inherits the tags of its nodes. Moreover, similar to node labels and unlike node attributes, allocation tags have no value attached to them. As we show below, our constraints can refer to allocation tags, as well as node labels and node attributes.
+
+
+### Placement constraints API
+
+Applications can use the public API in the `PlacementConstraints` class to construct placement constraints. Before describing the methods for building constraints, we describe the methods of the `PlacementTargets` class that are used to construct the target expressions that will then be used in constraints:
+
+| Method | Description |
+|:------ |:----------- |
+| `allocationTag(String... allocationTags)` | Constructs a target expression on an allocation tag. It is satisfied if there are allocations with one of the given tags. |
+| `allocationTagToIntraApp(String... allocationTags)` | Similar to `allocationTag(String...)`, but targeting only the containers of the application that will use this target (intra-application constraints). |
+| `nodePartition(String... nodePartitions)` | Constructs a target expression on a node partition. It is satisfied for nodes that belong to one of the `nodePartitions`. |
+| `nodeAttribute(String attributeKey, String... attributeValues)` | Constructs a target expression on a node attribute. It is satisfied if the specified node attribute has one of the specified values. |
+
+Note that the `nodeAttribute` method above is not yet functional, as it requires the ongoing node attributes feature.
+
+The methods of the `PlacementConstraints` class for building constraints are the following:
+
+| Method | Description |
+|:------ |:----------- |
+| `targetIn(String scope, TargetExpression... targetExpressions)` | Creates a constraint that requires allocations to be placed on nodes that satisfy all target expressions within the given scope (e.g., node or rack). For example, `targetIn(RACK, allocationTag("hbase-m"))`, allows allocations on nodes that belong to a rack that has at least one allocation with tag "hbase-m". |
+| `targetNotIn(String scope, TargetExpression... targetExpressions)` | Creates a constraint that requires allocations to be placed on nodes that belong to a scope (e.g., node or rack) that does not satisfy any of the target expressions. |
+| `cardinality(String scope, int minCardinality, int maxCardinality, String... allocationTags)` | Creates a constraint that restricts the number of allocations within a given scope (e.g., node or rack). For example, `cardinality(NODE, 3, 10, "zk")` is satisfied on nodes where there are no less than 3 allocations with tag "zk" and no more than 10. |
+| `minCardinality(String scope, int minCardinality, String... allocationTags)` | Similar to `cardinality(String, int, int, String...)`, but determines only the minimum cardinality (the maximum cardinality is unbound). |
+| `maxCardinality(String scope, int maxCardinality, String... allocationTags)` | Similar to `cardinality(String, int, int, String...)`, but determines only the maximum cardinality (the minimum cardinality is 0). |
+| `targetCardinality(String scope, int minCardinality, int maxCardinality, String... allocationTags)` | This constraint generalizes the cardinality and target constraints. Consider a set of nodes N that belongs to the scope specified in the constraint. If the target expressions are satisfied at least minCardinality times and at most maxCardinality times in the node set N, then the constraint is satisfied. For example, `targetCardinality(RACK, 2, 10, allocationTag("zk"))`, requires an allocation to be placed within a rack that has at least 2 and at most 10 other allocations with tag "zk". |
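+
+As a brief, illustrative Java sketch of how these builders can be combined (based on the method signatures in the tables above; the class and variable names below exist only for this example, and `build()` is assumed to finalize the builder into a `PlacementConstraint`):
+
+```
+import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
+import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.RACK;
+import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetIn;
+import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;
+import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;
+
+import org.apache.hadoop.yarn.api.resource.PlacementConstraint;
+
+public class ConstraintExamples {
+  public static void main(String[] args) {
+    // Node anti-affinity: do not place a "zk" allocation on a node that already hosts one.
+    PlacementConstraint zkAntiAffinity = targetNotIn(NODE, allocationTag("zk")).build();
+
+    // Rack affinity: place allocations only on racks that already host an "hbase-m" allocation.
+    PlacementConstraint hbaseMasterAffinity = targetIn(RACK, allocationTag("hbase-m")).build();
+
+    System.out.println(zkAntiAffinity);
+    System.out.println(hbaseMasterAffinity);
+  }
+}
+```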
+
+The `PlacementConstraints` class also includes methods for building compound constraints (AND/OR expressions with multiple constraints). Adding support for compound constraints is work in progress.
+
+
+### Specifying constraints in applications
+
+Applications have to specify the containers for which each constraint will be enabled. To this end, applications can provide a mapping from a set of allocation tags (source tags) to a placement constraint. For example, an entry of this mapping could be "hbase"->constraint1, which means that constraint1 will be applied when scheduling each allocation with tag "hbase".
+
+When using the `placement-processor` handler (see [Enabling placement constraints](#Enabling_placement_constraints)), this constraint mapping is specified within the `RegisterApplicationMasterRequest`.
+
+When using the `scheduler` handler, the constraints can also be added at each `SchedulingRequest` object. Each such constraint is valid for the tag of that scheduling request. In case constraints are specified both at the `RegisterApplicationMasterRequest` and the scheduling requests, the latter override the former.


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org