Posted to commits@mesos.apache.org by mp...@apache.org on 2016/02/29 09:09:59 UTC

[1/2] mesos git commit: Consistent markdown code style in `docs/persistent-volume.md`.

Repository: mesos
Updated Branches:
  refs/heads/master e2a3cd63b -> a456e9def


Consistent markdown code style in `docs/persistent-volume.md`.

Review: https://reviews.apache.org/r/43634/


Project: http://git-wip-us.apache.org/repos/asf/mesos/repo
Commit: http://git-wip-us.apache.org/repos/asf/mesos/commit/ecb125d3
Tree: http://git-wip-us.apache.org/repos/asf/mesos/tree/ecb125d3
Diff: http://git-wip-us.apache.org/repos/asf/mesos/diff/ecb125d3

Branch: refs/heads/master
Commit: ecb125d3797bd1911359ef1e0e9b96eddbc68209
Parents: e2a3cd6
Author: Joerg Schad <jo...@mesosphere.io>
Authored: Mon Feb 29 02:34:44 2016 -0500
Committer: Michael Park <mp...@apache.org>
Committed: Mon Feb 29 03:07:29 2016 -0500

----------------------------------------------------------------------
 docs/persistent-volume.md | 353 ++++++++++++++++++++---------------------
 1 file changed, 168 insertions(+), 185 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/mesos/blob/ecb125d3/docs/persistent-volume.md
----------------------------------------------------------------------
diff --git a/docs/persistent-volume.md b/docs/persistent-volume.md
index e0fe559..4b9c59d 100644
--- a/docs/persistent-volume.md
+++ b/docs/persistent-volume.md
@@ -56,25 +56,23 @@ interfaces described above.
 A framework can create volumes through the resource offer cycle.  Suppose we
 receive a resource offer with 2048 MB of dynamically reserved disk.
 
-```
-{
-  "id" : <offer_id>,
-  "framework_id" : <framework_id>,
-  "slave_id" : <slave_id>,
-  "hostname" : <hostname>,
-  "resources" : [
     {
-      "name" : "disk",
-      "type" : "SCALAR",
-      "scalar" : { "value" : 2048 },
-      "role" : <framework_role>,
-      "reservation" : {
-        "principal" : <framework_principal>
-      }
+      "id" : <offer_id>,
+      "framework_id" : <framework_id>,
+      "slave_id" : <slave_id>,
+      "hostname" : <hostname>,
+      "resources" : [
+        {
+          "name" : "disk",
+          "type" : "SCALAR",
+          "scalar" : { "value" : 2048 },
+          "role" : <framework_role>,
+          "reservation" : {
+            "principal" : <framework_principal>
+          }
+        }
+      ]
     }
-  ]
-}
-```
 
 We can create a persistent volume from the 2048 MB of disk resources by sending
 an `Offer::Operation` message via the `acceptOffers` API.
@@ -85,66 +83,61 @@ volume information. We need to specify the following:
 1. The non-nested relative path within the container to mount the volume.
 1. The permissions for the volume. Currently, `"RW"` is the only possible value.
 
-```
-{
-  "type" : Offer::Operation::CREATE,
-  "create": {
-    "volumes" : [
-      {
-        "name" : "disk",
-        "type" : "SCALAR",
-        "scalar" : { "value" : 2048 },
-        "role" : <framework_role>,
-        "reservation" : {
-          "principal" : <framework_principal>
-        },
-        "disk": {
-          "persistence": {
-            "id" : <persistent_volume_id>
-          },
-          "volume" : {
-            "container_path" : <container_path>,
-            "mode" : <mode>
+        {
+          "type" : Offer::Operation::CREATE,
+          "create": {
+            "volumes" : [
+              {
+                "name" : "disk",
+                "type" : "SCALAR",
+                "scalar" : { "value" : 2048 },
+                "role" : <framework_role>,
+                "reservation" : {
+                  "principal" : <framework_principal>
+                },
+                "disk": {
+                  "persistence": {
+                    "id" : <persistent_volume_id>
+                  },
+                  "volume" : {
+                    "container_path" : <container_path>,
+                    "mode" : <mode>
+                  }
+                }
+              }
+            ]
           }
         }
-      }
-    ]
-  }
-}
-```
 
 If this succeeds, a subsequent resource offer will contain the following
 persistent volume:
 
-```
-{
-  "id" : <offer_id>,
-  "framework_id" : <framework_id>,
-  "slave_id" : <slave_id>,
-  "hostname" : <hostname>,
-  "resources" : [
     {
-      "name" : "disk",
-      "type" : "SCALAR",
-      "scalar" : { "value" : 2048 },
-      "role" : <framework_role>,
-      "reservation" : {
-        "principal" : <framework_principal>
-      },
-      "disk": {
-        "persistence": {
-          "id" : <persistent_volume_id>
-        },
-        "volume" : {
-          "container_path" : <container_path>,
-          "mode" : <mode>
+      "id" : <offer_id>,
+      "framework_id" : <framework_id>,
+      "slave_id" : <slave_id>,
+      "hostname" : <hostname>,
+      "resources" : [
+        {
+          "name" : "disk",
+          "type" : "SCALAR",
+          "scalar" : { "value" : 2048 },
+          "role" : <framework_role>,
+          "reservation" : {
+            "principal" : <framework_principal>
+          },
+          "disk": {
+            "persistence": {
+              "id" : <persistent_volume_id>
+            },
+            "volume" : {
+              "container_path" : <container_path>,
+              "mode" : <mode>
+            }
+          }
         }
-      }
+      ]
     }
-  ]
-}
-```
-
 
 #### `Offer::Operation::Destroy`
 
@@ -154,90 +147,84 @@ volume from 2048 MB of disk resources. Mesos will not garbage-collect this
 volume until we explicitly destroy it. Suppose we would like to destroy the
 volume we created. First, we receive a resource offer (copy/pasted from above):
 
-```
-{
-  "id" : <offer_id>,
-  "framework_id" : <framework_id>,
-  "slave_id" : <slave_id>,
-  "hostname" : <hostname>,
-  "resources" : [
     {
-      "name" : "disk",
-      "type" : "SCALAR",
-      "scalar" : { "value" : 2048 },
-      "role" : <framework_role>,
-      "reservation" : {
-        "principal" : <framework_principal>
-      },
-      "disk": {
-        "persistence": {
-          "id" : <persistent_volume_id>
-        },
-        "volume" : {
-          "container_path" : <container_path>,
-          "mode" : <mode>
+      "id" : <offer_id>,
+      "framework_id" : <framework_id>,
+      "slave_id" : <slave_id>,
+      "hostname" : <hostname>,
+      "resources" : [
+        {
+          "name" : "disk",
+          "type" : "SCALAR",
+          "scalar" : { "value" : 2048 },
+          "role" : <framework_role>,
+          "reservation" : {
+            "principal" : <framework_principal>
+          },
+          "disk": {
+            "persistence": {
+              "id" : <persistent_volume_id>
+            },
+            "volume" : {
+              "container_path" : <container_path>,
+              "mode" : <mode>
+            }
+          }
         }
-      }
+      ]
     }
-  ]
-}
-```
 
 We destroy the persistent volume by sending the `Offer::Operation` message via
 the `acceptOffers` API. `Offer::Operation::Destroy` has a `volumes` field which
 specifies the persistent volumes to be destroyed.
 
-```
-{
-  "type" : Offer::Operation::DESTROY,
-  "destroy" : {
-    "volumes" : [
-      {
-        "name" : "disk",
-        "type" : "SCALAR",
-        "scalar" : { "value" : 2048 },
-        "role" : <framework_role>,
-        "reservation" : {
-          "principal" : <framework_principal>
-        },
-        "disk": {
-          "persistence": {
-            "id" : <persistent_volume_id>
-          },
-          "volume" : {
-            "container_path" : <container_path>,
-            "mode" : <mode>
+    {
+      "type" : Offer::Operation::DESTROY,
+      "destroy" : {
+        "volumes" : [
+          {
+            "name" : "disk",
+            "type" : "SCALAR",
+            "scalar" : { "value" : 2048 },
+            "role" : <framework_role>,
+            "reservation" : {
+              "principal" : <framework_principal>
+            },
+            "disk": {
+              "persistence": {
+                "id" : <persistent_volume_id>
+              },
+              "volume" : {
+                "container_path" : <container_path>,
+                "mode" : <mode>
+              }
+            }
           }
-        }
+        ]
       }
-    ]
-  }
-}
-```
+    }
 
 If this request succeeds, the persistent volume will be destroyed but the disk
 resources will still be reserved. As such, a subsequent resource offer will
 contain the following reserved disk resources:
 
-```
-{
-  "id" : <offer_id>,
-  "framework_id" : <framework_id>,
-  "slave_id" : <slave_id>,
-  "hostname" : <hostname>,
-  "resources" : [
     {
-      "name" : "disk",
-      "type" : "SCALAR",
-      "scalar" : { "value" : 2048 },
-      "role" : <framework_role>,
-      "reservation" : {
-        "principal" : <framework_principal>
-      }
+      "id" : <offer_id>,
+      "framework_id" : <framework_id>,
+      "slave_id" : <slave_id>,
+      "hostname" : <hostname>,
+      "resources" : [
+        {
+          "name" : "disk",
+          "type" : "SCALAR",
+          "scalar" : { "value" : 2048 },
+          "role" : <framework_role>,
+          "reservation" : {
+            "principal" : <framework_principal>
+          }
+        }
+      ]
     }
-  ]
-}
-```
 
 Those reserved resources can then be used as normal: e.g., they can be used to
 create another persistent volume or can be unreserved.
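
As a sketch (using the same notation as the examples above), an `Offer::Operation::UNRESERVE` message that releases the reserved disk might look like the following; treat it as illustrative rather than canonical:

    {
      "type" : Offer::Operation::UNRESERVE,
      "unreserve" : {
        "resources" : [
          {
            "name" : "disk",
            "type" : "SCALAR",
            "scalar" : { "value" : 2048 },
            "role" : <framework_role>,
            "reservation" : {
              "principal" : <framework_principal>
            }
          }
        ]
      }
    }

Unreserving is covered in more detail in the reservation documentation.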
@@ -266,32 +253,30 @@ To create a 512MB persistent volume for the `ads` role on a dynamically reserved
 disk resource, we can send an HTTP POST request to the master's
 [/create-volumes](endpoints/master/create-volumes.md) endpoint like so:
 
-```
-curl -i \
-     -u <operator_principal>:<password> \
-     -d slaveId=<slave_id> \
-     -d volumes='[
-       {
-         "name": "disk",
-         "type": "SCALAR",
-         "scalar": { "value": 512 },
-         "role": "ads",
-         "reservation": {
-           "principal": <operator_principal>
-         },
-         "disk": {
-           "persistence": {
-             "id" : <persistence_id>
-           },
-           "volume": {
-             "mode": "RW",
-             "container_path": <path>
+    curl -i \
+         -u <operator_principal>:<password> \
+         -d slaveId=<slave_id> \
+         -d volumes='[
+           {
+             "name": "disk",
+             "type": "SCALAR",
+             "scalar": { "value": 512 },
+             "role": "ads",
+             "reservation": {
+               "principal": <operator_principal>
+             },
+             "disk": {
+               "persistence": {
+                 "id" : <persistence_id>
+               },
+               "volume": {
+                 "mode": "RW",
+                 "container_path": <path>
+               }
+             }
            }
-         }
-       }
-     ]' \
-     -X POST http://<ip>:<port>/master/create-volumes
-```
+         ]' \
+         -X POST http://<ip>:<port>/master/create-volumes
 
 The user receives one of the following HTTP responses:
 
@@ -319,32 +304,30 @@ user can examine the state of the appropriate Mesos slave (e.g., via the slave's
 To destroy the volume created above, we can send an HTTP POST to the master's
 [/destroy-volumes](endpoints/master/destroy-volumes.md) endpoint like so:
 
-```
-curl -i \
-     -u <operator_principal>:<password> \
-     -d slaveId=<slave_id> \
-     -d volumes='[
-       {
-         "name": "disk",
-         "type": "SCALAR",
-         "scalar": { "value": 512 },
-         "role": "ads",
-         "reservation": {
-           "principal": <operator_principal>
-         },
-         "disk": {
-           "persistence": {
-             "id" : <persistence_id>
-           },
-           "volume": {
-             "mode": "RW",
-             "container_path": <path>
+    curl -i \
+         -u <operator_principal>:<password> \
+         -d slaveId=<slave_id> \
+         -d volumes='[
+           {
+             "name": "disk",
+             "type": "SCALAR",
+             "scalar": { "value": 512 },
+             "role": "ads",
+             "reservation": {
+               "principal": <operator_principal>
+             },
+             "disk": {
+               "persistence": {
+                 "id" : <persistence_id>
+               },
+               "volume": {
+                 "mode": "RW",
+                 "container_path": <path>
+               }
+             }
            }
-         }
-       }
-     ]' \
-     -X POST http://<ip>:<port>/master/destroy-volumes
-```
+         ]' \
+         -X POST http://<ip>:<port>/master/destroy-volumes
 
 The user receives one of the following HTTP responses:
 


[2/2] mesos git commit: Made bullet point structure consistent in `docs/upgrades.md`.

Posted by mp...@apache.org.
Made bullet point structure consistent in `docs/upgrades.md`.

Review: https://reviews.apache.org/r/43792/


Project: http://git-wip-us.apache.org/repos/asf/mesos/repo
Commit: http://git-wip-us.apache.org/repos/asf/mesos/commit/a456e9de
Tree: http://git-wip-us.apache.org/repos/asf/mesos/tree/a456e9de
Diff: http://git-wip-us.apache.org/repos/asf/mesos/diff/a456e9de

Branch: refs/heads/master
Commit: a456e9def13a0d8e7343354c41e1a5c1a9bd0894
Parents: ecb125d
Author: Joerg Schad <jo...@mesosphere.io>
Authored: Mon Feb 29 02:44:26 2016 -0500
Committer: Michael Park <mp...@apache.org>
Committed: Mon Feb 29 03:07:30 2016 -0500

----------------------------------------------------------------------
 docs/upgrades.md | 325 +++++++++++++++++++++++++-------------------------
 1 file changed, 165 insertions(+), 160 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/mesos/blob/a456e9de/docs/upgrades.md
----------------------------------------------------------------------
diff --git a/docs/upgrades.md b/docs/upgrades.md
index 177fd9a..b22e87f 100644
--- a/docs/upgrades.md
+++ b/docs/upgrades.md
@@ -29,104 +29,104 @@ This document serves as a guide for users who wish to upgrade an existing Mesos
 
 ## Upgrading from 0.25.x to 0.26.x ##
 
-**NOTE** The names of some TaskStatus::Reason enums have been changed. But the tag numbers remain unchanged, so it is backwards compatible. Frameworks using the new version might need to do some compile time adjustments:
+* The names of some TaskStatus::Reason enums have been changed, but the tag numbers remain unchanged, so the change is backwards compatible. Frameworks using the new version might need to make some compile-time adjustments:
 
-* REASON_MEM_LIMIT -> REASON_CONTAINER_LIMITATION_MEMORY
-* REASON_EXECUTOR_PREEMPTED -> REASON_CONTAINER_PREEMPTED
+  * REASON_MEM_LIMIT -> REASON_CONTAINER_LIMITATION_MEMORY
+  * REASON_EXECUTOR_PREEMPTED -> REASON_CONTAINER_PREEMPTED
 
-**NOTE** The `Credential` protobuf has been changed. `Credential` field `secret` is now a string, it used to be bytes. This will affect framework developers and language bindings ought to update their generated protobuf with the new version. This fixes JSON based credentials file support.
+* The `Credential` protobuf has been changed: the `secret` field is now a string rather than bytes. This will affect framework developers, and language bindings ought to update their generated protobuf with the new version. This fixes JSON-based credentials file support.
 
-**NOTE** The `/state` endpoints on master and slave will no longer include `data` fields as part of the JSON models for `ExecutorInfo` and `TaskInfo` out of consideration for memory scalability (see [MESOS-3794](https://issues.apache.org/jira/browse/MESOS-3794) and [this email thread](http://www.mail-archive.com/dev@mesos.apache.org/msg33536.html)).
-On master, the affected `data` field was originally found via `frameworks[*].executors[*].data`.
-On slaves, the affected `data` field was originally found via `executors[*].tasks[*].data`.
+* The `/state` endpoints on master and slave will no longer include `data` fields as part of the JSON models for `ExecutorInfo` and `TaskInfo` out of consideration for memory scalability (see [MESOS-3794](https://issues.apache.org/jira/browse/MESOS-3794) and [this email thread](http://www.mail-archive.com/dev@mesos.apache.org/msg33536.html)).
+  * On master, the affected `data` field was originally found via `frameworks[*].executors[*].data`.
+  * On slaves, the affected `data` field was originally found via `executors[*].tasks[*].data`.
 
-**NOTE** The `NetworkInfo` protobuf has been changed. The fields `protocol` and `ip_address` are now deprecated. The new field `ip_addresses` subsumes the information provided by them.
+* The `NetworkInfo` protobuf has been changed. The fields `protocol` and `ip_address` are now deprecated. The new field `ip_addresses` subsumes the information provided by them.
 
 ## Upgrading from 0.24.x to 0.25.x
 
-**NOTE** The following endpoints will be deprecated in favor of new endpoints. Both versions will be available in 0.25 but the deprecated endpoints will be removed in a subsequent release.
+* The following endpoints will be deprecated in favor of new endpoints. Both versions will be available in 0.25 but the deprecated endpoints will be removed in a subsequent release.
 
-For master endpoints:
+  For master endpoints:
 
-* /state.json becomes /state
-* /tasks.json becomes /tasks
+  * /state.json becomes /state
+  * /tasks.json becomes /tasks
 
-For slave endpoints:
+  For slave endpoints:
 
-* /state.json becomes /state
-* /monitor/statistics.json becomes /monitor/statistics
+  * /state.json becomes /state
+  * /monitor/statistics.json becomes /monitor/statistics
 
-For both master and slave:
+  For both master and slave:
 
-* /files/browse.json becomes /files/browse
-* /files/debug.json becomes /files/debug
-* /files/download.json becomes /files/download
-* /files/read.json becomes /files/read
+  * /files/browse.json becomes /files/browse
+  * /files/debug.json becomes /files/debug
+  * /files/download.json becomes /files/download
+  * /files/read.json becomes /files/read
 
-**NOTE** The C++/Java/Python scheduler bindings have been updated. In particular, the driver can make a suppressOffers() call to stop receiving offers (until reviveOffers() is called).
+* The C++/Java/Python scheduler bindings have been updated. In particular, the driver can make a suppressOffers() call to stop receiving offers (until reviveOffers() is called).
 
 In order to upgrade a running cluster:
 
-* Rebuild and install any modules so that upgraded masters/slaves can use them.
-* Install the new master binaries and restart the masters.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the schedulers by linking the latest native library / jar / egg (if necessary).
-* Restart the schedulers.
-* Upgrade the executors by linking the latest native library / jar / egg (if necessary).
+1. Rebuild and install any modules so that upgraded masters/slaves can use them.
+2. Install the new master binaries and restart the masters.
+3. Install the new slave binaries and restart the slaves.
+4. Upgrade the schedulers by linking the latest native library / jar / egg (if necessary).
+5. Restart the schedulers.
+6. Upgrade the executors by linking the latest native library / jar / egg (if necessary).
 
 
 ## Upgrading from 0.23.x to 0.24.x
 
-**NOTE** Support for live upgrading a driver based scheduler to HTTP based (experimental) scheduler has been added.
+* Support for live upgrading a driver-based scheduler to the (experimental) HTTP-based scheduler has been added.
 
-**NOTE** Master now publishes its information in ZooKeeper in JSON (instead of protobuf). Make sure schedulers are linked against >= 0.23.0 libmesos before upgrading the master.
+* Master now publishes its information in ZooKeeper in JSON (instead of protobuf). Make sure schedulers are linked against >= 0.23.0 libmesos before upgrading the master.
 
 In order to upgrade a running cluster:
 
-* Rebuild and install any modules so that upgraded masters/slaves can use them.
-* Install the new master binaries and restart the masters.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the schedulers by linking the latest native library / jar / egg (if necessary).
-* Restart the schedulers.
-* Upgrade the executors by linking the latest native library / jar / egg (if necessary).
+1. Rebuild and install any modules so that upgraded masters/slaves can use them.
+2. Install the new master binaries and restart the masters.
+3. Install the new slave binaries and restart the slaves.
+4. Upgrade the schedulers by linking the latest native library / jar / egg (if necessary).
+5. Restart the schedulers.
+6. Upgrade the executors by linking the latest native library / jar / egg (if necessary).
 
 
 ## Upgrading from 0.22.x to 0.23.x
 
-**NOTE** The 'stats.json' endpoints for masters and slaves have been removed. Please use the 'metrics/snapshot' endpoints instead.
+* The 'stats.json' endpoints for masters and slaves have been removed. Please use the 'metrics/snapshot' endpoints instead.
 
-**NOTE** The '/master/shutdown' endpoint is deprecated in favor of the new '/master/teardown' endpoint.
+* The '/master/shutdown' endpoint is deprecated in favor of the new '/master/teardown' endpoint.
 
-**NOTE** In order to enable decorator modules to remove metadata (environment variables or labels), we changed the meaning of the return value for decorator hooks in Mesos 0.23.0. Please refer to the modules documentation for more details.
+* In order to enable decorator modules to remove metadata (environment variables or labels), we changed the meaning of the return value for decorator hooks in Mesos 0.23.0. Please refer to the modules documentation for more details.
 
-**NOTE** Slave ping timeouts are now configurable on the master via `--slave_ping_timeout` and `--max_slave_ping_timeouts`. Slaves should be upgraded to 0.23.x before changing these flags.
+* Slave ping timeouts are now configurable on the master via `--slave_ping_timeout` and `--max_slave_ping_timeouts`. Slaves should be upgraded to 0.23.x before changing these flags.
 
-**NOTE** A new scheduler driver API, `acceptOffers`, has been introduced. This is a more general version of the `launchTasks` API, which allows the scheduler to accept an offer and specify a list of operations (Offer.Operation) to perform using the resources in the offer. Currently, the supported operations include LAUNCH (launching tasks), RESERVE (making dynamic reservations), UNRESERVE (releasing dynamic reservations), CREATE (creating persistent volumes) and DESTROY (releasing persistent volumes). Similar to the `launchTasks` API, any unused resources will be considered declined, and the specified filters will be applied on all unused resources.
+* A new scheduler driver API, `acceptOffers`, has been introduced. This is a more general version of the `launchTasks` API, which allows the scheduler to accept an offer and specify a list of operations (Offer.Operation) to perform using the resources in the offer. Currently, the supported operations include LAUNCH (launching tasks), RESERVE (making dynamic reservations), UNRESERVE (releasing dynamic reservations), CREATE (creating persistent volumes) and DESTROY (releasing persistent volumes). Similar to the `launchTasks` API, any unused resources will be considered declined, and the specified filters will be applied on all unused resources.
 
-**NOTE** The Resource protobuf has been extended to include more metadata for supporting persistence (DiskInfo), dynamic reservations (ReservationInfo) and oversubscription (RevocableInfo). You must not combine two Resource objects if they have different metadata.
+* The Resource protobuf has been extended to include more metadata for supporting persistence (DiskInfo), dynamic reservations (ReservationInfo) and oversubscription (RevocableInfo). You must not combine two Resource objects if they have different metadata.
 
 In order to upgrade a running cluster:
 
-* Rebuild and install any modules so that upgraded masters/slaves can use them.
-* Install the new master binaries and restart the masters.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the schedulers by linking the latest native library / jar / egg (if necessary).
-* Restart the schedulers.
-* Upgrade the executors by linking the latest native library / jar / egg (if necessary).
+1. Rebuild and install any modules so that upgraded masters/slaves can use them.
+2. Install the new master binaries and restart the masters.
+3. Install the new slave binaries and restart the slaves.
+4. Upgrade the schedulers by linking the latest native library / jar / egg (if necessary).
+5. Restart the schedulers.
+6. Upgrade the executors by linking the latest native library / jar / egg (if necessary).
 
 
 ## Upgrading from 0.21.x to 0.22.x
 
-**NOTE** Slave checkpoint flag has been removed as it will be enabled for all
+* Slave checkpoint flag has been removed as it will be enabled for all
 slaves. Frameworks must still enable checkpointing during registration to take advantage
 of checkpointing their tasks.
 
-**NOTE** The stats.json endpoints for masters and slaves have been deprecated.
+* The stats.json endpoints for masters and slaves have been deprecated.
 Please refer to the metrics/snapshot endpoint.
 
-**NOTE** The C++/Java/Python scheduler bindings have been updated. In particular, the driver can be constructed with an additional argument that specifies whether to use implicit driver acknowledgements. In `statusUpdate`, the `TaskStatus` now includes a UUID to make explicit acknowledgements possible.
+* The C++/Java/Python scheduler bindings have been updated. In particular, the driver can be constructed with an additional argument that specifies whether to use implicit driver acknowledgements. In `statusUpdate`, the `TaskStatus` now includes a UUID to make explicit acknowledgements possible.
 
-**NOTE**: The Authentication API has changed slightly in this release to support additional authentication mechanisms. The change from 'string' to 'bytes' for AuthenticationStartMessage.data has no impact on C++ or the over-the-wire representation, so it only impacts pure language bindings for languages like Java and Python that use different types for UTF-8 strings vs. byte arrays.
+* The Authentication API has changed slightly in this release to support additional authentication mechanisms. The change from 'string' to 'bytes' for AuthenticationStartMessage.data has no impact on C++ or the over-the-wire representation, so it only impacts pure language bindings for languages like Java and Python that use different types for UTF-8 strings vs. byte arrays.
 
     message AuthenticationStartMessage {
       required string mechanism = 1;
@@ -134,60 +134,60 @@ Please refer to the metrics/snapshot endpoint.
     }
 
 
-**NOTE** All Mesos arguments can now be passed using file:// to read them out of a file (either an absolute or relative path). The --credentials, --whitelist, and any flags that expect JSON backed arguments (such as --modules) behave as before, although support for just passing an absolute path for any JSON flags rather than file:// has been deprecated and will produce a warning (and the absolute path behavior will be removed in a future release).
+* All Mesos arguments can now be passed using file:// to read them out of a file (either an absolute or relative path). The --credentials, --whitelist, and any flags that expect JSON backed arguments (such as --modules) behave as before, although support for just passing an absolute path for any JSON flags rather than file:// has been deprecated and will produce a warning (and the absolute path behavior will be removed in a future release).
 
 In order to upgrade a running cluster:
 
-* Install the new master binaries and restart the masters.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the schedulers:
-  * For Java schedulers, link the new native library against the new JAR. The JAR contains API changes per the **NOTE** above. A 0.21.0 JAR will work with a 0.22.0 libmesos. A 0.22.0 JAR will work with a 0.21.0 libmesos if explicit acks are not being used. 0.22.0 and 0.21.0 are inter-operable at the protocol level between the master and the scheduler.
+1. Install the new master binaries and restart the masters.
+2. Install the new slave binaries and restart the slaves.
+3. Upgrade the schedulers:
+  * For Java schedulers, link the new native library against the new JAR. The JAR contains the API changes mentioned above. A 0.21.0 JAR will work with a 0.22.0 libmesos. A 0.22.0 JAR will work with a 0.21.0 libmesos if explicit acks are not being used. 0.22.0 and 0.21.0 are inter-operable at the protocol level between the master and the scheduler.
   * For Python schedulers, upgrade to use a 0.22.0 egg. If constructing `MesosSchedulerDriverImpl` with `Credentials`, your code must be updated to pass the `implicitAcknowledgements` argument before `Credentials`. You may run a 0.21.0 Python scheduler against a 0.22.0 master, and vice versa.
-* Restart the schedulers.
-* Upgrade the executors by linking the latest native library / jar / egg.
+4. Restart the schedulers.
+5. Upgrade the executors by linking the latest native library / jar / egg.
 
 
 ## Upgrading from 0.20.x to 0.21.x
 
-**NOTE** Disabling slave checkpointing has been deprecated; the slave --checkpoint flag has been deprecated and will be removed in a future release.
+* Disabling slave checkpointing has been deprecated; the slave --checkpoint flag has been deprecated and will be removed in a future release.
 
 In order to upgrade a running cluster:
 
-* Install the new master binaries and restart the masters.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the schedulers by linking the latest native library (mesos jar upgrade not necessary).
-* Restart the schedulers.
-* Upgrade the executors by linking the latest native library and mesos jar (if necessary).
+1. Install the new master binaries and restart the masters.
+2. Install the new slave binaries and restart the slaves.
+3. Upgrade the schedulers by linking the latest native library (mesos jar upgrade not necessary).
+4. Restart the schedulers.
+5. Upgrade the executors by linking the latest native library and mesos jar (if necessary).
 
 
 ## Upgrading from 0.19.x to 0.20.x.
 
-**NOTE**: The Mesos API has been changed slightly in this release. The CommandInfo has been changed (see below), which makes launching a command more flexible. The 'value' field has been changed from _required_ to _optional_. However, it will not cause any issue during the upgrade (since the existing schedulers always set this field).
-
-    message CommandInfo {
-      ...
-      // There are two ways to specify the command:
-      // 1) If 'shell == true', the command will be launched via shell
-      //    (i.e., /bin/sh -c 'value'). The 'value' specified will be
-      //    treated as the shell command. The 'arguments' will be ignored.
-      // 2) If 'shell == false', the command will be launched by passing
-      //    arguments to an executable. The 'value' specified will be
-      //    treated as the filename of the executable. The 'arguments'
-      //    will be treated as the arguments to the executable. This is
-      //    similar to how POSIX exec families launch processes (i.e.,
-      //    execlp(value, arguments(0), arguments(1), ...)).
-      optional bool shell = 6 [default = true];
-      optional string value = 3;
-      repeated string arguments = 7;
-      ...
-    }
-
-**NOTE**: The Python bindings are also changing in this release. There are now sub-modules which allow you to use either the interfaces and/or the native driver.
-
-* `import mesos.native` for the native drivers
-* `import mesos.interface` for the stub implementations and protobufs
-
-To ensure a smooth upgrade, we recommend to upgrade your python framework and executor first. You will be able to either import using the new configuration or the old. Replace the existing imports with something like the following:
+* The Mesos API has been changed slightly in this release. The CommandInfo has been changed (see below), which makes launching a command more flexible. The 'value' field has been changed from _required_ to _optional_. However, it will not cause any issue during the upgrade (since the existing schedulers always set this field).
+
+        message CommandInfo {
+          ...
+          // There are two ways to specify the command:
+          // 1) If 'shell == true', the command will be launched via shell
+          //    (i.e., /bin/sh -c 'value'). The 'value' specified will be
+          //    treated as the shell command. The 'arguments' will be ignored.
+          // 2) If 'shell == false', the command will be launched by passing
+          //    arguments to an executable. The 'value' specified will be
+          //    treated as the filename of the executable. The 'arguments'
+          //    will be treated as the arguments to the executable. This is
+          //    similar to how POSIX exec families launch processes (i.e.,
+          //    execlp(value, arguments(0), arguments(1), ...)).
+          optional bool shell = 6 [default = true];
+          optional string value = 3;
+          repeated string arguments = 7;
+          ...
+        }
+
+* The Python bindings are also changing in this release. There are now sub-modules which allow you to use either the interfaces and/or the native driver.
+
+  * `import mesos.native` for the native drivers
+  * `import mesos.interface` for the stub implementations and protobufs
+
+  To ensure a smooth upgrade, we recommend upgrading your python framework and executor first. You will then be able to import using either the new configuration or the old. Replace the existing imports with something like the following:
 
     try:
         from mesos.native import MesosExecutorDriver, MesosSchedulerDriver
@@ -197,123 +197,128 @@ To ensure a smooth upgrade, we recommend to upgrade your python framework and ex
         from mesos import Executor, MesosExecutorDriver, MesosSchedulerDriver, Scheduler
         import mesos_pb2
 
-**NOTE**: If you're using a pure language binding, please ensure that it sends status update acknowledgements through the master before upgrading.
+* If you're using a pure language binding, please ensure that it sends status update acknowledgements through the master before upgrading.
 
 In order to upgrade a running cluster:
 
-* Install the new master binaries and restart the masters.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the schedulers by linking the latest native library (install the latest mesos jar and python egg if necessary).
-* Restart the schedulers.
-* Upgrade the executors by linking the latest native library (install the latest mesos jar and python egg if necessary).
+1. Install the new master binaries and restart the masters.
+2. Install the new slave binaries and restart the slaves.
+3. Upgrade the schedulers by linking the latest native library (install the latest mesos jar and python egg if necessary).
+4. Restart the schedulers.
+5. Upgrade the executors by linking the latest native library (install the latest mesos jar and python egg if necessary).
 
 ## Upgrading from 0.18.x to 0.19.x.
 
-**NOTE**: There are new required flags on the master (`--work_dir` and `--quorum`) to support the *Registrar* feature, which adds replicated state on the masters.
+* There are new required flags on the master (`--work_dir` and `--quorum`) to support the *Registrar* feature, which adds replicated state on the masters.
 
-**NOTE**: No required upgrade ordering across components.
+* No required upgrade ordering across components.
 
 In order to upgrade a running cluster:
 
-* Install the new master binaries and restart the masters.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the schedulers by linking the latest native library (mesos jar upgrade not necessary).
-* Restart the schedulers.
-* Upgrade the executors by linking the latest native library and mesos jar (if necessary).
+1. Install the new master binaries and restart the masters.
+2. Install the new slave binaries and restart the slaves.
+3. Upgrade the schedulers by linking the latest native library (mesos jar upgrade not necessary).
+4. Restart the schedulers.
+5. Upgrade the executors by linking the latest native library and mesos jar (if necessary).
 
 
 ## Upgrading from 0.17.0 to 0.18.x.
 
-In order to upgrade a running cluster:
+* This upgrade requires a system reboot for slaves that use Linux cgroups for isolation.
 
-**NOTE**: This upgrade requires a system reboot for slaves that use Linux cgroups for isolation.
+In order to upgrade a running cluster:
 
-* Install the new master binaries and restart the masters.
-* Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
-* Restart the schedulers.
-* Install the new slave binaries then perform one of the following two steps, depending on if cgroups isolation is used:
+1. Install the new master binaries and restart the masters.
+2. Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
+3. Restart the schedulers.
+4. Install the new slave binaries then perform one of the following two steps, depending on if cgroups isolation is used:
   * [no cgroups]
-      - Restart the slaves. The "--isolation" flag has changed and "process" has been deprecated in favor of "posix/cpu,posix/mem".
+    - Restart the slaves. The "--isolation" flag has changed and "process" has been deprecated in favor of "posix/cpu,posix/mem".
   * [cgroups]
-      - Change from a single mountpoint for all controllers to separate mountpoints for each controller, e.g., /sys/fs/cgroup/memory/ and /sys/fs/cgroup/cpu/.
-      - The suggested configuration is to mount a tmpfs filesystem to /sys/fs/cgroup and to let the slave mount the required controllers. However, the slave will also use previously mounted controllers if they are appropriately mounted under "--cgroups_hierarchy".
-      - It has been observed that unmounting and remounting of cgroups from the single to separate configuration is unreliable and a reboot into the new configuration is strongly advised. Restart the slaves after reboot.
-      - The "--cgroups_hierarchy" now defaults to "/sys/fs/cgroup". The "--cgroups_root" flag default remains "mesos".
-      -  The "--isolation" flag has changed and "cgroups" has been deprecated in favor of "cgroups/cpu,cgroups/mem".
-      - The "--cgroup_subsystems" flag is no longer required and will be ignored.
-* Upgrade the executors by linking the latest native library and mesos jar (if necessary).
+    - Change from a single mountpoint for all controllers to separate mountpoints for each controller, e.g., /sys/fs/cgroup/memory/ and /sys/fs/cgroup/cpu/.
+    - The suggested configuration is to mount a tmpfs filesystem to /sys/fs/cgroup and to let the slave mount the required controllers. However, the slave will also use previously mounted controllers if they are appropriately mounted under "--cgroups_hierarchy".
+    - It has been observed that unmounting and remounting of cgroups from the single to separate configuration is unreliable and a reboot into the new configuration is strongly advised. Restart the slaves after reboot.
+    - The "--cgroups_hierarchy" now defaults to "/sys/fs/cgroup". The "--cgroups_root" flag default remains "mesos".
+    - The "--isolation" flag has changed and "cgroups" has been deprecated in favor of "cgroups/cpu,cgroups/mem".
+    - The "--cgroup_subsystems" flag is no longer required and will be ignored.
+5. Upgrade the executors by linking the latest native library and mesos jar (if necessary).
 
 
 ## Upgrading from 0.16.0 to 0.17.0.
 
 In order to upgrade a running cluster:
 
-* Install the new master binaries and restart the masters.
-* Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
-* Restart the schedulers.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the executors by linking the latest native library and mesos jar (if necessary).
+1. Install the new master binaries and restart the masters.
+2. Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
+3. Restart the schedulers.
+4. Install the new slave binaries and restart the slaves.
+5. Upgrade the executors by linking the latest native library and mesos jar (if necessary).
 
 
 ## Upgrading from 0.15.0 to 0.16.0.
 
 In order to upgrade a running cluster:
 
-* Install the new master binaries and restart the masters.
-* Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
-* Restart the schedulers.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the executors by linking the latest native library and mesos jar (if necessary).
+1. Install the new master binaries and restart the masters.
+2. Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
+3. Restart the schedulers.
+4. Install the new slave binaries and restart the slaves.
+5. Upgrade the executors by linking the latest native library and mesos jar (if necessary).
 
 
 ## Upgrading from 0.14.0 to 0.15.0.
 
+* Schedulers should implement the new `reconcileTasks` driver method.
+* Schedulers should call the new `MesosSchedulerDriver` constructor that takes `Credential` to authenticate.
+* --authentication=false (default) allows both authenticated and unauthenticated frameworks to register.
+
 In order to upgrade a running cluster:
 
-* Install the new master binaries.
-* Restart the masters with --credentials pointing to credentials of the framework(s).
-* NOTE: --authentication=false (default) allows both authenticated and unauthenticated frameworks to register.
-* Install the new slave binaries and restart the slaves.
-* Upgrade the executors by linking the latest native library and mesos jar (if necessary).
-* Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
-* NOTE: Schedulers should implement the new `reconcileTasks` driver method.
-* Schedulers should call the new `MesosSchedulerDriver` constructor that takes `Credential` to authenticate.
-* Restart the schedulers.
-* Restart the masters with --authentication=true.
-* NOTE: After the restart unauthenticated frameworks *will not* be allowed to register.
+1. Install the new master binaries.
+2. Restart the masters with --credentials pointing to credentials of the framework(s).
+3. Install the new slave binaries and restart the slaves.
+4. Upgrade the executors by linking the latest native library and mesos jar (if necessary).
+5. Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
+6. Restart the schedulers.
+7. Restart the masters with --authentication=true.
+
+NOTE: After the restart, unauthenticated frameworks *will not* be allowed to register.
 
 
 ## Upgrading from 0.13.0 to 0.14.0.
 
+* The /vars endpoint has been removed.
+
 In order to upgrade a running cluster:
 
-* Install the new master binaries and restart the masters.
-* NOTE: /vars endpoint has been removed.
-* Upgrade the executors by linking the latest native library and mesos jar (if necessary).
-* Install the new slave binaries.
-* Restart the slaves after adding --checkpoint flag to enable checkpointing.
-* NOTE: /vars endpoint has been removed.
-* Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
-* Set FrameworkInfo.checkpoint in the scheduler if checkpointing is desired (recommended).
-* Restart the schedulers.
-* Restart the masters (to get rid of the cached FrameworkInfo).
-* Restart the slaves (to get rid of the cached FrameworkInfo).
+1. Install the new master binaries and restart the masters.
+2. Upgrade the executors by linking the latest native library and mesos jar (if necessary).
+3. Install the new slave binaries.
+4. Restart the slaves after adding --checkpoint flag to enable checkpointing.
+5. Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
+6. Set FrameworkInfo.checkpoint in the scheduler if checkpointing is desired (recommended).
+7. Restart the schedulers.
+8. Restart the masters (to get rid of the cached FrameworkInfo).
+9. Restart the slaves (to get rid of the cached FrameworkInfo).
 
 ## Upgrading from 0.12.0 to 0.13.0.
+
+* The cgroups_hierarchy_root slave flag has been renamed to cgroups_hierarchy.
+
 In order to upgrade a running cluster:
 
-* Install the new master binaries and restart the masters.
-* Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
-* Restart the schedulers.
-* Install the new slave binaries.
-* NOTE: cgroups_hierarchy_root slave flag is renamed as cgroups_hierarchy
-* Restart the slaves.
-* Upgrade the executors by linking the latest native library and mesos jar (if necessary).
+1. Install the new master binaries and restart the masters.
+2. Upgrade the schedulers by linking the latest native library and mesos jar (if necessary).
+3. Restart the schedulers.
+4. Install the new slave binaries.
+5. Restart the slaves.
+6. Upgrade the executors by linking the latest native library and mesos jar (if necessary).
 
 ## Upgrading from 0.11.0 to 0.12.0.
-In order to upgrade a running cluster:
 
-* Install the new slave binaries and restart the slaves.
-* Install the new master binaries and restart the masters.
+* If you are a framework developer, you will want to examine the new 'source' field in the ExecutorInfo protobuf. This will allow you to take further advantage of the resource monitoring.
+
+In order to upgrade a running cluster:
 
-If you are a framework developer, you will want to examine the new 'source' field in the ExecutorInfo protobuf. This will allow you to take further advantage of the resource monitoring.
+1. Install the new slave binaries and restart the slaves.
+2. Install the new master binaries and restart the masters.