Posted to commits@aurora.apache.org by wf...@apache.org on 2014/03/03 22:54:09 UTC

git commit: Simplify TOC and anchors in remaining docs, among other niceties.

Repository: incubator-aurora
Updated Branches:
  refs/heads/master 4d618aba8 -> fc3cbb21f


Simplify TOC and anchors in remaining docs, among other niceties.

Reviewed at https://reviews.apache.org/r/18562/


Project: http://git-wip-us.apache.org/repos/asf/incubator-aurora/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-aurora/commit/fc3cbb21
Tree: http://git-wip-us.apache.org/repos/asf/incubator-aurora/tree/fc3cbb21
Diff: http://git-wip-us.apache.org/repos/asf/incubator-aurora/diff/fc3cbb21

Branch: refs/heads/master
Commit: fc3cbb21f489222c676b3dab2935343cdfb0e5a9
Parents: 4d618ab
Author: Bill Farner <wf...@apache.org>
Authored: Mon Mar 3 13:53:03 2014 -0800
Committer: Bill Farner <wf...@apache.org>
Committed: Mon Mar 3 13:53:03 2014 -0800

----------------------------------------------------------------------
 docs/clientcommands.md        | 254 +++++++++++++++++++------------------
 docs/configurationtutorial.md |  97 +++++++-------
 docs/hooks.md                 | 148 ++++++++++-----------
 docs/resourceisolation.md     |  34 +++--
 docs/tutorial.md              | 158 ++++++++++++-----------
 docs/userguide.md             | 141 ++++++++++----------
 6 files changed, 425 insertions(+), 407 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/fc3cbb21/docs/clientcommands.md
----------------------------------------------------------------------
diff --git a/docs/clientcommands.md b/docs/clientcommands.md
index d87f629..adf378c 100644
--- a/docs/clientcommands.md
+++ b/docs/clientcommands.md
@@ -7,28 +7,29 @@ The most up-to-date reference is in the client itself: use the
 functionality. Note that `aurora help open` does not work, due to underlying issues with
 reflection.
 
-- [Aurora Client Commands](#aurora-client-commands)
-    - [Introduction](#introduction)
-    - [Cluster Configuration](#cluster-configuration)
-    - [Job Keys](#job-keys)
-    - [Modifying Aurora Client Commands](#modifying-aurora-client-commands)
-    - [Regular Jobs](#regular-jobs)
-        - [Creating and Running a Job](#creating-and-running-a-job)
-        - [Killing a Job](#killing-a-job)
-        - [Updating a Job](#updating-a-job)
-        - [Renaming a Job](#renaming-a-job)
-        - [Restarting Jobs](#restarting-jobs)
-    - [Cron Jobs](#cron-jobs)
-    - [Comparing Jobs](#comparing-jobs)
-    - [Viewing/Examining Jobs](#viewingexamining-jobs)
-        - [Listing Jobs](#listing-jobs)
-        - [Inspecting a Job](#inspecting-a-job)
-        - [Checking Your Quota](#checking-your-quota)
-        - [Finding a Job on Web UI](#finding-a-job-on-web-ui)
-        - [Getting Job Status](#getting-job-status)
-        - [Opening the Web UI](#opening-the-web-ui)
-        - [SSHing to a Specific Task Machine](#sshing-to-a-specific-task-machine)
-        - [Templating Command Arguments](#templating-command-arguments)
+- [Introduction](#introduction)
+- [Cluster Configuration](#cluster-configuration)
+- [Job Keys](#job-keys)
+- [Modifying Aurora Client Commands](#modifying-aurora-client-commands)
+- [Regular Jobs](#regular-jobs)
+    - [Creating and Running a Job](#creating-and-running-a-job)
+    - [Running a Command On a Running Job](#running-a-command-on-a-running-job)
+    - [Killing a Job](#killing-a-job)
+    - [Updating a Job](#updating-a-job)
+    - [Renaming a Job](#renaming-a-job)
+    - [Restarting Jobs](#restarting-jobs)
+- [Cron Jobs](#cron-jobs)
+- [Comparing Jobs](#comparing-jobs)
+- [Viewing/Examining Jobs](#viewingexamining-jobs)
+    - [Listing Jobs](#listing-jobs)
+    - [Inspecting a Job](#inspecting-a-job)
+    - [Versions](#versions)
+    - [Checking Your Quota](#checking-your-quota)
+    - [Finding a Job on Web UI](#finding-a-job-on-web-ui)
+    - [Getting Job Status](#getting-job-status)
+    - [Opening the Web UI](#opening-the-web-ui)
+    - [SSHing to a Specific Task Machine](#sshing-to-a-specific-task-machine)
+    - [Templating Command Arguments](#templating-command-arguments)
 
 Introduction
 ------------
@@ -69,18 +70,22 @@ environment variable is not set. The second is a user-installed file, located at
 A cluster configuration is formatted as JSON.  The simplest cluster configuration is one that
 communicates with a single (non-leader-elected) scheduler.  For example:
 
-    [{
-      "name": "example",
-      "scheduler_uri": "localhost:55555",
-    }]
+```javascript
+[{
+  "name": "example",
+  "scheduler_uri": "localhost:55555",
+}]
+```
 
 A configuration for a leader-elected scheduler would contain something like:
 
-    [{
-      "name": "example",
-      "zk": "192.168.33.2",
-      "scheduler_zk_path": "/aurora/scheduler"
-    }]
+```javascript
+[{
+  "name": "example",
+  "zk": "192.168.33.2",
+  "scheduler_zk_path": "/aurora/scheduler"
+}]
+```
 
 Job Keys
 --------
@@ -143,17 +148,17 @@ The configuration file may also contain and activate hook definitions.
 
 `create` can take four named parameters:
 
--   `-E NAME=VALUE` Bind a Thermos mustache variable name to a
-    value. Multiple flags specify multiple values. Defaults to `[]`.
--   ` -o, --open_browser` Open a browser window to the scheduler UI Job
-    page after a job changing operation happens. When `False`, the Job
-    URL prints on the console and the user has to copy/paste it
-    manually. Defaults to `False`. Does not work when running in Vagrant.
--   ` -j, --json` If specified, configuration argument is read as a
-    string in JSON format. Defaults to False.
--   ` --wait_until=STATE` Block the client until all the Tasks have
-    transitioned into the requested state. Possible values are: `PENDING`,
-    `RUNNING`, `FINISHED`. Default: `PENDING`
+- `-E NAME=VALUE` Bind a Thermos mustache variable name to a
+  value. Multiple flags specify multiple values. Defaults to `[]`.
+- `-o, --open_browser` Open a browser window to the scheduler UI Job
+  page after a job-changing operation happens. When `False`, the Job
+  URL prints on the console and the user has to copy/paste it
+  manually. Defaults to `False`. Does not work when running in Vagrant.
+- `-j, --json` If specified, the configuration argument is read as a
+  string in JSON format. Defaults to `False`.
+- `--wait_until=STATE` Block the client until all the Tasks have
+  transitioned into the requested state. Possible values are: `PENDING`,
+  `RUNNING`, `FINISHED`. Default: `PENDING`
 
 ### Running a Command On a Running Job
 
@@ -168,14 +173,14 @@ namespaces.
 
 `run` can take three named parameters:
 
--   `-t NUM_THREADS`, `--threads=NUM_THREADS `The number of threads to
-    use, defaulting to `1`.
--   `--user=SSH_USER` ssh as this user instead of the given role value.
-    Defaults to None.
--   `-e, --executor_sandbox`  Run the command in the executor sandbox
-    instead of the Task sandbox. Defaults to False.
+- `-t NUM_THREADS`, `--threads=NUM_THREADS` The number of threads to
+  use, defaulting to `1`.
+- `--user=SSH_USER` `ssh` as this user instead of the given role value.
+  Defaults to `None`.
+- `-e, --executor_sandbox` Run the command in the executor sandbox
+  instead of the Task sandbox. Defaults to `False`.
 
-### <a name="Killing"></a>Killing a Job
+### Killing a Job
 
     aurora kill <job key> <configuration file>
 
@@ -188,14 +193,14 @@ kill command.
 
 `kill` can take two named parameters:
 
--   `-o, --open_browser` Open a browser window to the scheduler UI Job
-    page after a job changing operation happens. When `False`, the Job
-    URL prints on the console and the user has to copy/paste it
-    manually. Defaults to `False`. Does not work when running in Vagrant.
--   `--shards=SHARDS` A list of shard ids to act on. Can either be a
-    comma-separated list (e.g. 0,1,2) or a range (e.g. 0-2) or  any
-    combination of the two (e.g. 0-2,5,7-9). Defaults to acting on all
-    shards.
+- `-o, --open_browser` Open a browser window to the scheduler UI Job
+  page after a job-changing operation happens. When `False`, the Job
+  URL prints on the console and the user has to copy/paste it
+  manually. Defaults to `False`. Does not work when running in Vagrant.
+- `--shards=SHARDS` A list of shard ids to act on. Can be a
+  comma-separated list (e.g. 0,1,2), a range (e.g. 0-2), or any
+  combination of the two (e.g. 0-2,5,7-9). Defaults to acting on all
+  shards.
 
 ### Updating a Job
 
@@ -225,17 +230,16 @@ used to define and activate hooks for `update`.
 
 `update` can take four named parameters:
 
--   `--shards=SHARDS` A list of shard ids to update. Can either be a
-    comma-separated list (e.g. 0,1,2) or a range (e.g. 0-2) or  any
-    combination of the two (e.g. 0-2,5,7-9). If not  set, all shards are
-    acted on. Defaults to None.
--   `-E NAME=VALUE` Binds a Thermos mustache variable name to a value.
-    Use multiple flags to specify multiple values. Defaults to `[]`.
--   `-j, --json` If specified, configuration is read in JSON format.
-    Defaults to `False`.
--   `--updater_health_check_interval_seconds=HEALTH_CHECK_INTERVAL_SECONDS`
-    Time interval between subsequent shard status checks. Defaults to
-    `3`.
+- `--shards=SHARDS` A list of shard ids to update. Can be a
+  comma-separated list (e.g. 0,1,2), a range (e.g. 0-2), or any
+  combination of the two (e.g. 0-2,5,7-9). If not set, all shards are
+  acted on. Defaults to `None`.
+- `-E NAME=VALUE` Binds a Thermos mustache variable name to a value.
+  Use multiple flags to specify multiple values. Defaults to `[]`.
+- `-j, --json` If specified, configuration is read in JSON format.
+  Defaults to `False`.
+- `--updater_health_check_interval_seconds=HEALTH_CHECK_INTERVAL_SECONDS`
+  Time interval between subsequent shard status checks. Defaults to `3`.
 
 ### Renaming a Job
 
@@ -293,30 +297,30 @@ if it contains hook definitions and activations that affect the
 In addition to the required job key argument, there are eight
 `restart` specific optional arguments:
 
--   `--updater_health_check_interval_seconds`: Defaults to `3`, the time
-    interval between subsequent shard status checks.
--   `--shards=SHARDS`: Defaults to None, which restarts all shards.
-    Otherwise, only the specified-by-id shards restart. They can be
-    comma-separated `(0, 8, 9)`, a range `(3-5)` or a
-    combination `(0, 3-5, 8, 9-11)`.
--   `--batch_size`: Defaults to `1`, the number of shards to be started
-    in one iteration. So, for example, for value 3, it tries to restart
-    the first three shards specified by `--shards` simultaneously, then
-    the next three, and so on.
--   `--max_per_shard_failures=MAX_PER_SHARD_FAILURES`: Defaults to `0`,
-    the maximum number of restarts per shard during restart. When
-    exceeded, it increments the total failure count.
--   `--max_total_failures=MAX_TOTAL_FAILURES`: Defaults to `0`, the
-    maximum total number of shard failures tolerated during restart.
--   `-o, --open_browser` Open a browser window to the scheduler UI Job
-    page after a job changing operation happens. When `False`, the Job
-    url prints on the console and the user has to copy/paste it
-    manually. Defaults to `False`. Does not work when running in Vagrant.
--   `--restart_threshold`: Defaults to `60`, the maximum number of
-    seconds before a shard must move into the `RUNNING` state before
-    it's considered a failure.
--   `--watch_secs`: Defaults to `30`, the minimum number of seconds a
-    shard must remain in `RUNNING` state before considered a success.
+- `--updater_health_check_interval_seconds`: Defaults to `3`, the time
+  interval between subsequent shard status checks.
+- `--shards=SHARDS`: Defaults to `None`, which restarts all shards.
+  Otherwise, only the specified-by-id shards restart. They can be
+  comma-separated `(0, 8, 9)`, a range `(3-5)` or a
+  combination `(0, 3-5, 8, 9-11)`.
+- `--batch_size`: Defaults to `1`, the number of shards restarted in
+  each iteration. For example, with a value of 3 it restarts the
+  first three shards specified by `--shards` simultaneously, then
+  the next three, and so on.
+- `--max_per_shard_failures=MAX_PER_SHARD_FAILURES`: Defaults to `0`,
+  the maximum number of restarts per shard during restart. When
+  exceeded, it increments the total failure count.
+- `--max_total_failures=MAX_TOTAL_FAILURES`: Defaults to `0`, the
+  maximum total number of shard failures tolerated during restart.
+- `-o, --open_browser` Open a browser window to the scheduler UI Job
+  page after a job-changing operation happens. When `False`, the Job
+  URL prints on the console and the user has to copy/paste it
+  manually. Defaults to `False`. Does not work when running in Vagrant.
+- `--restart_threshold`: Defaults to `60`, the maximum number of
+  seconds a shard may take to move into the `RUNNING` state before
+  it's considered a failure.
+- `--watch_secs`: Defaults to `30`, the minimum number of seconds a
+  shard must remain in the `RUNNING` state to be considered a success.
 
 Cron Jobs
 ---------
@@ -337,11 +341,11 @@ is determined using `diff`, though you may choose an alternate
 
 There are two named parameters:
 
--   `-E NAME=VALUE` Bind a Thermos mustache variable name to a
-    value. Multiple flags may be used to specify multiple values.
-    Defaults to `[]`.
--   `-j, --json` Read the configuration argument in JSON format.
-    Defaults to `False`.
+- `-E NAME=VALUE` Bind a Thermos mustache variable name to a
+  value. Multiple flags may be used to specify multiple values.
+  Defaults to `[]`.
+- `-j, --json` Read the configuration argument in JSON format.
+  Defaults to `False`.
 
 Viewing/Examining Jobs
 ----------------------
@@ -358,10 +362,10 @@ Lists all Jobs registered with the Aurora scheduler in the named cluster for the
 
 It has two named parameters:
 
--   `--pretty`: Displays job information in prettyprinted format.
-    Defaults to `False`.
--   `-c`, `--show-cron`: Shows cron schedule for jobs. Defaults to
-    `False`. Do not use, as it's not yet implemented.
+- `--pretty`: Displays job information in pretty-printed format.
+  Defaults to `False`.
+- `-c`, `--show-cron`: Shows cron schedule for jobs. Defaults to
+  `False`. Do not use, as it's not yet implemented.
 
 ### Inspecting a Job
 
@@ -371,14 +375,14 @@ It has two named parameters:
 configuration file, and displays the parsed configuration. It has four
 named parameters:
 
--   `--local`: Inspect the configuration that the  `spawn` command would
-    create, defaulting to `False`.
--   `--raw`: Shows the raw configuration. Defaults to `False`.
--   `-j`, `--json`: If specified, configuration is read in JSON format.
-    Defaults to `False`.
--   `-E NAME=VALUE`: Bind a Thermos Mustache variable name to a value.
-    You can use multiple flags to specify multiple values. Defaults
-    to `[]`
+- `--local`: Inspect the configuration that the `spawn` command would
+  create, defaulting to `False`.
+- `--raw`: Shows the raw configuration. Defaults to `False`.
+- `-j`, `--json`: If specified, configuration is read in JSON format.
+  Defaults to `False`.
+- `-E NAME=VALUE`: Bind a Thermos Mustache variable name to a value.
+  You can use multiple flags to specify multiple values. Defaults
+  to `[]`.
 
 ### Versions
 
@@ -398,10 +402,10 @@ cluster.
 When you create a job, part of the output response contains a URL that goes
 to the job's scheduler UI page. For example:
 
-        vagrant@precise64:~$ aurora create example/www-data/prod/hello /vagrant/examples/jobs/hello_world.aurora
-        INFO] Creating job hello
-        INFO] Response from scheduler: OK (message: 1 new tasks pending for job www-data/prod/hello)
-        INFO] Job url: http://precise64:8081/scheduler/www-data/prod/hello
+    vagrant@precise64:~$ aurora create example/www-data/prod/hello /vagrant/examples/jobs/hello_world.aurora
+    INFO] Creating job hello
+    INFO] Response from scheduler: OK (message: 1 new tasks pending for job www-data/prod/hello)
+    INFO] Job url: http://precise64:8081/scheduler/www-data/prod/hello
 
 You can go to the scheduler UI page for this job via `http://precise64:8081/scheduler/www-data/prod/hello`.
 You can go to the overall scheduler UI page by going to the part of that URL that ends at `scheduler`: `http://precise64:8081/scheduler`.
@@ -444,12 +448,12 @@ particular machine.
 
 It can take three named parameters:
 
--     `-e`, `--executor_sandbox`:  Run `ssh` in the executor sandbox
-    instead of the  task sandbox. Defaults to `False`.
--   `--user=SSH_USER`: `ssh` as the given user instead of as the role in
-    the `job_key` argument. Defaults to none.
--   `-L PORT:NAME`: Add tunnel from local port `PORT` to the remote
-    named port  `NAME`. Defaults to `[]`.
+- `-e`, `--executor_sandbox`: Run `ssh` in the executor sandbox
+  instead of the task sandbox. Defaults to `False`.
+- `--user=SSH_USER`: `ssh` as the given user instead of as the role in
+  the `job_key` argument. Defaults to `None`.
+- `-L PORT:NAME`: Add a tunnel from local port `PORT` to the remote
+  named port `NAME`. Defaults to `[]`.
 
 ### Templating Command Arguments
 
@@ -458,12 +462,12 @@ It can take three named parameters:
 Given a job specification, run the supplied command on all hosts and
 return the output. You may use the standard Mustache templating rules:
 
--   {{`thermos.ports[name]`}} substitutes the specific named port of the
-    task assigned to this machine
--   {{`mesos.instance`}} substitutes the shard id of the job's task
-    assigned to this machine
--   {{`thermos.task_id`}} substitutes the task id of the job's task
-    assigned to this machine
+- `{{thermos.ports[name]}}` substitutes the specific named port of the
+  task assigned to this machine
+- `{{mesos.instance}}` substitutes the shard id of the job's task
+  assigned to this machine
+- `{{thermos.task_id}}` substitutes the task id of the job's task
+  assigned to this machine
 
 For example, the following type of pattern can be a powerful diagnostic
 tool:

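As a concrete illustration of the substitution these mustache variables drive, here is a
minimal Python sketch (not the client's actual implementation; the bound port value is
hypothetical):

```python
# A rough model of how a templated `aurora run` command line is filled in;
# the client performs something equivalent for each task it targets.
def substitute(cmdline, bindings):
  # Replace each {{key}} occurrence with its bound value.
  for key, value in bindings.items():
    cmdline = cmdline.replace('{{%s}}' % key, str(value))
  return cmdline

# Hypothetical binding: named port 'http' assigned port 31337 on this task.
print(substitute('curl -s localhost:{{thermos.ports[http]}}/health',
                 {'thermos.ports[http]': 31337}))
# -> curl -s localhost:31337/health
```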
http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/fc3cbb21/docs/configurationtutorial.md
----------------------------------------------------------------------
diff --git a/docs/configurationtutorial.md b/docs/configurationtutorial.md
index 7fe8948..9b27fb6 100644
--- a/docs/configurationtutorial.md
+++ b/docs/configurationtutorial.md
@@ -36,8 +36,8 @@ You should read this after going through the general [Aurora Tutorial](tutorial.
 &nbsp;&nbsp;&nbsp;&nbsp;[Thermos Uses bash, But Thermos Is Not bash](#Bash)
 &nbsp;&nbsp;&nbsp;&nbsp;[Rarely Use Functions In Your Configurations](#Functions)
 
-<a name="Basic"></a>The Basics
-------------------------------
+The Basics
+----------
 
 To run a job on Aurora, you must specify a configuration file that tells
 Aurora what it needs to know to schedule the job, what Mesos needs to
@@ -49,14 +49,14 @@ A configuration file defines a collection of objects, along with parameter
 values for their attributes. An Aurora configuration file contains the
 following three types of objects:
 
--   `Job`
--   `Task`
--   `Process`
+- `Job`
+- `Task`
+- `Process`
 
 A configuration also specifies a list of `Job` objects assigned
 to the variable `jobs`.
 
--   `jobs` (list of defined Jobs to run)
+- `jobs` (list of defined Jobs to run)
 
 The `.aurora` file format is just Python. However, `Job`, `Task`,
 `Process`, and other classes are defined by a type-checked dictionary
@@ -73,7 +73,7 @@ file works like any other Python script.
 [*Aurora+Thermos Configuration Reference*](configurationreference.md)
 has a full reference of all Aurora/Thermos defined Pystachio objects.
 
-### <a name="Bottom"></a> Use Bottom-To-Top Object Ordering
+### Use Bottom-To-Top Object Ordering
 
 A well-structured configuration starts with structural templates (if
 any). Structural templates encapsulate in their attributes all the
@@ -90,8 +90,8 @@ instantiations are typically *UPPER\_SNAKE\_CASED*. `Process`, `Task`,
 and `Job` names are typically *lower\_snake\_cased*. Indentation is typically 2
 spaces.
 
-<a name="Example"></a>An Example Configuration File
----------------------------------------------------
+An Example Configuration File
+-----------------------------
 
 The following is a typical configuration file. Don't worry if there are
 parts you don't understand yet, but you may want to refer back to this
@@ -101,13 +101,12 @@ bound values for the variables.
 
     # --- templates here ---
 	class Profile(Struct):
-	  package_version =      Default(String, 'live')
-	  java_binary =          Default(String,
-	                            '/usr/lib/jvm/java-1.7.0-openjdk/bin/java')
-	  extra_jvm_options =   Default(String, '')
-	  parent_environment =   Default(String, 'prod')
-	  parent_serverset =     Default(String,
-	                           `/foocorp/service/bird/{{parent_environment}}/bird')
+	  package_version = Default(String, 'live')
+	  java_binary = Default(String, '/usr/lib/jvm/java-1.7.0-openjdk/bin/java')
+	  extra_jvm_options = Default(String, '')
+	  parent_environment = Default(String, 'prod')
+	  parent_serverset = Default(String,
+                                 '/foocorp/service/bird/{{parent_environment}}/bird')
 
 	# --- processes here ---
 	main = Process(
@@ -123,7 +122,7 @@ bound values for the variables.
 	  name = 'application',
 	  processes = [
 	    Process(
-	      name = 'fetch',variablesvv
+	      name = 'fetch',
 	      cmdline = 'curl -O
                   https://packages.foocorp.com/{{profile.package_version}}/application.jar'),
 	  ]
@@ -173,7 +172,7 @@ bound values for the variables.
 			.bind(profile = STAGING),
 	]
 
-## <a name="Process"></a> Defining Process Objects
+## Defining Process Objects
 
 Processes are handled by the Thermos system. A process is a single
 executable step run as a part of an Aurora task, which consists of a
@@ -281,7 +280,7 @@ if one isn't specified in the configuration:
     processes may depend upon other finalizing processes and will
     otherwise run as a typical process schedule.
 
-## <a name="Sandbox"></a>Getting Your Code Into The Sandbox
+## Getting Your Code Into The Sandbox
 
 When using Aurora, you need to get your executable code into its "sandbox", specifically
 the Task sandbox where the code executes for the Processes that make up that Task.
@@ -306,7 +305,7 @@ The template for this Process is:
 
 Note: Be sure the extracted code archive has an executable.
 
-## <a name="Task"></a> Defining Task Objects
+## Defining Task Objects
 
 Tasks are handled by Mesos. A task is a collection of processes that
 runs in a shared sandbox. It's the fundamental unit Aurora uses to
@@ -393,7 +392,7 @@ There are four optional Task attributes:
     `SIGKILL`s (or if dependent on yet to be completed Processes, are
     never invoked).
 
-### <a name="Sequential"></a> `SequentialTask`: Running Processes in Parallel or Sequentially
+### SequentialTask: Running Processes in Parallel or Sequentially
 
 By default, a Task with several Processes runs them in parallel. There
 are two ways to run Processes sequentially:
@@ -410,7 +409,7 @@ are two ways to run Processes sequentially:
 
         SequentialTask( ... processes=[process1, process2, process3] ...)
 
-### <a name="Simple"></a> `SimpleTask`
+### SimpleTask
 
 For quickly creating simple tasks, use the `SimpleTask` helper. It
 creates a basic task from a provided name and command line using a
@@ -438,7 +437,7 @@ The simplest idiomatic Job configuration thus becomes:
 When written to `hello_world.aurora`, you invoke it with a simple
 `aurora create cluster1/$USER/test/hello_world hello_world.aurora`.
 
-### <a name="Concat"></a> `Tasks.concat` and `Tasks.combine` (`concat_tasks` and `combine_tasks`)
+### Combining tasks
 
 `Tasks.concat` (synonym: `concat_tasks`) and
 `Tasks.combine` (synonym: `combine_tasks`) merge multiple Task definitions
@@ -514,7 +513,7 @@ the `start_application` Process implicitly relies
 upon `download_interpreter`). Make sure you understand the difference
 between using one or the other.
 
-## <a name="Job"></a> Defining `Job` Objects
+## Defining Job Objects
 
 A job is a group of identical tasks that Aurora can run in a Mesos cluster.
 
@@ -623,7 +622,7 @@ The final three Job attributes each take an object as their value.
     section in the Aurora + Thermos Reference manual on [Specifying
     Scheduling Constraints](configurationreference.md) for more information.
 
-## <a name="jobs"></a> Defining The `jobs` List
+## The jobs List
 
 At the end of your `.aurora` file, you need to specify a list of the
 file's defined Jobs to run in the order listed. For example, the
@@ -631,8 +630,8 @@ following runs first `job1`, then `job2`, then `job3`.
 
     jobs = [job1, job2, job3]
 
-<a name="Templating"></a>Templating
------------------------------------
+Templating
+----------
 
 The `.aurora` file format is just Python. However, `Job`, `Task`,
 `Process`, and other classes are defined by a templating library called
@@ -647,7 +646,7 @@ Reference* without `import` statements - the Aurora config loader
 injects them automatically. Other than that the `.aurora` format
 works like any other Python script.
 
-### <a name="Binding"></a>Templating 1: Binding in Pystachio
+### Templating 1: Binding in Pystachio
 
 Pystachio uses the visually distinctive {{}} to indicate template
 variables. These are often called "mustache variables" after the
@@ -737,7 +736,7 @@ You usually bind simple key to value pairs, but you can also bind three
 other objects: lists, dictionaries, and structurals. These will be
 described in detail later.
 
-### <a name="Structurals"></a> Structurals in Pystachio / Aurora
+### Structurals in Pystachio / Aurora
 
 Most Aurora/Thermos users don't ever (knowingly) interact with `String`,
 `Float`, or `Integer` Pystachio objects directly. Instead they interact
@@ -792,7 +791,7 @@ Similarly a Task level binding is available to that Task and its
 Processes but is *not* visible at the Job level (inheritance is a
 one-way street.)
 
-#### <a name="Mustaches"></a> Mustaches Within Structurals
+#### Mustaches Within Structurals
 
 When you define a `Struct` schema, one powerful, but confusing, feature
 is that all of that structure's attributes are Mustache variables within
@@ -825,9 +824,9 @@ Do not confuse Structural attributes with bound Mustache variables.
 Attributes are implicitly converted to Mustache variables but not vice
 versa.
 
-### <a name="Factories"></a>Templating 2: Structurals Are Factories
+### Templating 2: Structurals Are Factories
 
-#### <a name="Second"></a> A Second Way of Templating
+#### A Second Way of Templating
 
 A second templating method is both as powerful as the aforementioned and
 often confused with it. This method is due to automatic conversion of
@@ -856,14 +855,14 @@ Template creation is a common use for this technique:
     >>> logrotate = Daemon(name = 'logrotate', cmdline = './logrotate conf/logrotate.conf')
     >>> mysql = Daemon(name = 'mysql', cmdline = 'bin/mysqld --safe-mode')
 
-### <a name="AdvancedBinding"></a> Advanced Binding
+### Advanced Binding
 
 As described above, `.bind()` binds simple strings or numbers to
 Mustache variables. In addition to Structural types formed by combining
 atomic types, Pystachio has two container types; `List` and `Map` which
 can also be bound via `.bind()`.
 
-#### <a name="BindSyntax"></a> Bind Syntax
+#### Bind Syntax
 
 The `bind()` function can take Python dictionaries or `kwargs`
 interchangeably (when "`kwargs`" is in a function definition, `kwargs`
@@ -885,7 +884,7 @@ Bindings done "closer" to the object in question take precedence:
               min_duration=5)
     ))
 
-#### <a name="ComplexObjects"></a>Binding Complex Objects
+#### Binding Complex Objects
 
 ##### Lists
 
@@ -904,7 +903,7 @@ Bindings done "closer" to the object in question take precedence:
     >>> String('{{p.cmdline}}').bind(p = Process(cmdline = "echo hello world"))
     String(echo hello world)
 
-### <a name="StructuralBinding"></a> Structural Binding
+### Structural Binding
 
 Use structural templates when binding more than two or three individual
 values at the Job or Task level. For fewer than two or three, standard
@@ -923,7 +922,9 @@ or change the way the dataset is designated.
       dataset = Default(String, 'hdfs://home/aurora/data/{{environment}}')
 
     PRODUCTION = Profile(version = 'live', environment = 'prod')
-    DEVEL = Profile(version = 'latest', environment = 'devel', dataset = 'hdfs://home/aurora/data/test')
+    DEVEL = Profile(version = 'latest',
+                    environment = 'devel',
+                    dataset = 'hdfs://home/aurora/data/test')
     TEST = Profile(version = 'latest', environment = 'test')
 
     JOB_TEMPLATE = Job(
@@ -956,10 +957,10 @@ So rather than a `.bind()` with a half-dozen substituted variables, you
 can bind a single object that has sensible defaults stored in a single
 place.
 
-<a name="Tips"></a> Configuration File Writing Tips And Best Practices
-----------------------------------------------------------------------
+Configuration File Writing Tips And Best Practices
+--------------------------------------------------
 
-### <a name="Few"></a> Use As Few `.aurora` Files As Possible
+### Use As Few .aurora Files As Possible
 
 When creating your `.aurora` configuration, try to keep all versions of
 a particular job within the same `.aurora` file. For example, if you
@@ -970,7 +971,7 @@ Constructs shared across multiple jobs owned by your team (e.g.
 team-level defaults or structural templates) can be split into separate
 `.aurora` files and included via the `include` directive.
 
-### <a name="Boilerplate"></a> Avoid Boilerplate
+### Avoid Boilerplate
 
 If you see repetition or find yourself copy and pasting any parts of
 your configuration, it's likely an opportunity for templating. Take the
@@ -1047,7 +1048,7 @@ section.
       name = 'build_python',
       processes = [download, unpack, build, email]).bind(python = Python(version = "2.7.3"))
 
-### <a name="Bash"></a>Thermos Uses bash, But Thermos Is Not bash
+### Thermos Uses bash, But Thermos Is Not bash
 
 #### Bad
 
@@ -1090,13 +1091,13 @@ linear ordering constraint for you.
 
     stage = Process(
       name = 'stage',
-      cmdline = 'cmdline = 'rcp user@my_machine:my_application . && ' 'unzip app.zip && rm -f app.zip')
+      cmdline = 'rcp user@my_machine:my_application . && unzip app.zip && rm -f app.zip')
 
-      run = Process(name = 'app', cmdline = 'java -jar app.jar')
+    run = Process(name = 'app', cmdline = 'java -jar app.jar')
 
-      run_task = SequentialTask(processes = [stage, run])
+    run_task = SequentialTask(processes = [stage, run])
 
-### <a name="Functions"></a> Rarely Use Functions In Your Configurations
+### Rarely Use Functions In Your Configurations
 
 90% of the time you define a function in a `.aurora` file, you're
 probably Doing It Wrong(TM).
@@ -1136,7 +1137,3 @@ configuration.
       name = 'task_two',
       resources = Resources(cpu = 2.0, ram = 64*MB, disk = 1*GB)
     )
-
-
-
-

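The templating machinery above is easy to try outside a `.aurora` file. A minimal
sketch, assuming the `pystachio` package is installed (the Aurora config loader
normally injects these names automatically); the URL mirrors the fetch example
earlier in the file:

    >>> from pystachio import String
    >>> String('curl -O https://packages.foocorp.com/{{version}}/application.jar').bind(version = 'live')
    String(curl -O https://packages.foocorp.com/live/application.jar)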
http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/fc3cbb21/docs/hooks.md
----------------------------------------------------------------------
diff --git a/docs/hooks.md b/docs/hooks.md
index 8bc00c8..77fb955 100644
--- a/docs/hooks.md
+++ b/docs/hooks.md
@@ -1,20 +1,20 @@
 # Hooks for Aurora Client API
 
-[Introduction](#Introduction)
-[Hook Types](#HookTypes)
-[Execution Order](#ExecutionOrder)
-[Hookable Methods](#HookableMethods)
-[Activating and Using Hooks](#ActivatingUsingHooks)
-[`.aurora` Config File Settings](#auroraConfigFileSettings)
-[Command Line](#CommandLine)
-[Hooks Protocol](#HooksProtocol)
-&nbsp;&nbsp;&nbsp;&nbsp;[`pre_` Methods](#preMethods)
-&nbsp;&nbsp;&nbsp;&nbsp;[`err_` Methods](#errMethods)
-&nbsp;&nbsp;&nbsp;&nbsp;[`post_` Methods](#postMethods)
-[Generic Hooks](#GenericHooks)
-[Hooks Process Checklist](#HooksProcessChecklist)
-
-##<a name="Introduction"></a>Introduction
+- [Introduction](#introduction)
+- [Hook Types](#hook-types)
+- [Execution Order](#execution-order)
+- [Hookable Methods](#hookable-methods)
+- [Activating and Using Hooks](#activating-and-using-hooks)
+- [.aurora Config File Settings](#aurora-config-file-settings)
+- [Command Line](#command-line)
+- [Hooks Protocol](#hooks-protocol)
+  - [pre_ Methods](#pre_-methods)
+  - [err_ Methods](#err_-methods)
+  - [post_ Methods](#post_-methods)
+- [Generic Hooks](#generic-hooks)
+- [Hooks Process Checklist](#hooks-process-checklist)
+
+## Introduction
 
 You can execute hook methods around Aurora API Client methods when they are called by the Aurora Command Line commands.
 
@@ -24,7 +24,7 @@ The catch is that hooks are associated with Aurora Client API methods, which use
 
 **Terminology Note**: From now on, "method(s)" refer to Client API methods, and "command(s)" refer to Command Line commands.
 
-##<a name="HookTypes"></a>Hook Types
+## Hook Types
 
 Hooks have three basic types, differing by when they run with respect to their associated method.
 
@@ -36,23 +36,25 @@ consider whether or not to error-trap them. You can error trap at the
 highest level very generally and always pass the `pre_` hook by
 returning `True`. For example:
 
-    def pre_create(...):
-        do_something()  # if do_something fails with an exception, the create_job is not attempted!
-        return True
+```python
+def pre_create(...):
+  do_something()  # if do_something fails with an exception, the create_job is not attempted!
+  return True
 
-    # However...
-    def pre_create(...):
-        try:
-          do_something()  # may cause exception
-        except Exception:  # generic error trap will catch it
-          pass  # and ignore the exception
-        return True  # create_job will run in any case!
+# However...
+def pre_create(...):
+  try:
+    do_something()  # may cause exception
+  except Exception:  # generic error trap will catch it
+    pass  # and ignore the exception
+  return True  # create_job will run in any case!
+```
 
 `post_<method_name>`: A `post_` hook executes after its associated method successfully finishes running. If it fails, the already executed method is unaffected. A `post_` hook's error is trapped, and any later operations are unaffected.
 
 `err_<method_name>`: Executes only when its associated method returns a status other than OK or throws an exception. If an `err_` hook fails, the already executed method is unaffected. An `err_` hook's error is trapped, and any later operations are unaffected.
 
-##<a name="ExecutionOrder"></a>Execution Order
+## Execution Order
 
 A command with `pre_`, `post_`, and `err_` hooks defined and activated for its called method executes in the following order when the method successfully executes:
 
@@ -78,7 +80,7 @@ The following is what happens when, for the same command and hooks, the method a
 
 Note that the `post_` and `err_` hooks for the same method can never both run for a single execution of that method.
 
-##<a name="HookableMethods"></a>Hookable Methods
+## Hookable Methods
 
 You can associate `pre_`, `post_`, and `err_` hooks with the following methods. Since you do not directly interact with the methods, but rather the Aurora Command Line commands that call them, for each method we also list the command(s) that can call the method. Note that a different method or methods may be called by a command depending on how the command's other code executes. Similarly, multiple commands can call the same method. We also list the methods' argument signatures, which are used by their associated hooks. <a name="Chart"></a>
 
@@ -124,17 +126,15 @@ Some specific examples:
 
 * `err_kill_job` executes when the `kill_job` method is called, but doesn't successfully finish running.
 
-##<a name="ActivatingUsingHooks"></a>Activating and Using Hooks
+## Activating and Using Hooks
 
 By default, hooks are inactive. If you do not want to use hooks, you do not need to make any changes to your code. If you do want to use hooks, you will need to alter your `.aurora` config file to activate them both for the configuration as a whole as well as for individual `Job`s. And, of course, you will need to define in your config file what happens when a particular hook executes.
 
-##<a name="auroraConfigFileSettings"></a>`.aurora` Config File Settings
+## .aurora Config File Settings
 
 You can define a top-level `hooks` variable in any `.aurora` config file. `hooks` is a list of all objects that define hooks used by `Job`s defined in that config file. If you do not want to define any hooks for a configuration, `hooks` is optional.
 
-    hooks = [  Object_with_defined_hooks1,
-               Object_with_defined_hooks2
-    ]
+    hooks = [Object_with_defined_hooks1, Object_with_defined_hooks2]
 
 Be careful when assembling a config file using `include` on multiple smaller config files. If there are multiple files that assign a value to `hooks`, only the last assignment made will stick. For example, if `x.aurora` has `hooks = [a, b, c]` and `y.aurora` has `hooks = [d, e, f]` and `z.aurora` has, in this order, `include x.aurora` and `include y.aurora`, the `hooks` value will be `[d, e, f]`.
 
@@ -144,16 +144,16 @@ To summarize, to use hooks for a particular job, you must both activate hooks fo
 
 Recall that `.aurora` config files are written in Pystachio. So the following turns on hooks for production jobs at cluster1 and cluster2, but leaves them off for similar jobs with a defined user role. Of course, you also need to list the objects that define the hooks in your config file's `hooks` variable.
 
-    jobs = [
-            Job(enable_hooks = True, cluster = c, env = 'prod')
-            for c in ('cluster1', 'cluster2')
-           ]
-    jobs.extend(
-       Job(cluster = c, env = 'prod', role = getpass.getuser())
-       for c in ('cluster1', 'cluster2')
-       # Hooks disabled for these jobs
+```python
+import getpass
+jobs = [Job(enable_hooks = True, cluster = c, env = 'prod')
+        for c in ('cluster1', 'cluster2')]
+# Hooks disabled for these jobs:
+jobs.extend(Job(cluster = c, env = 'prod', role = getpass.getuser())
+            for c in ('cluster1', 'cluster2'))
+```
 
-##<a name="CommandLine"></a>Command Line
+## Command Line
 
 All Aurora Command Line commands now accept an `.aurora` config file as an optional parameter (some, of course, accept it as a required parameter). Whenever a command has a `.aurora` file parameter, any hooks specified and activated in the `.aurora` file can be used. For example:
 
@@ -163,7 +163,7 @@ The command activates any hooks specified and activated in `myapp.aurora`. For t
 
     aurora restart cluster1/role/env/app
 
-##<a name="HooksProtocol"></a>Hooks Protocol
+## Hooks Protocol
 
 Any object defined in the `.aurora` config file can define hook methods. You should define your hook methods within a class, and then use the class name as a value in the `hooks` list in your config file.
 
@@ -171,17 +171,19 @@ Note that you can define other methods in the class that its hook methods can ca
 
 The following example defines a class containing a `pre_kill_job` hook definition that calls another method defined in the class.
 
-    # An example of a hook that defines a method pre_kill_job
-    class KillConfirmer(object):
-        def confirm(self, msg):
-            return True if raw_input(msg).lower() == 'yes' else False
+```python
+# Defines a method pre_kill_job
+class KillConfirmer(object):
+  def confirm(self, msg):
+    return raw_input(msg).lower() == 'yes'
 
-        def pre_kill_job(self, job_key, shards=None):
-            shards = ('shards %s' % shards) if shards is not None else 'all shards'
-            return self.confirm('Are you sure you want to kill %s (%s)? (yes/no): '
-                                % (job_key, shards))
+  def pre_kill_job(self, job_key, shards=None):
+    shards = ('shards %s' % shards) if shards is not None else 'all shards'
+    return self.confirm('Are you sure you want to kill %s (%s)? (yes/no): '
+                        % (job_key, shards))
+```
 
-###<a name="preMethods"></a>`pre_` Methods
+### pre_ Methods
 
 `pre_` methods have the signature:
 
@@ -191,7 +193,7 @@ The following example defines a class containing a `pre_kill_job` hook definitio
 
 If this method returns False, the API command call aborts.
 
-###<a name="errMethods"></a>err_ Methods
+### err_ Methods
 
 `err_` methods have the signature:
 
@@ -201,21 +203,23 @@ If this method returns False, the API command call aborts.
 
 `err_` method return codes are ignored.
 
-###<a name="postMethods"></a>`post_` Methods
+### post_ Methods
 
 `post_` methods have the signature:
 
-    post_<API method name>(self, result, <associated method's signature>)
+    post_<API method name>(self, result, <associated method signature>)
 
 `post_` method parameters are `self`, then `result`, followed by the same parameter signature as their associated method. `result` is the result of the associated method call. See the [chart](#chart) above for the mapping of parameters to methods. When writing `post_` methods, you can use the `*` and `**` syntax to designate that all unspecified arguments are passed in a list to the `*`ed parameter and all unspecified named arguments with values are passed as name/value pairs to the `**`ed parameter.
 
 `post_` method return codes are ignored.
 
-##<a name="GenericHooks"></a>Generic Hooks
+## Generic Hooks
 
 There are five Aurora API Methods which any of the three hook types can attach to. Thus, there are 15 possible hook/method combinations for a single `.aurora` config file. Say that you define `pre_` and `post_` hooks for the `restart` method. That leaves 13 undefined hook/method combinations; `err_restart` and the 3 `pre_`, `post_`, and `err_` hooks for each of the other 4 hookable methods. You can define what happens when any of these otherwise undefined 13 hooks execute via a generic hook, whose signature is:
 
-    generic_hook(self, hook_config, event, method_name, result_or_err, args*, kw**)
+```python
+generic_hook(self, hook_config, event, method_name, result_or_err, *args, **kw)
+```
 
 where:
 
@@ -233,27 +237,29 @@ where:
 
 Example:
 
-    # An example of a hook that overrides the standard do-nothing generic_hook
-    # by adding a log writing operation.
+```python
+# Overrides the standard do-nothing generic_hook with a log-writing operation.
+from twitter.common import log
+class Logger(object):
+  '''Adds to the log every time a hookable API method is called.'''
+  def generic_hook(self, hook_config, event, method_name, result_or_err, *args, **kw):
+    log.info('%s: %s_%s of %s'
+             % (self.__class__.__name__, event, method_name, hook_config.job_key))
+```
 
-    from twitter.common import log
-      class Logger(object):
-        '''Just adds to the log every time a hookable API method is called'''
-        def generic_hook(self, hook_config, event, method_name, result_or_err, *args, **kw)
-          log.info('%s: %s_%s of %s' % (self.__class__.__name__, event, method_name, hook_config.job_key))
-
-##<a name="HooksProcessChecklist"></a>Hooks Process Checklist
+## Hooks Process Checklist
 
 1. In your `.aurora` config file, add a `hooks` variable. Note that you may want to define a `.aurora` file only for hook definitions and then include this file in multiple other config files that you want to use the same hooks.
 
-    `hooks = [ ]`
+```python
+hooks = []
+```
 
 2. In the `hooks` variable, list all objects that define hooks used by `Job`s defined in this config:
 
-         hooks = [ Object_hook_definer1,
-                   Object_hook_definer2
-                 ]
-
+```python
+hooks = [Object_hook_definer1, Object_hook_definer2]
+```
 
 3. For each job that uses hooks in this config file, add `enable_hooks = True` to the `Job` definition. Note that this is necessary even if you only want to use the generic hook.
 

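Pulling the checklist together, a minimal sketch of a hooks-enabled `.aurora` config;
the hook class is modeled on the `KillConfirmer` example above, and the `Job` shows
only the hook-related fields (a real `Job` also needs `name`, `role`, `task`, and so
on, and `Job` itself is injected by the config loader):

```python
# Illustrative hook: logs every kill attempt, then lets it proceed.
class KillLogger(object):
  def pre_kill_job(self, job_key, shards=None):
    print('about to kill %s (%s)' % (job_key, shards if shards else 'all shards'))
    return True  # returning False would abort the kill

hooks = [KillLogger]

jobs = [Job(enable_hooks = True, cluster = 'cluster1', env = 'prod')]
```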
http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/fc3cbb21/docs/resourceisolation.md
----------------------------------------------------------------------
diff --git a/docs/resourceisolation.md b/docs/resourceisolation.md
index 0e3335d..7e8d88d 100644
--- a/docs/resourceisolation.md
+++ b/docs/resourceisolation.md
@@ -5,25 +5,23 @@ Resource Isolation and Sizing
 Both user-facing aspects and how it works under the hood are subject to
 change.
 
-[Introduction](#Introduction)
-[CPU Isolation](#CPUisolation)
-[CPU Sizing](#CPUsizing)
-[Memory Isolation](#MemoryIsolation)
-[Memory Sizing](#MemorySizing)
-[Disk Space](#DiskSpace)
-[Disk Space Sizing](#DiskSpaceSizing)
-[Other Resources](#OtherResources)
+- [Introduction](#introduction)
+- [CPU Isolation](#cpu-isolation)
+- [CPU Sizing](#cpu-sizing)
+- [Memory Isolation](#memory-isolation)
+- [Memory Sizing](#memory-sizing)
+- [Disk Space](#disk-space)
+- [Disk Space Sizing](#disk-space-sizing)
+- [Other Resources](#other-resources)
 
-## <a name="Introduction<> Introduction
+## Introduction
 
 Aurora is a multi-tenant system; a single software instance runs on a
 server, serving multiple clients/tenants. To share resources among
 tenants, it implements isolation of:
 
 * CPU
-
 * memory
-
 * disk space
 
 CPU is a soft limit, and handled differently from memory and disk space.
@@ -33,7 +31,7 @@ application goes over these values, it's killed.
 
 Let's look at each resource type in more detail:
 
-## <a name="CPUisolation"></a> CPU Isolation
+## CPU Isolation
 
 Mesos uses a quota based CPU scheduler (the *Completely Fair Scheduler*)
 to provide consistent and predictable performance.  This is effectively
@@ -71,7 +69,7 @@ delay service of requests.
 *Technical Note*: Mesos considers logical cores, also known as
 hyperthreading or SMT cores, as the unit of CPU.
 
-## <a name="CPUsizing"></a> CPU Sizing
+## CPU Sizing
 
 To correctly size Aurora-run Mesos tasks, specify a per-shard CPU value
 that lets the task run at its desired performance when at peak load
@@ -83,7 +81,7 @@ your application, observe its CPU stats over time. If consistently at or
 near your quota during peak load, you should consider increasing either
 per-shard CPU or the number of shards.
 
-## <a name="MemoryIsolation"></a> Memory Isolation
+## Memory Isolation
 
 Mesos uses dedicated memory allocation. Your application always has
 access to the amount of memory specified in your configuration. The
@@ -100,7 +98,7 @@ working.
 so your application can request more than its allocation without getting
 an ENOMEM. However, it will be killed shortly after.
 
-## <a name="MemorySizing"></a> Memory Sizing
+## Memory Sizing
 
 Size for your application's peak requirement. Observe the per-instance
 memory statistics over time, as memory requirements can vary over
@@ -109,7 +107,7 @@ value, it will be killed, so you should also add a safety margin of
 around 10-20%. If you have the ability to do so, you may also want to
 put alerts on the per-instance memory.
 
-## <a name="DiskSpace"></a> Disk Space
+## Disk Space
 
 Disk space used by your application is defined as the sum of the files'
 disk space in your application's directory, including the `stdout` and
@@ -133,7 +131,7 @@ are still available but you shouldn't count on them being so.
 application can write above its quota without getting an ENOSPC, but it
 will be killed shortly after. This is subject to change.
 
-## Disk Space Sizing<a name="DiskSpaceSizing"></a>
+## Disk Space Sizing
 
 Size for your application's peak requirement. Rotate and discard log
 files as needed to stay within your quota. When running a Java process,
@@ -141,7 +139,7 @@ add the maximum size of the Java heap to your disk space requirement, in
 order to account for an out of memory error dumping the heap
 into the application's sandbox space.
 
-## <a name="OtherResources"></a> Other Resources
+## Other Resources
 
 Other resources, such as network bandwidth, do not have any performance
 guarantees. For some resources, such as memory bandwidth, there are no

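Following the sizing guidance above, a back-of-the-envelope Python sketch; every
number is a hypothetical observation, not a recommendation:

```python
peak_rss_mb = 900                  # observed peak per-instance memory use
ram_mb = int(peak_rss_mb * 1.15)   # add a 10-20% safety margin

log_budget_mb = 512                # rotated logs kept within quota
java_heap_mb = 1024                # max Java heap, counted toward disk
disk_mb = log_budget_mb + java_heap_mb  # leaves room for an OOM heap dump

print('request: ram=%dMB disk=%dMB' % (ram_mb, disk_mb))
```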
http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/fc3cbb21/docs/tutorial.md
----------------------------------------------------------------------
diff --git a/docs/tutorial.md b/docs/tutorial.md
index ec60e70..3ccc862 100644
--- a/docs/tutorial.md
+++ b/docs/tutorial.md
@@ -4,25 +4,25 @@ Aurora Tutorial
 Before reading this document, you should read over the (short) [README](README.md)
 for the Aurora docs.
 
--   [Introduction](#Introduction)
--   [Setup: Install Aurora](#Setup)
--   [The Script](#Script)
--   [Aurora Configuration](#Configuration)
--   [What's Going On In That Configuration File?](#What)
--   [Creating the Job](#Creating)
--   [Watching the Job Run](#Watching)
--   [Cleanup](#Cleanup)
--   [Next Steps](#Next)
-
-## <a name="Introduction"></a>Introduction
+- [Introduction](#introduction)
+- [Setup: Install Aurora](#setup-install-aurora)
+- [The Script](#the-script)
+- [Aurora Configuration](#aurora-configuration)
+- [What's Going On In That Configuration File?](#whats-going-on-in-that-configuration-file)
+- [Creating the Job](#creating-the-job)
+- [Watching the Job Run](#watching-the-job-run)
+- [Cleanup](#cleanup)
+- [Next Steps](#next-steps)
+
+## Introduction
 
 This tutorial shows how to use the Aurora scheduler to run (and
 "`printf-debug`") a hello world program on Mesos. The operational
 hierarchy is:
 
--   *Aurora manages and schedules jobs for Mesos to run.*
--   *Mesos manages the individual tasks that make up a job.*
--   *Thermos manages the individual processes that make up a task.*
+- Aurora manages and schedules jobs for Mesos to run.
+- Mesos manages the individual tasks that make up a job.
+- Thermos manages the individual processes that make up a task.
 
 This is the recommended first Aurora users document to read to start
 getting up to speed on the system.
@@ -30,13 +30,13 @@ getting up to speed on the system.
 To get help, email questions to the Aurora Developer List,
 [dev@aurora.incubator.apache.org](mailto:dev@aurora.incubator.apache.org)
 
-## <a name="Setup"></a>Setup: Install Aurora
+## Setup: Install Aurora
 
 You use the Aurora client and web UI to interact with Aurora jobs. To
 install it locally, see [vagrant.md](vagrant.md). The remainder of this
 Tutorial assumes you are running Aurora using Vagrant.
 
-## <a name="Script"></a>The Script
+## The Script
 
 Our "hello world" application is a simple Python script that loops
 forever, displaying the time every few seconds. Copy the code below and
@@ -45,57 +45,61 @@ this directory is the same as `/vagrant` inside the Vagrant VMs).
 
 The script has an intentional bug, which we will explain later on.
 
-    import sys
-    import time
+```python
+import sys
+import time
 
-    def main(argv):
-      SLEEP_DELAY = 10
-      # Python ninjas - ignore this blatant bug.
-      for i in xrang(100):
-        print("Hello world! The time is now: %s. Sleeping for %d secs" % (
-          time.asctime(), SLEEP_DELAY))
-        sys.stdout.flush()
-        time.sleep(SLEEP_DELAY)
+def main(argv):
+  SLEEP_DELAY = 10
+  # Python ninjas - ignore this blatant bug.
+  for i in xrang(100):
+    print("Hello world! The time is now: %s. Sleeping for %d secs" % (
+      time.asctime(), SLEEP_DELAY))
+    sys.stdout.flush()
+    time.sleep(SLEEP_DELAY)
 
-    if __name__ == "__main__":
-      main(sys.argv)
+if __name__ == "__main__":
+  main(sys.argv)
+```
 
-## <a name="Configuration"></a>Aurora Configuration
+## Aurora Configuration
 
 Once we have our script/program, we need to create a *configuration
 file* that tells Aurora how to manage and launch our Job. Save the below
 code in the file `hello_world.aurora` in the same directory as your
-`hello_world.py' file. (all Aurora configuration files end with `.aurora` and
+`hello_world.py` file. (All Aurora configuration files end with `.aurora` and
 are written in a Python variant).
 
-    import os
+```python
+import os
 
-    # copy hello_world.py into the local sandbox
-    install = Process(
-      name = 'fetch_package',
-      cmdline = 'cp /vagrant/hello_world.py . && chmod +x hello_world.py')
+# copy hello_world.py into the local sandbox
+install = Process(
+  name = 'fetch_package',
+  cmdline = 'cp /vagrant/hello_world.py . && chmod +x hello_world.py')
 
-    # run the script
-    hello_world = Process(
-      name = 'hello_world',
-      cmdline = 'python hello_world.py')
+# run the script
+hello_world = Process(
+  name = 'hello_world',
+  cmdline = 'python hello_world.py')
 
-    # describe the task
-    hello_world_task = SequentialTask(
-      processes = [install, hello_world],
-      resources = Resources(cpu = 1, ram = 1*MB, disk=8*MB))
+# describe the task
+hello_world_task = SequentialTask(
+  processes = [install, hello_world],
+  resources = Resources(cpu = 1, ram = 1*MB, disk = 8*MB))
 
-    jobs = [
-      Job(name = 'hello_world', cluster = 'example', role = 'www-data',
-          environment = 'devel', task = hello_world_task)
-    ]
+jobs = [
+  Job(name = 'hello_world', cluster = 'example', role = 'www-data',
+      environment = 'devel', task = hello_world_task)
+]
+```
 
 For more about Aurora configuration files, see the [Configuration
 Tutorial](configurationtutorial.md) and the [Aurora + Thermos
 Reference](configurationreference.md) (preferably after finishing this
 tutorial).
 
-## <a name="What"></a>What's Going On In That Configuration File?
+## What's Going On In That Configuration File?
 
 More than you might think.
 
@@ -111,7 +115,7 @@ specify more than one Job in a config file.
 local sandbox in which it will run. It then specifies how the code is
 actually run once the second Process starts.
 
-## <a name="Creating"></a>Creating the Job
+## Creating the Job
 
 We're ready to launch our job! To do so, we use the Aurora Client to
 issue a Job creation request to the Aurora scheduler.
@@ -136,21 +140,24 @@ Followed by:
 
 You'll see something like:
 
-    [{
-      "name": "example",
-      "zk": "192.168.33.2",
-      "scheduler_zk_path": "/aurora/scheduler",
-      "auth_mechanism": "UNAUTHENTICATED"
-    }]
+```javascript
+[{
+  "name": "example",
+  "zk": "192.168.33.2",
+  "scheduler_zk_path": "/aurora/scheduler",
+  "auth_mechanism": "UNAUTHENTICATED"
+}]
+```
 
-Use a `"name"` value for your job key's cluster value.
+Use one of the `name` values as the cluster part of your job key.
 
 Role names are user accounts existing on the slave machines. If you don't know what accounts
 are available, contact your sysadmin.
 
 Environment names are namespaces; you can count on `prod`, `devel` and `test` existing.
 
-The Aurora Client command that actually runs our Job is `aurora create`. It creates a Job as specified by its job key and configuration file arguments and runs it.
+The Aurora Client command that actually runs our Job is `aurora create`. It creates a Job as
+specified by its job key and configuration file arguments and runs it.
 
     aurora create <cluster>/<role>/<environment>/<jobname> <config_file>
 
@@ -163,19 +170,20 @@ of its code file.
 
 This returns:
 
-    [tw-mbp13 incubator-aurora (master)]$ vagrant ssh aurora-scheduler
+    $ vagrant ssh aurora-scheduler
     Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic x86_64)
 
      * Documentation:  https://help.ubuntu.com/
     Welcome to your Vagrant-built virtual machine.
     Last login: Fri Jan  3 02:18:55 2014 from 10.0.2.2
-    vagrant@precise64:~$ aurora create example/www-data/devel/hello_world /vagrant/hello_world.aurora
+    vagrant@precise64:~$ aurora create example/www-data/devel/hello_world \
+        /vagrant/hello_world.aurora
      INFO] Creating job hello_world
-     INFO] Response from scheduler: OK (message: 1 new tasks pending for job www-data/devel/hello_world)
+     INFO] Response from scheduler: OK (message: 1 new tasks pending for job
+      www-data/devel/hello_world)
      INFO] Job url: http://precise64:8081/scheduler/www-data/devel/hello_world
-    vagrant@precise64:~$
 
-## <a name="Watching"></a>Watching the Job Run
+## Watching the Job Run
 
 Now that our job is running, let's see what it's doing. Access the
 scheduler web interface at `http://$scheduler_hostname:$scheduler_port/scheduler`
@@ -184,7 +192,8 @@ First we see what Jobs are scheduled:
 
 ![Scheduled Jobs](images/ScheduledJobs.png)
 
-Click on your user name, which in this case was `www-data`, and we see the Jobs associated with that role:
+Click on your user name, which in this case is `www-data`, and we see the Jobs associated
+with that role:
 
 ![Role Jobs](images/RoleJobs.png)
 
@@ -208,11 +217,12 @@ to `stderr` on the failed `hello_world` process, we see what happened.
 ![stderr page](images/stderr.png)
 
 It looks like we made a typo in our Python script. We wanted `xrange`,
-not `xrang`. Edit the `hello_world.py` script, save as `hello_world_v2.py` and change your `hello_world.aurora` config file to use `hello_world_v2.py` instead of `hello_world.py`.
+not `xrang`. Edit the `hello_world.py` script, save it as `hello_world_v2.py`, and change your
+`hello_world.aurora` config file to use `hello_world_v2.py` instead of `hello_world.py`.
 
 Now that we've updated our configuration, let's restart the job:
 
-    $ aurora update example/www-data/devel/hello_world /vagrant/hello_world.aurora
+    aurora update example/www-data/devel/hello_world /vagrant/hello_world.aurora
 
 This time, the task comes up, we inspect the page, and see that the
 `hello_world` process is running.
@@ -224,7 +234,7 @@ output:
 
 ![stdout page](images/stdout.png)
 
-## <a name="Cleanup"></a>Cleanup
+## Cleanup
 
 Now that we're done, we kill the job using the Aurora client:
 
@@ -238,14 +248,14 @@ The Task scheduler page now shows the `hello_world` process as `KILLED`.
 
 ![Killed Task page](images/killedtask.png)
 
-## <a name="Next"></a>Next Steps
+## Next Steps
 
 Now that you've finished this Tutorial, you should read or do the following:
 
--   [The Aurora Configuration Tutorial](configurationtutorial.md), which provides more examples
-    and best practices for writing Aurora configurations. You should also look at
-    the [Aurora + Thermos Configuration Reference](configurationreference.md).
--   The [Aurora User Guide](userguide.md) provides an overview of how Aurora, Mesos, and
-    Thermos work "under the hood".
--   Explore the Aurora Client - use the `aurora help` subcommand, and read the
-    [Aurora Client Commands](clientcommands.md) document.
+- [The Aurora Configuration Tutorial](configurationtutorial.md), which provides more examples
+  and best practices for writing Aurora configurations. You should also look at
+  the [Aurora + Thermos Configuration Reference](configurationreference.md).
+- The [Aurora User Guide](userguide.md) provides an overview of how Aurora, Mesos, and
+  Thermos work "under the hood".
+- Explore the Aurora Client - use the `aurora help` subcommand, and read the
+  [Aurora Client Commands](clientcommands.md) document.

http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/fc3cbb21/docs/userguide.md
----------------------------------------------------------------------
diff --git a/docs/userguide.md b/docs/userguide.md
index 2e5af23..2fbafb5 100644
--- a/docs/userguide.md
+++ b/docs/userguide.md
@@ -1,19 +1,20 @@
 Aurora User Guide
 -----------------
-[Overview](#Overview)
-[Aurora Job Lifecycle](#Lifecycle)
-&nbsp;&nbsp;&nbsp;&nbsp;[Life Of A Task](#Life)
-&nbsp;&nbsp;&nbsp;&nbsp;[`PENDING` to `RUNNING` states](#Pending)
-&nbsp;&nbsp;&nbsp;&nbsp;[Task Updates](#Updates)
-&nbsp;&nbsp;&nbsp;&nbsp;[Giving Priority to Production Tasks: `PREEMPTING`](#Giving)
-&nbsp;&nbsp;&nbsp;&nbsp;[Natural Termination: `FINISHED`, `FAILED`](#Natural)
-&nbsp;&nbsp;&nbsp;&nbsp;[Forceful Termination: `KILLING`, `RESTARTING`](#Forceful)
-[Configuration](#Configuration)
-[Creating Aurora Jobs](#Creating)
-[Interacting With Aurora Jobs](#Interacting)
-
-<a name="Overview"></a>Overview
--------------------------------
+
+- [Overview](#overview)
+- [Job Lifecycle](#job-lifecycle)
+  - [Life Of A Task](#life-of-a-task)
+  - [PENDING to RUNNING states](#pending-to-running-states)
+  - [Task Updates](#task-updates)
+  - [Giving Priority to Production Tasks: PREEMPTING](#giving-priority-to-production-tasks-preempting)
+  - [Natural Termination: FINISHED, FAILED](#natural-termination-finished-failed)
+  - [Forceful Termination: KILLING, RESTARTING](#forceful-termination-killing-restarting)
+- [Configuration](#configuration)
+- [Creating Jobs](#creating-jobs)
+- [Interacting With Jobs](#interacting-with-jobs)
+
+Overview
+--------
 
 This document gives an overview of how Aurora works under the hood.
 It assumes you've already worked through the "hello world" example
@@ -26,7 +27,7 @@ cares about individual *tasks*, but typical jobs consist of dozens or
 hundreds of task replicas. Aurora provides a layer on top of Mesos with
 its `Job` abstraction. An Aurora `Job` consists of a task template and
 instructions for creating near-identical replicas of that task (modulo
-things like "shard id" or specific port numbers which may differ from
+things like "instance id" or specific port numbers which may differ from
 machine to machine).
 
 How many tasks make up a Job is complicated. On a basic level, a Job consists of
@@ -44,8 +45,8 @@ configuration file to do that called `new_hello_world.aurora` and
 issue an `aurora update --shards=0-1 <job_key_value> new_hello_world.aurora`
 command.
 
-This results in shards 0 and 1 having 1 cpu, 2 GB of RAM, and 1 GB of disk space,
-while shards 2 and 3 have 1 cpu, 1 GB of RAM, and 1 GB of disk space. If shard 3
+This results in instances 0 and 1 having 1 cpu, 2 GB of RAM, and 1 GB of disk space,
+while instances 2 and 3 have 1 cpu, 1 GB of RAM, and 1 GB of disk space. If instance 3
 dies and restarts, it restarts with 1 cpu, 1 GB RAM, and 1 GB disk space.
 
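+Sketching the change, the resource stanza in `new_hello_world.aurora` might look
+like this (matching the numbers above; the rest of the config is unchanged):
+
+```python
+resources = Resources(cpu = 1, ram = 2*GB, disk = 1*GB)  # previously ram = 1*GB
+```
+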
 So that means there are two simultaneous task configurations for the same Job
@@ -95,8 +96,8 @@ will be around forever, e.g. by building log saving or other
 checkpointing mechanisms directly into your application or into your
 `Job` description.
 
-<a name="Lifecycle"></a>Aurora Job Lifecycle
---------------------------------------------
+Job Lifecycle
+-------------
 
 When Aurora reads a configuration file and finds a `Job` definition, it:
 
@@ -109,11 +110,11 @@ When Aurora reads a configuration file and finds a `Job` definition, it:
 **Note**: It is not currently possible to create an Aurora job from
 within an Aurora job.
 
-### <a name="Life"></a>Life Of A Task
+### Life Of A Task
 
 ![Life of a task](images/lifeofatask.png)
 
-### <a name="Pending"></a>`PENDING` to `RUNNING` states
+### PENDING to RUNNING states
 
 When a `Task` is in the `PENDING` state, the scheduler constantly
 searches for machines satisfying that `Task`'s resource request
@@ -148,12 +149,12 @@ If there is a state mismatch, (e.g. a machine returns from a `netsplit`
 and the scheduler has marked all its `Task`s `LOST` and rescheduled
 them), a state reconciliation process kills the errant `RUNNING` tasks,
 which may take up to an hour. But to emphasize this point: there is no
-uniqueness guarantee for a single shard of a job in the presence of
+uniqueness guarantee for a single instance of a job in the presence of
 network partitions. If the Task requires that, it should be baked in at
 the application level using a distributed coordination service such as
 Zookeeper.
 
-### <a name="Updates"></a>Task Updates
+### Task Updates
 
 `Job` configurations can be updated at any point in their lifecycle.
 Usually updates are done incrementally using a process called a *rolling
@@ -165,16 +166,16 @@ by examining the current job config state and the new desired job config.
 It then starts a rolling batched update process by going through every batch
 and performing these operations:
 
--   If a shard ID is present in the scheduler but isn't in the new config,
-    then that shard is killed.
--   If a shard ID is not present in the scheduler but is present in
-    the new config, then the shard is created.
--   If a shard ID is present in both the scheduler the new config, then
-    the client diffs both task configs. If it detects any changes, it
-    performs a shard update where it kills the old config shard and adds
-    the new config shard.
+- If an instance is present in the scheduler but isn't in the new config,
+  then that instance is killed.
+- If an instance is not present in the scheduler but is present in
+  the new config, then the instance is created.
+- If an instance is present in both the scheduler and the new config, then
+  the client diffs the two task configs. If it detects any changes, it
+  performs an instance update, killing the old config instance and adding
+  the new config instance (see the sketch below).
 
-The Aurora client continues through the shards list until all tasks are
+The Aurora client continues through the instance list until all tasks are
 updated, in `RUNNING`, and healthy for a configurable amount of time.
 If the client determines the update is not going well (a percentage of health
 checks have failed), it cancels the update.
@@ -185,7 +186,7 @@ with old instance configs and batch updates proceed backwards
 from the point where the update failed. E.g., (0,1,2) (3,4,5) (6,7,
 8-FAIL) results in a rollback in order (8,7,6) (5,4,3) (2,1,0).
 
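+In pseudocode, the per-instance decision above looks roughly like this sketch
+(illustrative only; `plan_update` is a hypothetical name, not the client's
+actual code):
+
+```python
+def plan_update(scheduled, desired):
+  """scheduled and desired map instance ids to task configs."""
+  kill = scheduled.keys() - desired.keys()     # gone from the new config
+  create = desired.keys() - scheduled.keys()   # new in the new config
+  update = [i for i in scheduled.keys() & desired.keys()
+            if scheduled[i] != desired[i]]     # changed: kill old, add new
+  return kill, create, update
+```
+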
-### <a name="Giving"></a> Giving Priority to Production Tasks: PREEMPTING
+### Giving Priority to Production Tasks: PREEMPTING
 
 Sometimes a Task needs to be interrupted, such as when a non-production
 Task's resources are needed by a higher priority production Task. This
@@ -193,9 +194,9 @@ type of interruption is called a *pre-emption*. When this happens in
 Aurora, the non-production Task is killed and moved into
 the `PREEMPTING` state when both of the following are true:
 
--   The task being killed is a non-production task.
--   The other task is a `PENDING` production task that hasn't been
-    scheduled due to a lack of resources.
+- The task being killed is a non-production task.
+- The other task is a `PENDING` production task that hasn't been
+  scheduled due to a lack of resources.
 
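+Expressed as a predicate, those two conditions amount to something like the
+following sketch (illustrative only; the field names are hypothetical):
+
+```python
+def may_preempt(running_task, pending_task):
+  # Preempt only a non-production task, and only in favor of a
+  # production task stuck in PENDING for lack of resources.
+  return ((not running_task.production) and
+          pending_task.production and
+          pending_task.state == 'PENDING')
+```
+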
 Since production tasks are much more important, Aurora kills off the
 non-production task to free up resources for the production task.
 At some point, tasks in `PREEMPTING` move to `KILLED`.
 Note that non-production tasks consuming many resources are likely to be
 preempted in favor of production tasks.
 
-### <a name="Natural></a> Natural Termination: `FINISHED`, `FAILED`
+### Natural Termination: FINISHED, FAILED
 
 A `RUNNING` `Task` can terminate without direct user interaction. For
 example, it may be a finite computation that finishes, even something as
@@ -215,7 +216,7 @@ processes have succeeded with exit status `0` or finished without
 reaching failure limits), it moves into `FINISHED` state. If it finished
 after reaching a set of failure limits, it goes into `FAILED` state.
 
-### <a name="Forceful"></a> Forceful Termination: `KILLING`, `RESTARTING`
+### Forceful Termination: KILLING, RESTARTING
 
 You can terminate a `Task` by issuing an `aurora kill` command, which
 moves it into `KILLING` state. The scheduler then sends the slave a
@@ -227,7 +228,7 @@ is forced into the `RESTARTING` state, the scheduler kills the
 underlying task but in parallel schedules an identical replacement for
 it.
 
-<a name="Configuration"></a>Configuration
+Configuration
 -------------
 
 You define and configure your Jobs (and their Tasks and Processes) in
@@ -238,52 +239,54 @@ with specific Aurora, Mesos, and Thermos commands and methods. See the
 [Configuration Guide and Reference](configurationreference.md) and
 [Configuration Tutorial](configurationtutorial.md).
 
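+As a bare-bones illustration of the shape of such a file (the names and values
+here are placeholders, not a recommended configuration):
+
+```python
+main = Process(name = 'main', cmdline = 'echo hello')
+
+main_task = Task(
+  name = 'main_task',
+  processes = [main],
+  resources = Resources(cpu = 0.1, ram = 16*MB, disk = 16*MB))
+
+jobs = [
+  Job(cluster = 'example', role = 'www-data', environment = 'devel',
+      name = 'hello', task = main_task)
+]
+```
+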
-<a name="Creating"></a>Creating Aurora Jobs
--------------------------------------------
+Creating Jobs
+-------------
 
 You create and manipulate Aurora Jobs with the Aurora client, whose
 command-line commands all start with
 `aurora`. See [Aurora Client Commands](clientcommands.md) for details
 about the Aurora Client.
 
-<a name="Interacting"></a>Interacting With Aurora Jobs
-------------------------------------------------------
+Interacting With Jobs
+---------------------
 
 You interact with Aurora jobs either via:
 
--   Read-only Web UIs
+- Read-only Web UIs
+
+  Part of the output from creating a new Job is a URL for the Job's scheduler UI page.
 
-    Part of the output from creating a new Job is a URL for the Job's scheduler UI page.
-For example:
+  For example:
 
-        vagrant@precise64:~$ aurora create example/www-data/prod/hello /vagrant/examples/jobs/hello_world.aurora
-        INFO] Creating job hello
-        INFO] Response from scheduler: OK (message: 1 new tasks pending for job www-data/prod/hello)
-        INFO] Job url: http://precise64:8081/scheduler/www-data/prod/hello
+      vagrant@precise64:~$ aurora create example/www-data/prod/hello \
+          /vagrant/examples/jobs/hello_world.aurora
+      INFO] Creating job hello
+      INFO] Response from scheduler: OK (message: 1 new tasks pending for job www-data/prod/hello)
+      INFO] Job url: http://precise64:8081/scheduler/www-data/prod/hello
 
-    The "Job url" goes to the Job's scheduler UI page. To go to the overall scheduler UI page, stop at the "scheduler" part of the URL, in this case, `http://precise64:8081/scheduler`
+  The "Job url" goes to the Job's scheduler UI page. To go to the overall scheduler UI page,
+  stop at the "scheduler" part of the URL, in this case, `http://precise64:8081/scheduler`
 
-    You can also reach the scheduler UI page via the Client command `aurora open`:
+  You can also reach the scheduler UI page via the Client command `aurora open`:
 
-    > `aurora open [<cluster>[/<role>[/<env>/<job_name>]]]`
+      aurora open [<cluster>[/<role>[/<env>/<job_name>]]]
 
-    If only the cluster is specified, it goes directly to that cluster's
-scheduler main page. If the role is specified, it goes to the top-level
-role page. If the full job key is specified, it goes directly to the job
-page where you can inspect individual tasks.
+  If only the cluster is specified, it goes directly to that cluster's scheduler main page.
+  If the role is specified, it goes to the top-level role page. If the full job key is specified,
+  it goes directly to the job page where you can inspect individual tasks.
 
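+  For example, using the tutorial's `example` cluster and `www-data` role
+  (the values here are illustrative):
+
+      aurora open example                              # cluster scheduler page
+      aurora open example/www-data                     # role page
+      aurora open example/www-data/devel/hello_world   # job page
+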
-    Once you click through to a role page, you see Jobs arranged
-separately by pending jobs, active jobs, and finished jobs.
-Jobs are arranged by role, typically a service account for
-production jobs and user accounts for test or development jobs.
+  Once you click through to a role page, you see Jobs grouped into pending, active, and
+  finished jobs. Jobs are arranged by role, typically a service account for production
+  jobs and user accounts for test or development jobs.
 
--   The Aurora Client's command line interface
+- The Aurora Client's command line interface
 
-    Several Client commands have a `-o` option that automatically opens a window to
-the specified Job's scheduler UI URL. And, as described above, the `open` command also takes
-you there.
+  Several Client commands have a `-o` option that automatically opens a window to
+  the specified Job's scheduler UI URL. And, as described above, the `open` command also takes
+  you there.
 
-    For a complete list of Aurora Client commands, use `aurora help` and, for specific
-command help, `aurora help [command]`. **Note**: `aurora help open`
-returns `"subcommand open not found"` due to our reflection tricks not
-working on words that are also builtin Python function names. Or see the [Aurora Client Commands](clientcommands.md) document.
+  For a complete list of Aurora Client commands, use `aurora help` and, for specific
+  command help, `aurora help [command]`. **Note**: `aurora help open`
+  returns `"subcommand open not found"` due to our reflection tricks not
+  working on words that are also built-in Python function names. Alternatively, see the
+  [Aurora Client Commands](clientcommands.md) document.