You are viewing a plain text version of this content. The canonical link for it is here.
Posted to commits@aurora.apache.org by wf...@apache.org on 2014/04/24 04:37:04 UTC

[2/3] AURORA-48: Rename documentation filenames in associated links to be SEO friendly

http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/94b723b0/docs/configuration-tutorial.md
----------------------------------------------------------------------
diff --git a/docs/configuration-tutorial.md b/docs/configuration-tutorial.md
new file mode 100644
index 0000000..c7d970c
--- /dev/null
+++ b/docs/configuration-tutorial.md
@@ -0,0 +1,1139 @@
+Aurora Configuration Tutorial
+=============================
+
+How to write Aurora configuration files, including feature descriptions
+and best practices. When writing a configuration file, make use of
+`aurora inspect`. It takes the same job key and configuration file
+arguments as `aurora create` or `aurora update`. It first ensures the
+configuration parses, then outputs it in human-readable form.
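+
+For example, a hypothetical invocation (the job key and file name here
+are illustrative):
+
+    aurora inspect cluster1/myteam/prod/application application.aurora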
+
+You should read this after going through the general [Aurora Tutorial](tutorial.md).
+
+- [The Basics](#Basics)
+    - [Use Bottom-To-Top Object Ordering](#Bottom)
+- [An Example Configuration File](#Example)
+- [Defining Process Objects](#Process)
+- [Getting Your Code Into The Sandbox](#Sandbox)
+- [Defining Task Objects](#Task)
+    - [`SequentialTask`](#Sequential)
+    - [`SimpleTask`](#Simple)
+    - [`Tasks.concat` and `Tasks.combine`](#Concat)
+- [Defining `Job` Objects](#Job)
+- [Defining The `jobs` List](#jobs)
+- [Templating](#Templating)
+    - [Templating 1: Binding in Pystachio](#Binding)
+    - [Structurals in Pystachio / Aurora](#Structurals)
+        - [Mustaches Within Structurals](#Mustaches)
+    - [Templating 2: Structurals Are Factories](#Factories)
+        - [A Second Way of Templating](#Second)
+    - [Advanced Binding](#AdvancedBinding)
+        - [Bind Syntax](#BindSyntax)
+        - [Binding Complex Objects](#ComplexObjects)
+    - [Structural Binding](#StructuralBinding)
+- [Configuration File Writing Tips And Best Practices](#Tips)
+    - [Use As Few `.aurora` Files As Possible](#Few)
+    - [Avoid Boilerplate](#Boilerplate)
+    - [Thermos Uses bash, But Thermos Is Not bash](#Bash)
+    - [Rarely Use Functions In Your Configurations](#Functions)
+
+The Basics
+----------
+
+To run a job on Aurora, you must specify a configuration file that tells
+Aurora what it needs to know to schedule the job, what Mesos needs to
+run the tasks the job is made up of, and what Thermos needs to run the
+processes that make up the tasks. This file must have
+a `.aurora` suffix.
+
+A configuration file defines a collection of objects, along with parameter
+values for their attributes. An Aurora configuration file contains the
+following three types of objects:
+
+- Job
+- Task
+- Process
+
+A configuration also specifies a list of `Job` objects assigned
+to the variable `jobs`.
+
+- jobs (list of defined Jobs to run)
+
+The `.aurora` file format is just Python. However, `Job`, `Task`,
+`Process`, and other classes are defined by a type-checked dictionary
+templating library called *Pystachio*, a powerful tool for
+configuration specification and reuse. Pystachio objects are tailored
+via templates surrounded by `{{}}`.
+
+When writing your `.aurora` file, you may use any Pystachio datatypes, as
+well as any objects shown in the [*Aurora+Thermos Configuration
+Reference*](configuration-reference.md), without `import` statements - the
+Aurora config loader injects them automatically. Other than that, an `.aurora`
+file works like any other Python script.
+
+[*Aurora+Thermos Configuration Reference*](configuration-reference.md)
+has a full reference of all Aurora/Thermos defined Pystachio objects.
+
+### Use Bottom-To-Top Object Ordering
+
+A well-structured configuration starts with structural templates (if
+any). Structural templates encapsulate in their attributes all the
+differences between Jobs in the configuration that are not directly
+manipulated at the `Job` level, but typically at the `Process` or `Task`
+level, such as when certain processes are invoked with slightly
+different settings or input.
+
+After structural templates, define, in order, `Process`es, `Task`s, and
+`Job`s.
+
+Structural template names should be *UpperCamelCased* and their
+instantiations are typically *UPPER\_SNAKE\_CASED*. `Process`, `Task`,
+and `Job` names are typically *lower\_snake\_cased*. Indentation is typically 2
+spaces.
+
+An Example Configuration File
+-----------------------------
+
+The following is a typical configuration file. Don't worry if there are
+parts you don't understand yet, but you may want to refer back to this
+as you read about its individual parts. Note that names surrounded by
+curly braces {{}} are template variables, which the system replaces with
+bound values for the variables.
+
+    # --- templates here ---
+    class Profile(Struct):
+      package_version = Default(String, 'live')
+      java_binary = Default(String, '/usr/lib/jvm/java-1.7.0-openjdk/bin/java')
+      extra_jvm_options = Default(String, '')
+      parent_environment = Default(String, 'prod')
+      parent_serverset = Default(String,
+        '/foocorp/service/bird/{{parent_environment}}/bird')
+
+    # --- processes here ---
+    main = Process(
+      name = 'application',
+      cmdline = '{{profile.java_binary}} -server -Xmx1792m '
+                '{{profile.extra_jvm_options}} '
+                '-jar application.jar '
+                '-upstreamService {{profile.parent_serverset}}'
+    )
+
+    # --- tasks ---
+    base_task = SequentialTask(
+      name = 'application',
+      processes = [
+        Process(
+          name = 'fetch',
+          cmdline = 'curl -O '
+                    'https://packages.foocorp.com/{{profile.package_version}}/application.jar'),
+      ]
+    )
+
+    # not always necessary but often useful to have separate task
+    # resource classes
+    staging_task = base_task(resources =
+                     Resources(cpu = 1.0,
+                               ram = 2048*MB,
+                               disk = 1*GB))
+    production_task = base_task(resources =
+                        Resources(cpu = 4.0,
+                                  ram = 2560*MB,
+                                  disk = 10*GB))
+
+    # --- job template ---
+    job_template = Job(
+      name = 'application',
+      role = 'myteam',
+      contact = 'myteam-team@foocorp.com',
+      instances = 20,
+      service = True,
+      task = production_task
+    )
+
+    # -- profile instantiations (if any) ---
+    PRODUCTION = Profile()
+    STAGING = Profile(
+      extra_jvm_options = '-Xloggc:gc.log',
+      parent_environment = 'staging'
+    )
+
+    # -- job instantiations --
+    jobs = [
+      job_template(cluster = 'cluster1', environment = 'prod')
+        .bind(profile = PRODUCTION),
+
+      job_template(cluster = 'cluster2', environment = 'prod')
+        .bind(profile = PRODUCTION),
+
+      job_template(cluster = 'cluster1',
+                   environment = 'staging',
+                   service = False,
+                   task = staging_task,
+                   instances = 2)
+        .bind(profile = STAGING),
+    ]
+
+## Defining Process Objects
+
+Processes are handled by the Thermos system. A process is a single
+executable step run as a part of an Aurora task, which consists of a
+bash-executable statement.
+
+The key (and required) `Process` attributes are (a minimal example
+follows this list):
+
+-   `name`: Any string which is a valid Unix filename (no slashes,
+    NULLs, or leading periods). The `name` value must be unique relative
+    to other Processes in a `Task`.
+-   `cmdline`: A command line run in a bash subshell, so you can use
+    bash scripts. Nothing is supplied for command-line arguments,
+    so `$*` is unspecified.
+
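+A minimal `Process` sets just these two required attributes (the name
+and command here are illustrative):
+
+    hello = Process(
+      name = 'hello',
+      cmdline = 'echo hello world')
+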
+Many tiny processes make managing configurations more difficult. For
+example, the following is a bad way to define processes.
+
+    copy = Process(
+      name = 'copy',
+      cmdline = 'curl -O https://packages.foocorp.com/app.zip'
+    )
+    unpack = Process(
+      name = 'unpack',
+      cmdline = 'unzip app.zip'
+    )
+    remove = Process(
+      name = 'remove',
+      cmdline = 'rm -f app.zip'
+    )
+    run = Process(
+      name = 'app',
+      cmdline = 'java -jar app.jar'
+    )
+    run_task = Task(
+      processes = [copy, unpack, remove, run],
+      constraints = order(copy, unpack, remove, run)
+    )
+
+Since `cmdline` runs in a bash subshell, you can chain commands
+with `&&` or `||`.
+
+When defining a `Task` that is just a list of Processes run in a
+particular order, use `SequentialTask`, as described in the [*Defining*
+`Task` *Objects*](#Task) section. The following simplifies and combines the
+above multiple `Process` definitions into just two.
+
+    stage = Process(
+      name = 'stage',
+      cmdline = 'curl -O https://packages.foocorp.com/app.zip && '
+                'unzip app.zip && rm -f app.zip')
+
+    run = Process(name = 'app', cmdline = 'java -jar app.jar')
+
+    run_task = SequentialTask(processes = [stage, run])
+
+`Process` also has five optional attributes, each with a default value
+if one isn't specified in the configuration (a combined sketch follows
+this list):
+
+-   `max_failures`: Defaulting to `1`, the maximum number of failures
+    (non-zero exit statuses) before this `Process` is marked permanently
+    failed and not retried. If a `Process` permanently fails, Thermos
+    checks the `Process` object's containing `Task` for the task's
+    failure limit (usually 1) to determine whether or not the `Task`
+    should be failed. Setting `max_failures` to `0` means that this
+    process will keep retrying until a successful (zero) exit status is
+    achieved. Retries happen at most once every `min_duration` seconds
+    to prevent effectively mounting a denial of service attack against
+    the coordinating scheduler.
+
+-   `daemon`: Defaulting to `False`, if `daemon` is set to `True`, a
+    successful (zero) exit status does not prevent future process runs.
+    Instead, the `Process` reinvokes after `min_duration` seconds.
+    However, the maximum failure limit (`max_failures`) still
+    applies. A combination of `daemon=True` and `max_failures=0` retries
+    a `Process` indefinitely regardless of exit status. This should
+    generally be avoided for very short-lived processes because of the
+    accumulation of checkpointed state for each process run. When
+    running in Aurora, `max_failures` is capped at
+    100.
+
+-   `ephemeral`: Defaulting to `False`, if `ephemeral` is `True`, the
+    `Process`' status is not used to determine if its bound `Task` has
+    completed. For example, consider a `Task` with a
+    non-ephemeral webserver process and an ephemeral logsaver process
+    that periodically checkpoints its log files to a centralized data
+    store. The `Task` is considered finished once the webserver process
+    finishes, regardless of the logsaver's current status.
+
+-   `min_duration`: Defaults to `15`. Processes may succeed or fail
+    multiple times during a single Task. Each result is called a
+    *process run* and this value is the minimum number of seconds the
+    scheduler waits before re-running the same process.
+
+-   `final`: Defaulting to `False`, this is a finalizing `Process` that
+    should run last. Processes can be grouped into two classes:
+    *ordinary* and *finalizing*. By default, Thermos Processes are
+    ordinary. They run as long as the `Task` is considered
+    healthy (i.e. hasn't reached a failure limit). But once all regular
+    Thermos Processes have either finished or the `Task` has reached a
+    certain failure threshold, Thermos moves into a *finalization* stage
+    and runs all finalizing Processes. These are typically necessary for
+    cleaning up after the `Task`, such as log checkpointers, or perhaps
+    e-mail notifications of a completed Task. Finalizing processes may
+    not depend upon ordinary processes or vice-versa; however, finalizing
+    processes may depend upon other finalizing processes, and will
+    otherwise run as a typical process schedule.
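+
+As a sketch combining several of these optional attributes (the names
+and commands are illustrative): a log uploader that keeps running across
+failures and successes without blocking Task completion, plus a
+finalizer that announces completion:
+
+    log_uploader = Process(
+      name = 'log_uploader',
+      cmdline = './upload_logs.sh',
+      daemon = True,      # re-run even after a successful exit
+      ephemeral = True,   # status doesn't affect Task completion
+      max_failures = 0,   # keep retrying failures indefinitely
+      min_duration = 60)  # wait at least 60 seconds between runs
+
+    notify = Process(
+      name = 'notify',
+      cmdline = 'echo Task finished | mail myteam-team@foocorp.com',
+      final = True)       # runs during the finalization stage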
+
+## Getting Your Code Into The Sandbox
+
+When using Aurora, you need to get your executable code into its "sandbox", specifically
+the Task sandbox where the code executes for the Processes that make up that Task.
+
+Each Task has a sandbox created when the Task starts and garbage
+collected when it finishes. All of a Task's processes run in its
+sandbox, so processes can share state by using a shared current
+working directory.
+
+Typically, you save this code somewhere. You then need to define a Process
+in your `.aurora` configuration file that fetches the code from that somewhere
+to where the slave can see it. For a public cloud, that can be anywhere public on
+the Internet, such as S3. For a private cloud, you need to put it on
+an accessible HDFS cluster or similar internal storage.
+
+The template for this Process is:
+
+    <name> = Process(
+      name = '<name>',
+      cmdline = '<command to copy and extract code archive into current working directory>'
+    )
+
+Note: Be sure the extracted code archive has an executable.
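+
+A sketch of a concrete fetch Process, assuming a publicly reachable
+package URL (the URL and names are illustrative):
+
+    fetch_app = Process(
+      name = 'fetch_app',
+      cmdline = 'curl -O https://packages.foocorp.com/live/app.tar.gz && '
+                'tar xzf app.tar.gz')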
+
+## Defining Task Objects
+
+Tasks are handled by Mesos. A task is a collection of processes that
+runs in a shared sandbox. It's the fundamental unit Aurora uses to
+schedule the datacenter; essentially what Aurora does is find places
+in the cluster to run tasks.
+
+The key (and required) parts of a Task are:
+
+-   `name`: A string giving the Task's name. By default, if a Task is
+    not given a name, it inherits the name of the first Process in its
+    `processes` list.
+
+-   `processes`: An unordered list of Process objects bound to the Task.
+    The value of the optional `constraints` attribute affects the
+    contents as a whole. Currently, the only constraint, `order`, determines if
+    the processes run in parallel or sequentially.
+
+-   `resources`: A `Resource` object defining the Task's resource
+    footprint. A `Resource` object has three attributes:
+    -   `cpu`: A Float, the fractional number of cores the Task
+        requires.
+    -   `ram`: An Integer, RAM bytes the Task requires.
+    -   `disk`: An Integer, disk bytes the Task requires.
+
+A basic Task definition looks like:
+
+    Task(
+        name="hello_world",
+        processes=[Process(name = "hello_world", cmdline = "echo hello world")],
+        resources=Resources(cpu = 1.0,
+                            ram = 1*GB,
+                            disk = 1*GB))
+
+There are four optional Task attributes:
+
+-   `constraints`: A list of `Constraint` objects that constrain the
+    Task's processes. Currently there is only one type, the `order`
+    constraint. For example the following requires that the processes
+    run in the order `foo`, then `bar`.
+
+        constraints = [Constraint(order=['foo', 'bar'])]
+
+    There is an `order()` function that takes `order('foo', 'bar', 'baz')`
+    and converts it into `[Constraint(order=['foo', 'bar', 'baz'])]`.
+    `order()` accepts Process name strings (`'foo'`, `'bar'`) or the processes
+    themselves, e.g. `foo=Process(name='foo', ...)`, `bar=Process(name='bar', ...)`,
+    `constraints=order(foo, bar)`.
+
+    Note that Thermos rejects tasks with process cycles.
+
+-   `max_failures`: Defaulting to `1`, the number of failed processes
+    needed for the `Task` to be marked as failed. Note how this
+    interacts with individual Processes' `max_failures` values. Assume a
+    Task has two Processes and a `max_failures` value of `2`. So both
+    Processes must fail for the Task to fail. Now, assume each of the
+    Task's Processes has its own `max_failures` value of `10`. If
+    Process "A" fails 5 times before succeeding, and Process "B" fails
+    10 times and is then marked as failing, their parent Task succeeds.
+    Even though there were 15 individual failures by its Processes, only
+    1 of its Processes was finally marked as failing. Since 1 is less
+    than the 2 that is the Task's `max_failures` value, the Task does
+    not fail.
+
+-   `max_concurrency`: Defaulting to `0`, the maximum number of
+    concurrent processes in the Task. `0` specifies unlimited
+    concurrency. For Tasks with many expensive but otherwise independent
+    processes, you can limit the amount of concurrency Thermos schedules
+    instead of artificially constraining them through `order`
+    constraints. For example, a test framework may generate a Task with
+    100 test run processes, but runs it in a Task with
+    `resources.cpu=4`. Limit the amount of parallelism to 4 by setting
+    `max_concurrency=4`.
+
+-   `finalization_wait`: Defaulting to `30`, the number of seconds
+    allocated for finalizing the Task's processes. A Task starts in
+    `ACTIVE` state when Processes run and stays there as long as the Task
+    is healthy and Processes run. When all Processes finish successfully
+    or the Task reaches its maximum process failure limit, it goes into
+    `CLEANING` state. In `CLEANING`, it sends `SIGTERM`s to any still running
+    Processes. When all Processes terminate, the Task goes into
+    `FINALIZING` state and runs all Processes whose `final` attribute
+    is `True`. Everything from the end of `ACTIVE`
+    to the end of `FINALIZING` must happen within `finalization_wait`
+    seconds. If not, all still running Processes are sent
+    `SIGKILL`s (or, if dependent on yet-to-be-completed Processes, are
+    never invoked).
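+
+The `max_failures` interaction described above can be sketched in code
+(the commands are illustrative):
+
+    template = Process(max_failures = 10)
+    task = Task(
+      name = 'fail_example',
+      processes = [
+        template(name = 'a', cmdline = './sometimes_fails.sh'),
+        template(name = 'b', cmdline = './always_fails.sh')
+      ],
+      resources = Resources(cpu = 1.0, ram = 1*GB, disk = 1*GB),
+      max_failures = 2)
+
+Here `b` can be marked permanently failed after 10 failed runs while `a`
+eventually succeeds; since only one Process permanently failed, the Task
+stays under its `max_failures` limit of 2 and succeeds.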
+
+### SequentialTask: Running Processes in Parallel or Sequentially
+
+By default, a Task with several Processes runs them in parallel. There
+are two ways to run Processes sequentially:
+
+-   Include an `order` constraint in the Task definition's `constraints`
+    attribute whose arguments specify the processes' run order:
+
+        Task( ... processes=[process1, process2, process3],
+              constraints = order(process1, process2, process3), ...)
+
+-   Use `SequentialTask` instead of `Task`; it automatically runs
+    processes in the order specified in the `processes` attribute. No
+    `constraint` parameter is needed:
+
+        SequentialTask( ... processes=[process1, process2, process3] ...)
+
+### SimpleTask
+
+For quickly creating simple tasks, use the `SimpleTask` helper. It
+creates a basic task from a provided name and command line using a
+default set of resources. For example, in a `.aurora` configuration
+file:
+
+    SimpleTask(name="hello_world", command="echo hello world")
+
+is equivalent to
+
+    Task(name="hello_world",
+         processes=[Process(name = "hello_world", cmdline = "echo hello world")],
+         resources=Resources(cpu = 1.0,
+                             ram = 1*GB,
+                             disk = 1*GB))
+
+The simplest idiomatic Job configuration thus becomes:
+
+    import os
+    hello_world_job = Job(
+      task=SimpleTask(name="hello_world", command="echo hello world"),
+      role=os.getenv('USER'),
+      cluster="cluster1")
+
+When written to `hello_world.aurora`, you invoke it with a simple
+`aurora create cluster1/$USER/test/hello_world hello_world.aurora`.
+
+### Combining tasks
+
+`Tasks.concat` (synonym: `concat_tasks`) and
+`Tasks.combine` (synonym: `combine_tasks`) merge multiple Task definitions
+into a single Task. It may be easier to define complex Jobs
+as smaller constituent Tasks. But since a Job only includes a single
+Task, the subtasks must be combined before using them in a Job.
+Smaller Tasks can also be reused between Jobs, instead of having to
+repeat their definition for multiple Jobs.
+
+With both methods, the merged Task takes the first Task's name. The
+difference between the two is the resulting Task's process ordering.
+
+-   `Tasks.combine` runs its subtasks' processes in no particular order.
+    The new Task's resource consumption is the sum of all its subtasks'
+    consumption.
+
+-   `Tasks.concat` runs its subtasks in the order supplied, with each
+    subtask's processes run serially between tasks. It is analogous to
+    the `order` constraint helper, except at the Task level instead of
+    the Process level. The new Task's resource consumption is the
+    maximum value specified by any subtask for each Resource attribute
+    (cpu, ram and disk).
+
+For example, given the following:
+
+    setup_task = Task(
+      ...
+      processes=[download_interpreter, update_zookeeper],
+      # It is important to note that Tasks.concat has
+      # no effect on the ordering of the processes within a task;
+      # hence the necessity of the order() statement below
+      # (otherwise, the order in which download_interpreter
+      # and update_zookeeper run will be non-deterministic)
+      constraints=order(download_interpreter, update_zookeeper),
+      ...
+    )
+
+    run_task = SequentialTask(
+      ...
+      processes=[download_application, start_application],
+      ...
+    )
+
+    combined_task = Tasks.concat(setup_task, run_task)
+
+The `Tasks.concat` command merges the two Tasks into a single Task and
+ensures all processes in `setup_task` run before the processes
+in `run_task`. Conceptually, the task is reduced to:
+
+    task = Task(
+      ...
+      processes=[download_interpreter, update_zookeeper,
+                 download_application, start_application],
+      constraints=order(download_interpreter, update_zookeeper,
+                        download_application, start_application),
+      ...
+    )
+
+In the case of `Tasks.combine`, the two schedules run in parallel:
+
+    task = Task(
+      ...
+      processes=[download_interpreter, update_zookeeper,
+                 download_application, start_application],
+      constraints=order(download_interpreter, update_zookeeper) +
+                  order(download_application, start_application),
+      ...
+    )
+
+In the latter case, each of the two sequences may operate in parallel.
+Of course, this may not be the intended behavior (for example, if
+the `start_application` Process implicitly relies
+upon `download_interpreter`). Make sure you understand the difference
+between using one or the other.
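+
+For completeness, a sketch of invoking `Tasks.combine` directly,
+assuming an independent monitoring subtask (names and commands are
+illustrative):
+
+    monitor_task = Task(
+      name = 'monitor',
+      resources = Resources(cpu = 0.5, ram = 256*MB, disk = 128*MB),
+      processes = [Process(name = 'metrics', cmdline = './collect_metrics.sh')])
+
+    everything_task = Tasks.combine(monitor_task, run_task)
+
+Per the semantics above, `everything_task` runs both subtasks' processes
+with no ordering between them.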
+
+## Defining Job Objects
+
+A job is a group of identical tasks that Aurora can run in a Mesos cluster.
+
+A `Job` object is defined by the values of several attributes, some
+required and some optional. The required attributes are:
+
+-   `task`: Task object to bind to this job. Note that a Job can
+    only take a single Task.
+
+-   `role`: Job's role account; in other words, the user account to run
+    the job as on a Mesos cluster machine. A common value is
+    `os.getenv('USER')`, which uses a Python call to get the user who
+    submits the job request. The other common value is the service
+    account that runs the job, e.g. `www-data`.
+
+-   `environment`: Job's environment, typical values
+    are `devel`, `test`, or `prod`.
+
+-   `cluster`: Aurora cluster to schedule the job in, defined in
+    `/etc/aurora/clusters.json` or `~/.clusters.json`. You can specify
+    jobs where the only difference is the `cluster`, then at run time
+    only run the Job whose job key includes your desired cluster's name.
+
+You will usually also see a `name` parameter. By default, `name` inherits its
+value from the Job's associated Task object, but you can override this
+default. For these four parameters, a Job definition might look like:
+
+    foo_job = Job(name = 'foo', cluster = 'cluster1',
+                  role = os.getenv('USER'), environment = 'prod',
+                  task = foo_task)
+
+In addition to the required attributes, there are several optional
+attributes. The first (strongly recommended) optional attribute is:
+
+-   `contact`: An email address for the Job's owner. For production
+    jobs, it is usually a team mailing list.
+
+Two more attributes deal with how to handle failure of the Job's Task:
+
+-   `max_task_failures`: An integer, defaulting to `1`, specifying the
+    maximum number of Task failures after which the Job is considered
+    failed. `-1` allows for infinite failures.
+
+-   `service`: A boolean, defaulting to `False`, which if `True`
+    restarts tasks regardless of whether they succeeded or failed. In
+    other words, if `True`, after the Job's Task completes, it
+    automatically starts again. This is for Jobs you want to run
+    continuously, rather than doing a single run.
+
+Three attributes deal with configuring the Job's Task:
+
+-   `instances`: Defaulting to `1`, the number of
+    instances/replicas/shards of the Job's Task to create.
+
+-   `priority`: Defaulting to `0`, the Job's Task's preemption priority,
+    for which higher values may preempt Tasks from Jobs with lower
+    values.
+
+-   `production`: a Boolean, defaulting to `False`, specifying that this
+    is a production job backed by quota. Tasks from production Jobs may
+    preempt tasks from any non-production job, and may only be preempted
+    by tasks from production jobs in the same role with higher
+    priority. **WARNING**: To run Jobs at this level, the Job role must
+    have the appropriate quota.
+
+The final three Job attributes each take an object as their value. (A
+sketch using the first two follows this list.)
+
+-   `update_config`: An `UpdateConfig`
+    object provides parameters for controlling the rate and policy of
+    rolling updates. The `UpdateConfig` parameters are:
+    -   `batch_size`: An integer, defaulting to `1`, specifying the
+        maximum number of shards to update in one iteration.
+    -   `restart_threshold`: An integer, defaulting to `60`, specifying
+        the maximum number of seconds a shard may take to move into the
+        `RUNNING` state before it is considered a failure.
+    -   `watch_secs`: An integer, defaulting to `30`, specifying the
+        minimum number of seconds a shard must remain in the `RUNNING`
+        state before it is considered a success.
+    -   `max_per_shard_failures`: An integer, defaulting to `0`,
+        specifying the maximum number of restarts per shard during an
+        update. When the limit is exceeded, it increments the total
+        failure count.
+    -   `max_total_failures`: An integer, defaulting to `0`, specifying
+        the maximum number of shard failures tolerated during an update.
+        Cannot be equal to or greater than the job's total number of
+        tasks.
+-   `health_check_config`: A `HealthCheckConfig` object that provides
+    parameters for controlling a Task's health checks via HTTP. Only
+    used if a health port was assigned with a command line wildcard. The
+    `HealthCheckConfig` parameters are:
+    -   `initial_interval_secs`: An integer, defaulting to `60`,
+        specifying the initial delay for doing an HTTP health check.
+    -   `interval_secs`: An integer, defaulting to `30`, specifying the
+        number of seconds in the interval between checking the Task's
+        health.
+    -   `timeout_secs`: An integer, defaulting to `1`, specifying the
+        number of seconds an application has to respond to an HTTP health
+        check with `OK` before the check is considered a failure.
+    -   `max_consecutive_failures`: An integer, defaulting to `0`,
+        specifying the maximum number of consecutive failures before a
+        task is unhealthy.
+-   `constraints`: A `dict` Python object, specifying Task scheduling
+    constraints. Most users will not need to specify constraints, as the
+    scheduler automatically inserts reasonable defaults. Please do not
+    set this field unless you are sure of what you are doing. See the
+    section in the Aurora + Thermos Reference manual on [Specifying
+    Scheduling Constraints](configuration-reference.md) for more information.
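+
+A sketch tying the first two of these together (all values are
+illustrative, and `foo_task` is assumed to be defined elsewhere):
+
+    update_config = UpdateConfig(
+      batch_size = 5,
+      restart_threshold = 60,
+      watch_secs = 45,
+      max_per_shard_failures = 2)
+
+    foo_job = Job(
+      name = 'foo',
+      cluster = 'cluster1',
+      role = 'myteam',
+      environment = 'prod',
+      contact = 'myteam-team@foocorp.com',
+      instances = 20,
+      service = True,
+      update_config = update_config,
+      health_check_config = HealthCheckConfig(initial_interval_secs = 30),
+      task = foo_task)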
+
+## The jobs List
+
+At the end of your `.aurora` file, you need to specify a list of the
+file's defined Jobs to run in the order listed. For example, the
+following runs first `job1`, then `job2`, then `job3`.
+
+    jobs = [job1, job2, job3]
+
+Templating
+----------
+
+The `.aurora` file format is just Python. However, `Job`, `Task`,
+`Process`, and other classes are defined by a templating library called
+*Pystachio*, a powerful tool for configuration specification and reuse.
+
+[Aurora+Thermos Configuration Reference](configuration-reference.md)
+has a full reference of all Aurora/Thermos defined Pystachio objects.
+
+When writing your `.aurora` file, you may use any Pystachio datatypes, as
+well as any objects shown in the *Aurora+Thermos Configuration
+Reference* without `import` statements - the Aurora config loader
+injects them automatically. Other than that the `.aurora` format
+works like any other Python script.
+
+### Templating 1: Binding in Pystachio
+
+Pystachio uses the visually distinctive {{}} to indicate template
+variables. These are often called "mustache variables" after the
+similarly appearing variables in the Mustache templating system and
+because the curly braces resemble mustaches.
+
+If you are familiar with the Mustache system, templates in Pystachio
+have significant differences. They have no nesting, joining, or
+inheritance semantics. On the other hand, templates are evaluated
+iteratively, which affords some level of indirection.
+
+Let's start with the simplest template: text with one
+variable, in this case `name`:
+
+    Hello {{name}}
+
+If we evaluate this as is, we'd get back:
+
+    Hello
+
+If a template variable doesn't have a value, when evaluated it's
+replaced with nothing. If we add a binding to give it a value:
+
+    { "name" : "Tom" }
+
+We'd get back:
+
+    Hello Tom
+
+We can also use {{}} variables as sectional variables. Let's say we
+have:
+
+    {{#x}} Testing... {{/x}}
+
+If `x` evaluates to `True`, the text between the sectional tags is
+shown. If there is no value for `x` or it evaluates to `False`, the
+text between the tags is not shown. So, at a basic level, a sectional
+variable acts as a conditional.
+
+However, if the sectional variable evaluates to a list, array, etc., it
+acts as a `foreach`. For example,
+
+    {{#x}} {{name}} {{/x}}
+
+with
+
+    { "x": [ { "name" : "tic" } { "name" : "tac" } { "name" : "toe" } ] }
+
+evaluates to
+
+    tic tac toe
+
+Every Pystachio object has an associated `.bind` method that can bind
+values to {{}} variables. Bindings are not immediately evaluated.
+Instead, they are evaluated only when the interpolated value of the
+object is necessary, e.g. for performing equality or serializing a
+message over the wire.
+
+Objects with and without mustache templated variables behave
+differently:
+
+    >>> Float(1.5)
+    Float(1.5)
+
+    >>> Float('{{x}}.5')
+    Float({{x}}.5)
+
+    >>> Float('{{x}}.5').bind(x = 1)
+    Float(1.5)
+
+    >>> Float('{{x}}.5').bind(x = 1) == Float(1.5)
+    True
+
+    >>> contextual_object = String('{{metavar{{number}}}}').bind(
+    ... metavar1 = "first", metavar2 = "second")
+
+    >>> contextual_object
+    String({{metavar{{number}}}})
+
+    >>> contextual_object.bind(number = 1)
+    String(first)
+
+    >>> contextual_object.bind(number = 2)
+    String(second)
+
+You usually bind simple key-value pairs, but you can also bind three
+other objects: lists, dictionaries, and structurals. These will be
+described in detail later.
+
+### Structurals in Pystachio / Aurora
+
+Most Aurora/Thermos users don't ever (knowingly) interact with `String`,
+`Float`, or `Integer` Pystachio objects directly. Instead they interact
+with derived structural (`Struct`) objects that are collections of
+fundamental and structural objects. The structural object components are
+called *attributes*. Aurora's most used structural objects are `Job`,
+`Task`, and `Process`:
+
+    class Process(Struct):
+      cmdline = Required(String)
+      name = Required(String)
+      max_failures = Default(Integer, 1)
+      daemon = Default(Boolean, False)
+      ephemeral = Default(Boolean, False)
+      min_duration = Default(Integer, 5)
+      final = Default(Boolean, False)
+
+Construct default objects by following the object's type with `()`. If you
+want an attribute to have a value different from its default, include
+the attribute name and value inside the parentheses.
+
+    >>> Process()
+    Process(daemon=False, max_failures=1, ephemeral=False,
+      min_duration=5, final=False)
+
+Attribute values can be template variables, which then receive specific
+values when creating the object.
+
+    >>> Process(cmdline = 'echo {{message}}')
+    Process(daemon=False, max_failures=1, ephemeral=False, min_duration=5,
+            cmdline=echo {{message}}, final=False)
+
+    >>> Process(cmdline = 'echo {{message}}').bind(message = 'hello world')
+    Process(daemon=False, max_failures=1, ephemeral=False, min_duration=5,
+            cmdline=echo hello world, final=False)
+
+A powerful binding property is that all of an object's children inherit its
+bindings:
+
+    >>> List(Process)([
+    ... Process(name = '{{prefix}}_one'),
+    ... Process(name = '{{prefix}}_two')
+    ... ]).bind(prefix = 'hello')
+    ProcessList(
+      Process(daemon=False, name=hello_one, max_failures=1, ephemeral=False, min_duration=5, final=False),
+      Process(daemon=False, name=hello_two, max_failures=1, ephemeral=False, min_duration=5, final=False)
+      )
+
+Remember that an Aurora Job contains Tasks which contain Processes. A
+Job level binding is inherited by its Tasks and all their Processes.
+Similarly a Task level binding is available to that Task and its
+Processes but is *not* visible at the Job level (inheritance is a
+one-way street.)
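+
+A sketch of a Job-level binding reaching down into a Process (the names
+are illustrative):
+
+    greet = Process(name = 'greet', cmdline = 'echo {{greeting}}')
+
+    greet_job = Job(
+      name = 'greet', role = 'myteam', cluster = 'cluster1',
+      environment = 'devel',
+      task = SequentialTask(
+        processes = [greet],
+        resources = Resources(cpu = 1.0, ram = 1*GB, disk = 1*GB))
+    ).bind(greeting = 'hello world')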
+
+#### Mustaches Within Structurals
+
+When you define a `Struct` schema, one powerful, but confusing, feature
+is that all of that structure's attributes are Mustache variables within
+the enclosing scope *once they have been populated*.
+
+For example, when `Process` is defined above, all its attributes, such as
+`{{name}}`, `{{cmdline}}`, and `{{max_failures}}`, are immediately
+defined as Mustache variables, implicitly bound into the `Process`, and
+are inherited by all child objects once they are defined.
+
+Thus, you can do the following:
+
+    >>> Process(name = "installer", cmdline = "echo {{name}} is running")
+    Process(daemon=False, name=installer, max_failures=1, ephemeral=False, min_duration=5,
+            cmdline=echo installer is running, final=False)
+
+WARNING: This binding only takes place in one direction. For example,
+the following does NOT work and does not set the `Process` `name`
+attribute's value.
+
+    >>> Process().bind(name = "installer")
+    Process(daemon=False, max_failures=1, ephemeral=False, min_duration=5, final=False)
+
+The following is also not possible and results in an infinite loop that
+attempts to resolve `Process.name`.
+
+    >>> Process(name = '{{name}}').bind(name = 'installer')
+
+Do not confuse Structural attributes with bound Mustache variables.
+Attributes are implicitly converted to Mustache variables but not vice
+versa.
+
+### Templating 2: Structurals Are Factories
+
+#### A Second Way of Templating
+
+A second templating method is as powerful as the aforementioned one and
+often confused with it. This method is possible because of the automatic
+conversion of Struct attributes to Mustache variables described above.
+
+Suppose you create a Process object:
+
+    >>> p = Process(name = "process_one", cmdline = "echo hello world")
+
+    >>> p
+    Process(daemon=False, name=process_one, max_failures=1, ephemeral=False, min_duration=5,
+            cmdline=echo hello world, final=False)
+
+This `Process` object, "`p`", can be used wherever a `Process` object is
+needed. It can also be reused by changing the value(s) of its
+attribute(s). Here we change its `name` attribute from `process_one` to
+`process_two`.
+
+    >>> p(name = "process_two")
+    Process(daemon=False, name=process_two, max_failures=1, ephemeral=False, min_duration=5,
+            cmdline=echo hello world, final=False)
+
+Template creation is a common use for this technique:
+
+    >>> Daemon = Process(daemon = True)
+    >>> logrotate = Daemon(name = 'logrotate', cmdline = './logrotate conf/logrotate.conf')
+    >>> mysql = Daemon(name = 'mysql', cmdline = 'bin/mysqld --safe-mode')
+
+### Advanced Binding
+
+As described above, `.bind()` binds simple strings or numbers to
+Mustache variables. In addition to Structural types formed by combining
+atomic types, Pystachio has two container types, `List` and `Map`, which
+can also be bound via `.bind()`.
+
+#### Bind Syntax
+
+The `bind()` function can take Python dictionaries or `kwargs`
+interchangeably (when `**kwargs` appears in a function definition, `kwargs`
+receives a Python dictionary containing all keyword arguments after the
+formal parameter list).
+
+    >>> String('{{foo}}').bind(foo = 'bar') == String('{{foo}}').bind({'foo': 'bar'})
+    True
+
+Bindings done "closer" to the object in question take precedence:
+
+    >>> p = Process(name = '{{context}}_process')
+    >>> t = Task().bind(context = 'global')
+    >>> t(processes = [p, p.bind(context = 'local')])
+    Task(processes=ProcessList(
+      Process(daemon=False, name=global_process, max_failures=1, ephemeral=False, final=False,
+              min_duration=5),
+      Process(daemon=False, name=local_process, max_failures=1, ephemeral=False, final=False,
+              min_duration=5)
+    ))
+
+#### Binding Complex Objects
+
+##### Lists
+
+    >>> fibonacci = List(Integer)([1, 1, 2, 3, 5, 8, 13])
+    >>> String('{{fib[4]}}').bind(fib = fibonacci)
+    String(5)
+
+##### Maps
+
+    >>> first_names = Map(String, String)({'Kent': 'Clark', 'Wayne': 'Bruce', 'Prince': 'Diana'})
+    >>> String('{{first[Kent]}}').bind(first = first_names)
+    String(Clark)
+
+##### Structurals
+
+    >>> String('{{p.cmdline}}').bind(p = Process(cmdline = "echo hello world"))
+    String(echo hello world)
+
+### Structural Binding
+
+Use structural templates when binding more than two or three individual
+values at the Job or Task level. For fewer values, standard
+key-value binding is sufficient.
+
+Structural binding is a very powerful pattern and is most useful in
+Aurora/Thermos for doing Structural configuration. For example, you can
+define a job profile. The following profile uses `HDFS`, the Hadoop
+Distributed File System, to designate a file's location. `HDFS` does
+not come with Aurora, so you'll need to either install it separately
+or change the way the dataset is designated.
+
+    class Profile(Struct):
+      version = Required(String)
+      environment = Required(String)
+      dataset = Default(String, 'hdfs://home/aurora/data/{{environment}}')
+
+    PRODUCTION = Profile(version = 'live', environment = 'prod')
+    DEVEL = Profile(version = 'latest',
+                    environment = 'devel',
+                    dataset = 'hdfs://home/aurora/data/test')
+    TEST = Profile(version = 'latest', environment = 'test')
+
+    JOB_TEMPLATE = Job(
+      name = 'application',
+      role = 'myteam',
+      cluster = 'cluster1',
+      environment = '{{profile.environment}}',
+      task = SequentialTask(
+        name = 'task',
+        resources = Resources(cpu = 2, ram = 4*GB, disk = 8*GB),
+        processes = [
+          Process(name = 'main', cmdline = 'java -jar application.jar '
+                  '-hdfsPath {{profile.dataset}}')
+        ]
+      )
+    )
+
+    jobs = [
+      JOB_TEMPLATE(instances = 100).bind(profile = PRODUCTION),
+      JOB_TEMPLATE.bind(profile = DEVEL),
+      JOB_TEMPLATE.bind(profile = TEST),
+    ]
+
+In this case, a custom structural "Profile" is created to self-document
+the configuration to some degree. This also allows some schema
+"type-checking", and for default self-substitution, e.g. in
+`Profile.dataset` above.
+
+So rather than a `.bind()` with a half-dozen substituted variables, you
+can bind a single object that has sensible defaults stored in a single
+place.
+
+Configuration File Writing Tips And Best Practices
+--------------------------------------------------
+
+### Use As Few `.aurora` Files As Possible
+
+When creating your `.aurora` configuration, try to keep all versions of
+a particular job within the same `.aurora` file. For example, if you
+have separate jobs for `cluster1`, `cluster1` staging, `cluster1`
+testing, and `cluster2`, keep them as close together as possible.
+
+Constructs shared across multiple jobs owned by your team (e.g.
+team-level defaults or structural templates) can be split into separate
+`.aurora` files and included via the `include` directive, as in the
+sketch below.
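+
+A sketch, assuming a hypothetical shared file `team_defaults.aurora` in
+the same directory:
+
+    include('team_defaults.aurora')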
+
+### Avoid Boilerplate
+
+If you see repetition or find yourself copying and pasting any parts of
+your configuration, it's likely an opportunity for templating. Take the
+example below:
+
+`redundant.aurora` contains:
+
+    download = Process(
+      name = 'download',
+      cmdline = 'wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tar.bz2',
+      max_failures = 5,
+      min_duration = 1)
+
+    unpack = Process(
+      name = 'unpack',
+      cmdline = 'rm -rf Python-2.7.3 && tar xjf Python-2.7.3.tar.bz2',
+      max_failures = 5,
+      min_duration = 1)
+
+    build = Process(
+      name = 'build',
+      cmdline = 'pushd Python-2.7.3 && ./configure && make && popd',
+      max_failures = 1)
+
+    email = Process(
+      name = 'email',
+      cmdline = 'echo Success | mail feynman@tmc.com',
+      max_failures = 5,
+      min_duration = 1)
+
+    build_python = Task(
+      name = 'build_python',
+      processes = [download, unpack, build, email],
+      constraints = [Constraint(order = ['download', 'unpack', 'build', 'email'])])
+
+As you'll notice, there's a lot of repetition in the `Process`
+definitions. For example, almost every process sets a `max_failures`
+limit to 5 and a `min_duration` to 1. This is an opportunity for factoring
+into a common process template.
+
+Furthermore, the Python version is repeated everywhere. This can be
+bound via structural templating as described in the [Advanced Binding](#AdvancedBinding)
+section.
+
+`less_redundant.aurora` contains:
+
+    class Python(Struct):
+      version = Required(String)
+      base = Default(String, 'Python-{{version}}')
+      package = Default(String, '{{base}}.tar.bz2')
+
+    ReliableProcess = Process(
+      max_failures = 5,
+      min_duration = 1)
+
+    download = ReliableProcess(
+      name = 'download',
+      cmdline = 'wget http://www.python.org/ftp/python/{{python.version}}/{{python.package}}')
+
+    unpack = ReliableProcess(
+      name = 'unpack',
+      cmdline = 'rm -rf {{python.base}} && tar xjf {{python.package}}')
+
+    build = ReliableProcess(
+      name = 'build',
+      cmdline = 'pushd {{python.base}} && ./configure && make && popd',
+      max_failures = 1)
+
+    email = ReliableProcess(
+      name = 'email',
+      cmdline = 'echo Success | mail {{role}}@foocorp.com')
+
+    build_python = SequentialTask(
+      name = 'build_python',
+      processes = [download, unpack, build, email]).bind(python = Python(version = "2.7.3"))
+
+### Thermos Uses bash, But Thermos Is Not bash
+
+#### Bad
+
+Many tiny Processes make for harder-to-manage configurations.
+
+    copy = Process(
+      name = 'copy',
+      cmdline = 'rcp user@my_machine:my_application .'
+    )
+
+    unpack = Process(
+      name = 'unpack',
+      cmdline = 'unzip app.zip'
+    )
+
+    remove = Process(
+      name = 'remove',
+      cmdline = 'rm -f app.zip'
+    )
+
+    run = Process(
+      name = 'app',
+      cmdline = 'java -jar app.jar'
+    )
+
+    run_task = Task(
+      processes = [copy, unpack, remove, run],
+      constraints = order(copy, unpack, remove, run)
+    )
+
+#### Good
+
+Each `cmdline` runs in a bash subshell, so you have the full power of
+bash. Chaining commands with `&&` or `||` is almost always the right
+thing to do.
+
+Also, for Tasks that are simply a list of processes that run one after
+another, consider using the `SequentialTask` helper, which applies a
+linear ordering constraint for you.
+
+    stage = Process(
+      name = 'stage',
+      cmdline = 'rcp user@my_machine:my_application . && unzip app.zip && rm -f app.zip')
+
+    run = Process(name = 'app', cmdline = 'java -jar app.jar')
+
+    run_task = SequentialTask(processes = [stage, run])
+
+### Rarely Use Functions In Your Configurations
+
+90% of the time you define a function in a `.aurora` file, you're
+probably Doing It Wrong(TM).
+
+#### Bad
+
+    def get_my_task(name, user, cpu, ram, disk):
+      return Task(
+        name = name,
+        user = user,
+        processes = [STAGE_PROCESS, RUN_PROCESS],
+        constraints = order(STAGE_PROCESS, RUN_PROCESS),
+        resources = Resources(cpu = cpu, ram = ram, disk = disk)
+      )
+
+    task_one = get_my_task('task_one', 'feynman', 1.0, 32*MB, 1*GB)
+    task_two = get_my_task('task_two', 'feynman', 2.0, 64*MB, 1*GB)
+
+#### Good
+
+This one is more idiomatic. Forced keyword arguments prevent accidents,
+e.g. constructing a task with `32*MB` when you mean 32 MB of RAM and not
+disk. Less proliferation of task-construction techniques makes for
+easier-to-read, quicker-to-understand, and more composable
+configuration.
+
+    TASK_TEMPLATE = SequentialTask(
+      user = 'wickman',
+      processes = [STAGE_PROCESS, RUN_PROCESS],
+    )
+
+    task_one = TASK_TEMPLATE(
+      name = 'task_one',
+      resources = Resources(cpu = 1.0, ram = 32*MB, disk = 1*GB))
+
+    task_two = TASK_TEMPLATE(
+      name = 'task_two',
+      resources = Resources(cpu = 2.0, ram = 64*MB, disk = 1*GB)
+    )

http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/94b723b0/docs/configurationreference.md
----------------------------------------------------------------------
diff --git a/docs/configurationreference.md b/docs/configurationreference.md
deleted file mode 100644
index 9d5c340..0000000
--- a/docs/configurationreference.md
+++ /dev/null
@@ -1,735 +0,0 @@
-Aurora + Thermos Configuration Reference
-========================================
-
-- [Aurora + Thermos Configuration Reference](#aurora--thermos-configuration-reference)
-- [Introduction](#introduction)
-- [Process Schema](#process-schema)
-    - [Process Objects](#process-objects)
-      - [name](#name)
-      - [cmdline](#cmdline)
-      - [max_failures](#max_failures)
-      - [daemon](#daemon)
-      - [ephemeral](#ephemeral)
-      - [min_duration](#min_duration)
-      - [final](#final)
-- [Task Schema](#task-schema)
-    - [Task Object](#task-object)
-      - [name](#name-1)
-      - [processes](#processes)
-        - [constraints](#constraints)
-      - [resources](#resources)
-      - [max_failures](#max_failures-1)
-      - [max_concurrency](#max_concurrency)
-      - [finalization_wait](#finalization_wait)
-    - [Constraint Object](#constraint-object)
-    - [Resource Object](#resource-object)
-- [Job Schema](#job-schema)
-    - [Job Objects](#job-objects)
-    - [Services](#services)
-    - [UpdateConfig Objects](#updateconfig-objects)
-    - [HealthCheckConfig Objects](#healthcheckconfig-objects)
-- [Specifying Scheduling Constraints](#specifying-scheduling-constraints)
-- [Template Namespaces](#template-namespaces)
-    - [mesos Namespace](#mesos-namespace)
-    - [thermos Namespace](#thermos-namespace)
-- [Basic Examples](#basic-examples)
-    - [hello_world.aurora](#hello_worldaurora)
-    - [Environment Tailoring](#environment-tailoring)
-      - [hello_world_productionized.aurora](#hello_world_productionizedaurora)
-
-Introduction
-============
-
-Don't know where to start? The Aurora configuration schema is very
-powerful, and configurations can become quite complex for advanced use
-cases.
-
-For examples of simple configurations to get something up and running
-quickly, check out the [Tutorial](tutorial.md). When you feel comfortable with the basics, move
-on to the [Configuration Tutorial](configurationtutorial.md) for more in-depth coverage of
-configuration design.
-
-For additional basic configuration examples, see [the end of this document](#BasicExamples).
-
-Process Schema
-==============
-
-Process objects consist of required `name` and `cmdline` attributes. You can customize Process
-behavior with its optional attributes. Remember, Processes are handled by Thermos.
-
-### Process Objects
-
-<table border=1>
-  <tbody>
-  <tr>
-    <th>Attribute Name</th>
-    <th>Type</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td><code><b>name</b></code></td>
-    <td>String</td>
-    <td>Process name (Required)</td>
-  </tr>
-  <tr>
-    <td><code>cmdline</code></td>
-    <td>String</td>
-    <td>Command line (Required)</td>
-  </tr>
-  <tr>
-    <td><code>max_failures</code></td>
-    <td>Integer</td>
-    <td>Maximum process failures (Default: 1)</td>
-  </tr>
-  <tr>
-    <td><code>daemon</code></td>
-    <td>Boolean</td>
-    <td>When True, this is a daemon process. (Default: False)</td>
-  </tr>
-  <tr>
-    <td><code>ephemeral</code></td>
-    <td>Boolean</td>
-    <td>When True, this is an ephemeral process. (Default: False)</td>
-  </tr>
-  <tr>
-    <td><code>min_duration</code></td>
-    <td>Integer</td>
-    <td>Minimum duration between process restarts in seconds. (Default: 15)</td>
-  </tr>
-  <tr>
-    <td><code>final</code></td>
-    <td>Boolean</td>
-    <td>When True, this process is a finalizing one that should run last. (Default: False)</td>
-  </tr>
-</tbody>
-</table>
-
-#### name
-
-The name is any valid UNIX filename string (specifically no
-slashes, NULLs or leading periods). Within a Task object, each Process name
-must be unique.
-
-#### cmdline
-
-The command line run by the process. The command line is invoked in a bash
-subshell, so can involve fully-blown bash scripts. However, nothing is
-supplied for command-line arguments so `$*` is unspecified.
-
-#### max_failures
-
-The maximum number of failures (non-zero exit statuses) this process can
-have before being marked permanently failed and not retried. If a
-process permanently fails, Thermos looks at the failure limit of the task
-containing the process (usually 1) to determine if the task has
-failed as well.
-
-Setting `max_failures` to 0 makes the process retry
-indefinitely until it achieves a successful (zero) exit status.
-It retries at most once every `min_duration` seconds to prevent
-an effective denial of service attack on the coordinating Thermos scheduler.
-
-#### daemon
-
-By default, Thermos processes are non-daemon. If `daemon` is set to True, a
-successful (zero) exit status does not prevent future process runs.
-Instead, the process reinvokes after `min_duration` seconds.
-However, the maximum failure limit still applies. A combination of
-`daemon=True` and `max_failures=0` causes a process to retry
-indefinitely regardless of exit status. This should be avoided
-for very short-lived processes because of the accumulation of
-checkpointed state for each process run. When running in Mesos
-specifically, `max_failures` is capped at 100.
-
-#### ephemeral
-
-By default, Thermos processes are non-ephemeral. If `ephemeral` is set to
-True, the process' status is not used to determine if its containing task
-has completed. For example, consider a task with a non-ephemeral
-webserver process and an ephemeral logsaver process
-that periodically checkpoints its log files to a centralized data store.
-The task is considered finished once the webserver process has
-completed, regardless of the logsaver's current status.
-
-#### min_duration
-
-Processes may succeed or fail multiple times during a single task's
-duration. Each of these is called a *process run*. `min_duration` is
-the minimum number of seconds the scheduler waits before running the
-same process.
-
-#### final
-
-Processes can be grouped into two classes: ordinary processes and
-finalizing processes. By default, Thermos processes are ordinary. They
-run as long as the task is considered healthy (i.e., no failure
-limits have been reached.) But once all regular Thermos processes
-finish or the task reaches a certain failure threshold, it
-moves into a "finalization" stage and runs all finalizing
-processes. These are typically processes necessary for cleaning up the
-task, such as log checkpointers, or perhaps e-mail notifications that
-the task completed.
-
-Finalizing processes may not depend upon ordinary processes or
-vice-versa, however finalizing processes may depend upon other
-finalizing processes and otherwise run as a typical process
-schedule.
-
-Task Schema
-===========
-
-Tasks fundamentally consist of a `name` and a list of Process objects stored as the
-value of the `processes` attribute. Processes can be further constrained with
-`constraints`. By default, `name`'s value inherits from the first Process in the
-`processes` list, so for simple `Task` objects with one Process, `name`
-can be omitted. In Mesos, `resources` is also required.
-
-### Task Object
-
-<table border=1>
-  <tbody>
-  <tr>
-    <th>param</th>
-    <th>type</th>
-    <th>description</th>
-  </tr>
-  <tr>
-    <td><code><b>name</b></code></td>
-    <td>String</td>
-    <td>Task name (Required) (Default: <code>{{processes[0].name}}</code>)</td>
-  </tr>
-  <tr>
-    <td><code><b>processes</b></code></td>
-    <td>List of <code>Process</code> objects</td>
-    <td>List of <code>Process</code> objects bound to this task. (Required)</td>
-  </tr>
-  <tr>
-    <td><code>constraints</code></td>
-    <td>List of <code>Constraint</code> objects</td>
-    <td>List of <code>Constraint</code> objects constraining processes.</td>
-  </tr>
-  <tr>
-    <td><code><b>resources</b></code></td>
-    <td><code>Resource</code> object</td>
-    <td>Resource footprint. (Required)</td>
-  </tr>
-  <tr>
-    <td><code>max_failures</code></td>
-    <td>Integer</td>
-    <td>Maximum process failures before being considered failed (Default: 1)</td>
-  </tr>
-  <tr>
-    <td><code>max_concurrency</code></td>
-    <td>Integer</td>
-    <td>Maximum number of concurrent processes (Default: 0, unlimited concurrency.)</td>
-  </tr>
-  <tr>
-    <td><code>finalization_wait</code></td>
-    <td>Integer</td>
-    <td>Amount of time allocated for finalizing processes, in seconds. (Default: 30)</td>
-  </tr>
-</tbody>
-</table>
-
-#### name
-
-`name` is a string denoting the name of this task. It defaults to the name of the first Process in
-the list of Processes associated with the `processes` attribute.
-
-#### processes
-
-`processes` is an unordered list of `Process` objects. To constrain the order
-in which they run, use `constraints`.
-
-##### constraints
-
-A list of `Constraint` objects. Currently it supports only one type,
-the `order` constraint. `order` is a list of process names
-that should run in the order given. For example,
-
-        process = Process(cmdline = "echo hello {{name}}")
-        task = Task(name = "echoes",
-                    processes = [process(name = "jim"), process(name = "bob")],
-                    constraints = [Constraint(order = ["jim", "bob"])])
-
-Constraints can be supplied ad-hoc and in duplicate. Not all
-Processes need be constrained, however Tasks with cycles are
-rejected by the Thermos scheduler.
-
-Use the `order` function as shorthand to generate `Constraint` lists.
-The following:
-
-        order(process1, process2)
-
-is shorthand for
-
-        [Constraint(order = [process1.name(), process2.name()])]
-
-#### resources
-
-Takes a `Resource` object, which specifies the amounts of CPU, memory, and disk space resources
-to allocate to the Task.
-
-#### max_failures
-
-`max_failures` is the number of times processes that are part of this
-Task can fail before the entire Task is marked for failure.
-
-For example:
-
-        template = Process(max_failures=10)
-        task = Task(
-          name = "fail",
-          processes = [
-             template(name = "failing", cmdline = "exit 1"),
-             template(name = "succeeding", cmdline = "exit 0")
-          ],
-          max_failures=2)
-
-The `failing` Process could fail 10 times before being marked as
-permanently failed, and the `succeeding` Process would succeed on the
-first run. The task would succeed despite only allowing for two failed
-processes. To be more specific, there would be 10 failed process runs
-yet 1 failed process.
-
-#### max_concurrency
-
-For Tasks with a number of expensive but otherwise independent
-processes, you may want to limit the amount of concurrency
-the Thermos scheduler provides rather than artificially constraining
-it via `order` constraints. For example, a test framework may
-generate a task with 100 test run processes, but wants to run it on
-a machine with only 4 cores. You can limit the amount of parallelism to
-4 by setting `max_concurrency=4` in your task configuration.
-
-For example, the following task spawns 180 Processes ("mappers")
-to compute individual elements of a 180 degree sine table, with one
-final Process ("reducer"), dependent on all the mappers, that
-tabulates the results:
-
-    def make_mapper(id):
-      return Process(
-        name = "mapper%03d" % id,
-        cmdline = "echo 'scale=50;s(%d*4*a(1)/180)' | bc -l > temp.sine_table.%03d"
-                  % (id, id))
-
-    def make_reducer():
-      return Process(
-        name = "reducer",
-        cmdline = "cat temp.* | nl > sine_table.txt && rm -f temp.*")
-
-    processes = map(make_mapper, range(180))
-
-    task = Task(
-      name = "mapreduce",
-      processes = processes + [make_reducer()],
-      constraints = [Constraint(order = [mapper.name(), 'reducer']) for mapper
-                     in processes],
-      max_concurrency = 8)
-
-#### finalization_wait
-
-Tasks have three active stages: `ACTIVE`, `CLEANING`, and `FINALIZING`. The
-`ACTIVE` stage is when ordinary processes run. This stage lasts as
-long as Processes are running and the Task is healthy. The moment either
-all Processes have finished successfully or the Task has reached its
-maximum Process failure limit, it goes into the `CLEANING` stage and sends
-SIGTERMs to all currently running Processes and their process trees.
-Once all Processes have terminated, the Task goes into the `FINALIZING` stage
-and runs all Processes with the "final" attribute set to True.
-
-Everything from the end of the `ACTIVE` stage to the end of the `FINALIZING`
-stage must happen within `finalization_wait` seconds. If it does not
-finish within that time, all remaining Processes are sent SIGKILLs
-(or, if they depend upon uncompleted Processes, are never invoked).
-
-Client applications with higher priority may force a shorter
-finalization wait (e.g. through parameters to `thermos kill`), so this
-is mostly a best-effort signal.
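-
-As a sketch, a task that allows extra time for a finalizing cleanup Process
-might look like the following (the command lines and paths are hypothetical):
-
-    main = Process(name = 'compute', cmdline = './compute.sh')
-    cleanup = Process(
-      name = 'cleanup',
-      cmdline = 'rm -rf /tmp/scratch',
-      final = True)
-
-    task = Task(
-      name = 'compute',
-      processes = [main, cleanup],
-      finalization_wait = 120)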
-
-### Constraint Object
-
-Constraint objects currently support a single type, the `order` constraint,
-which specifies that its processes run sequentially in the order given. By
-default, all processes bound to a `Task` run in parallel unless ordering
-constraints are supplied.
-
-<table border=1>
-  <tbody>
-  <tr>
-    <th>param</th>
-    <th>type</th>
-    <th>description</th>
-  </tr>
-  <tr>
-    <td><code><b>order</b></code></td>
-    <td>List of String</td>
-    <td>List of processes by name (String) that should be run serially.</td>
-  </tr>
-  </tbody>
-</table>
-
-### Resource Object
-
-Specifies the amount of CPU, RAM, and disk resources the task needs. See the
-[Resource Isolation document](resourceisolation.md) for suggested values and to understand how
-resources are allocated.
-
-<table border=1>
-  <tbody>
-  <tr>
-    <th>param</th>
-    <th>type</th>
-    <th>description</th>
-  </tr>
-  <tr>
-    <td><code>cpu</code></td>
-    <td>Float</td>
-    <td>Fractional number of cores required by the task.</td>
-  </tr>
-  <tr>
-    <td><code>ram</code></td>
-    <td>Integer</td>
-    <td>Bytes of RAM required by the task.</td>
-  </tr>
-  <tr>
-    <td><code>disk</code></td>
-    <td>Integer</td>
-    <td>Bytes of disk required by the task.</td>
-  </tr>
-  </tbody>
-</table>
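-
-For example, a sketch of a modest resource footprint (the values are
-illustrative, and `my_process` is assumed to be defined elsewhere):
-
-    task = Task(
-      name = 'my_task',
-      processes = [my_process],
-      resources = Resources(cpu = 1.0, ram = 1 * GB, disk = 2 * GB))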
-
-Job Schema
-==========
-
-### Job Objects
-
-<table border=1>
-  <tbody>
-    <tr>
-      <td><code>task</code></td>
-      <td>Task</td>
-      <td>The Task object to bind to this job. Required.</td>
-    </tr>
-    <tr>
-      <td><code>name</code></td>
-      <td>String</td>
-      <td>Job name. (Default: inherited from the task attribute's name)</td>
-    </tr>
-    <tr>
-      <td><code>role</code></td>
-      <td>String</td>
-      <td>Job role account. Required.</td>
-    </tr>
-    <tr>
-      <td><code>cluster</code></td>
-      <td>String</td>
-      <td>Cluster in which this job is scheduled. Required.</td>
-    </tr>
-    <tr>
-      <td><code>environment</code></td>
-      <td>String</td>
-      <td>Job environment, default <code>devel</code>. Must be one of <code>prod</code>, <code>devel</code>, <code>test</code> or <code>staging&lt;number&gt;</code>.</td>
-    </tr>
-    <tr>
-      <td><code>contact</code></td>
-      <td>String</td>
-      <td>Best email address to reach the owner of the job. For production jobs, this is usually a team mailing list.</td>
-    </tr>
-    <tr>
-      <td><code>instances</code></td>
-      <td>Integer</td>
-      <td>Number of instances (sometimes referred to as replicas or shards) of the task to create. (Default: 1)</td>
-    </tr>
-    <tr>
-      <td><code>cron_schedule</code> <strong>(Present, but not supported and a no-op)</strong></td>
-      <td>String</td>
-      <td>UTC Cron schedule in cron format. May only be used with non-service jobs. (Default: None, i.e. not a cron job)</td>
-    </tr>
-    <tr>
-      <td><code>cron_collision_policy</code> <strong>(Present, but not supported and a no-op)</strong></td>
-      <td>String</td>
-      <td>Policy to use when a cron job is triggered while a previous run is still active: <code>KILL_EXISTING</code> kills the previous run and schedules the new run; <code>CANCEL_NEW</code> lets the previous run continue and cancels the new run; <code>RUN_OVERLAP</code> lets the previous run continue and schedules the new run. (Default: KILL_EXISTING)</td>
-    </tr>
-    <tr>
-      <td><code>update_config</code></td>
-      <td><code>UpdateConfig</code> object</td>
-      <td>Parameters for controlling the rate and policy of rolling updates. </td>
-    </tr>
-    <tr>
-      <td><code>constraints</code></td>
-      <td>dict</td>
-      <td>Scheduling constraints for the tasks. See the section on the <a href="#SchedulingConstraints">constraint specification language</a></td>
-    </tr>
-    <tr>
-      <td><code>service</code></td>
-      <td>Boolean</td>
-      <td>If True, restart tasks regardless of success or failure. (Default: False)</td>
-    </tr>
-    <tr>
-      <td><code>daemon</code></td>
-      <td>Boolean</td>
-      <td>A DEPRECATED alias for "service". (Default: False) </td>
-    </tr>
-    <tr>
-      <td><code>max_task_failures</code></td>
-      <td>Integer</td>
-      <td>Maximum number of failures after which the task is considered to have failed. Set to -1 to allow for infinite failures. (Default: 1)</td>
-    </tr>
-    <tr>
-      <td><code>priority</code></td>
-      <td>Integer</td>
-      <td>Preemption priority to give the task (Default 0). Tasks with higher priorities may preempt tasks at lower priorities.</td>
-    </tr>
-    <tr>
-      <td><code>production</code></td>
-      <td>Boolean</td>
-      <td>Whether or not this is a production task backed by quota (Default: False). Production jobs may preempt any non-production job, and may only be preempted by production jobs in the same role and of higher priority. To run jobs at this level, the job role must have the appropriate quota.</td>
-    </tr>
-    <tr>
-      <td><code>health_check_config</code></td>
-      <td><code>HealthCheckConfig</code> object</td>
-      <td>Parameters for controlling a task's health checks via HTTP. Only used if a health port was assigned with a command line wildcard.</td>
-    </tr>
-  </tbody>
-</table>
-
-### Services
-
-Jobs with the `service` flag set to True are called Services. The `Service`
-alias can be used as shorthand for `Job` with `service=True`.
-Services are differentiated from non-service Jobs in that tasks
-always restart on completion, whether successful or unsuccessful.
-Jobs without the service bit set only restart up to
-`max_task_failures` times and only if they terminated unsuccessfully
-either due to human error or machine failure.
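-
-For example, the following two definitions are equivalent (a sketch; it
-assumes `hello_world_task` is defined elsewhere):
-
-    job = Job(cluster = 'cluster1', role = 'www-data',
-              task = hello_world_task, service = True)
-
-    job = Service(cluster = 'cluster1', role = 'www-data',
-                  task = hello_world_task)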
-
-### UpdateConfig Objects
-
-Parameters for controlling the rate and policy of rolling updates.
-
-<table border=1>
-  <tbody>
-    <tr>
-      <td><code>batch_size</code></td>
-      <td>Integer</td>
-      <td>Maximum number of shards to be updated in one iteration (Default: 1)</td>
-    </tr>
-    <tr>
-      <td><code>restart_threshold</code></td>
-      <td>Integer</td>
-      <td>Maximum number of seconds a shard is given to move into the <code>RUNNING</code> state before it is considered a failure (Default: 60)</td>
-    </tr>
-    <tr>
-      <td><code>watch_secs</code></td>
-      <td>Integer</td>
-      <td>Minimum number of seconds a shard must remain in the <code>RUNNING</code> state before it is considered a success (Default: 30)</td>
-    </tr>
-    <tr>
-      <td><code>max_per_shard_failures</code></td>
-      <td>Integer</td>
-      <td>Maximum number of restarts per shard during update. Increments total failure count when this limit is exceeded. (Default: 0)</td>
-    </tr>
-    <tr>
-      <td><code>max_total_failures</code></td>
-      <td>Integer</td>
-      <td>Maximum number of shard failures to be tolerated in total during an update. Cannot be greater than or equal to the total number of tasks in a job. (Default: 0)</td>
-    </tr>
-  </tbody>
-</table>
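-
-For example, a sketch of an update policy that rolls five shards at a time
-and tolerates one restart per shard (the job fields shown are illustrative):
-
-    update_config = UpdateConfig(
-      batch_size = 5,
-      restart_threshold = 60,
-      watch_secs = 45,
-      max_per_shard_failures = 1)
-
-    job = Service(cluster = 'cluster1', role = 'www-data',
-                  task = hello_world_task, update_config = update_config)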
-
-### HealthCheckConfig Objects
-
-Parameters for controlling a task's health checks via HTTP.
-
-<table border=1>
-  <tbody>
-    <tr>
-      <td><code>initial_interval_secs</code></td>
-      <td>Integer</td>
-      <td>Initial delay for performing an HTTP health check. (Default: 60)</td>
-    </tr>
-    <tr>
-      <td><code>interval_secs</code></td>
-      <td>Integer</td>
-      <td>Interval on which to check the task's health via HTTP. (Default: 30)</td>
-    </tr>
-    <tr>
-      <td><code>timeout_secs</code></td>
-      <td>Integer</td>
-      <td>HTTP request timeout. (Default: 1)</td>
-    </tr>
-    <tr>
-      <td><code>max_consecutive_failures</code></td>
-      <td>Integer</td>
-      <td>Maximum number of consecutive health check failures tolerated before considering a task unhealthy (Default: 0)</td>
-     </tr>
-    </tbody>
-</table>
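-
-For example, a sketch that gives a slow-starting service extra time before
-its first health check and tolerates three consecutive failures (values are
-illustrative):
-
-    job = Service(cluster = 'cluster1', role = 'www-data',
-                  task = hello_world_task,
-                  health_check_config = HealthCheckConfig(
-                    initial_interval_secs = 120,
-                    max_consecutive_failures = 3))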
-
-Specifying Scheduling Constraints
-=================================
-
-Most users will not need to specify constraints explicitly, as the
-scheduler automatically inserts reasonable defaults that attempt to
-ensure reliability without impacting schedulability. For example, the
-scheduler inserts a `host: limit:1` constraint, ensuring
-that your shards run on different physical machines. Please do not
-set this field unless you are sure of what you are doing.
-
-In the `Job` object there is a map `constraints` from String to String
-allowing the user to tailor the schedulability of tasks within the job.
-
-Each slave in the cluster is assigned a set of string-valued
-key/value pairs called attributes. For example, consider the host
-`cluster1-aaa-03-sr2` and its following attributes (given in key:value
-format): `host:cluster1-aaa-03-sr2` and `rack:aaa`.
-
-The constraint map's key is the name of the attribute on which we
-constrain Tasks within our Job. The value specifies how we constrain them.
-There are two types of constraints: *limit constraints* and *value
-constraints*.
-
-<table border=1>
-  <tbody>
-    <tr>
-      <td>Limit Constraint</td>
-      <td>A string that specifies a limit for a constraint. Starts with <code>'limit:</code> followed by an Integer and closing single quote, such as <code>'limit:1'</code>.</td>
-    </tr>
-    <tr>
-      <td>Value Constraint</td>
-      <td>A string that specifies a value for a constraint. To include a list of values, separate the values using commas. To negate the values of a constraint, start with a <code>!</code>.</td>
-    </tr>
-  </tbody>
-</table>
-
-You can also control machine diversity using constraints. The below
-constraint ensures that no more than two instances of your job may run
-on a single host. Think of this as a "group by" limit.
-
-    constraints = {
-      'host': 'limit:2',
-    }
-
-Likewise, you can use constraints to control rack diversity, e.g. at
-most one task per rack:
-
-    constraints = {
-      'rack': 'limit:1',
-    }
-
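-A value constraint, by contrast, pins tasks to hosts whose attribute matches
-(or, when negated with `!`, does not match) the given value. For example, the
-following sketch keeps tasks off rack `aaa` (the rack name is illustrative):
-
-    constraints = {
-      'rack': '!aaa',
-    }
-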
-Use these constraints sparingly as they can dramatically reduce Tasks' schedulability.
-
-Template Namespaces
-===================
-
-Currently, a few Pystachio namespaces have special semantics. Using them
-in your configuration allows you to tailor application behavior
-through environment introspection, or to interact in special ways with the
-Aurora client and Aurora-provided services.
-
-### mesos Namespace
-
-The `mesos` namespace contains the `instance` variable that can be used
-to distinguish between Task replicas.
-
-<table border=1>
-  <tbody>
-    <tr>
-      <td><code>instance</code></td>
-      <td>Integer</td>
-      <td>The instance number of the created task. A job with 5 replicas has instance numbers 0, 1, 2, 3, and 4.</td>
-    </tr>
-  </tbody>
-</table>
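-
-For example, a sketch of a Process that passes its instance number on the
-command line (the jar name is hypothetical):
-
-    process = Process(
-      name = 'webservice',
-      cmdline = 'java -jar webservice.jar --shard={{mesos.instance}}')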
-
-### thermos Namespace
-
-The `thermos` namespace contains variables that work directly on the
-Thermos platform in addition to Aurora. This namespace is fully
-compatible with Tasks invoked via the `thermos` CLI.
-
-<table border=1>
-  <tbody>
-    <tr>
-      <td><code>ports</code></td>
-      <td>map of string to Integer</td>
-      <td>A map of names to port numbers</td>
-    </tr>
-    <tr>
-      <td><code>task_id</code></td>
-      <td>string</td>
-      <td>The task ID assigned to this task.</td>
-    </tr>
-  </tbody>
-</table>
-
-The `thermos.ports` namespace is automatically populated by Aurora when
-invoking tasks on Mesos. When running the `thermos` command directly,
-these ports must be explicitly mapped with the `-P` option.
-
-For example, if `{{thermos.ports[http]}}` is specified in a `Process`
-configuration, it is automatically extracted and populated by
-Aurora, but must be mapped explicitly, for example with `thermos -P http:12345`
-to bind `http` to port 12345, when running via the CLI.
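-
-For example, a sketch of a Process that binds to an Aurora-assigned port
-(the jar name is hypothetical):
-
-    http_process = Process(
-      name = 'http',
-      cmdline = 'java -jar webservice.jar --port={{thermos.ports[http]}}')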
-
-Basic Examples
-==============
-
-These are provided to give a basic understanding of simple Aurora jobs.
-
-### hello_world.aurora
-
-Put the following in a file named `hello_world.aurora`, substituting your own
-values where needed (such as the `cluster` name).
-
-    import os
-
-    hello_world_process = Process(name = 'hello_world', cmdline = 'echo hello world')
-
-    hello_world_task = Task(
-      resources = Resources(cpu = 0.1, ram = 16 * MB, disk = 16 * MB),
-      processes = [hello_world_process])
-
-    hello_world_job = Job(
-      cluster = 'cluster1',
-      role = os.getenv('USER'),
-      environment = 'test',
-      task = hello_world_task)
-
-    jobs = [hello_world_job]
-
-Then issue the following commands to create and kill the job, using your own values for the job key.
-
-    aurora create cluster1/$USER/test/hello_world hello_world.aurora
-
-    aurora kill cluster1/$USER/test/hello_world
-
-### Environment Tailoring
-
-#### hello_world_productionized.aurora
-
-Put the following in a file named `hello_world_productionized.aurora`,
-substituting your own values where needed (such as the `cluster` names).
-
-    include('hello_world.aurora')
-
-    production_resources = Resources(cpu = 1.0, ram = 512 * MB, disk = 2 * GB)
-    staging_resources = Resources(cpu = 0.1, ram = 32 * MB, disk = 512 * MB)
-    hello_world_template = hello_world_job(
-        name = "hello_world-{{cluster}}",
-        task = hello_world_task(resources = production_resources))
-
-    jobs = [
-      # production jobs
-      hello_world_template(cluster = 'cluster1', instances = 25),
-      hello_world_template(cluster = 'cluster2', instances = 15),
-
-      # staging jobs
-      hello_world_template(
-        cluster = 'local',
-        instances = 1,
-        task = hello_world_task(resources = staging_resources)),
-    ]
-
-Then issue the following commands to create and kill the job, using your own values for the job key.
-
-    aurora create cluster1/$USER/test/hello_world-cluster1 hello_world_productionized.aurora
-
-    aurora kill cluster1/$USER/test/hello_world-cluster1