Posted to commits@aurora.apache.org by ke...@apache.org on 2014/01/17 02:44:43 UTC

[3/3] git commit: Initial import of Aurora documentation.

Initial import of Aurora documentation.

Bugs closed: AURORA-47

Reviewed at https://reviews.apache.org/r/16615/


Project: http://git-wip-us.apache.org/repos/asf/incubator-aurora/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-aurora/commit/915977e4
Tree: http://git-wip-us.apache.org/repos/asf/incubator-aurora/tree/915977e4
Diff: http://git-wip-us.apache.org/repos/asf/incubator-aurora/diff/915977e4

Branch: refs/heads/master
Commit: 915977e4978a527c0133cb6f1fba350d11ce464c
Parents: 35fcc54
Author: Tom Galloway <tg...@twitter.com>
Authored: Thu Jan 16 17:36:24 2014 -0800
Committer: Kevin Sweeney <ke...@apache.org>
Committed: Thu Jan 16 17:43:35 2014 -0800

----------------------------------------------------------------------
 docs/README.md                   |   20 +
 docs/clientcommands.md           |  457 +++++++++++++
 docs/configurationreference.md   |  718 +++++++++++++++++++++
 docs/configurationtutorial.md    | 1142 +++++++++++++++++++++++++++++++++
 docs/hooks.md                    |  264 ++++++++
 docs/images/CPUavailability.png  |  Bin 0 -> 26371 bytes
 docs/images/HelloWorldJob.png    |  Bin 0 -> 44392 bytes
 docs/images/RoleJobs.png         |  Bin 0 -> 59734 bytes
 docs/images/ScheduledJobs.png    |  Bin 0 -> 40758 bytes
 docs/images/TaskBreakdown.png    |  Bin 0 -> 89794 bytes
 docs/images/aurora_hierarchy.png |  Bin 0 -> 40856 bytes
 docs/images/killedtask.png       |  Bin 0 -> 73312 bytes
 docs/images/lifeofatask.png      |  Bin 0 -> 62645 bytes
 docs/images/runningtask.png      |  Bin 0 -> 58821 bytes
 docs/images/stderr.png           |  Bin 0 -> 16176 bytes
 docs/images/stdout.png           |  Bin 0 -> 32941 bytes
 docs/resourceisolation.md        |  149 +++++
 docs/tutorial.md                 |  251 ++++++++
 docs/userguide.md                |  289 +++++++++
 19 files changed, 3290 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/915977e4/docs/README.md
----------------------------------------------------------------------
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000..b670c04
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,20 @@
+# Overview
+
+*Aurora* is a service scheduler that schedules jobs onto *Mesos*, which runs the resulting tasks on machines in a specified cluster. Typical services consist of up to hundreds of task replicas.
+
+Aurora provides a *Job* abstraction consisting of a *Task* template and instructions for creating near-identical replicas of that Task (modulo things like "instance id" or specific port numbers which may differ from machine to machine).
+
+*Terminology Note*: *Replicas* are also referred to as *shards* and *instances*. While there is a general desire to move to using "instances", "shard" is still found in commands and help strings.
+
+Typically a Task is a single *Process* corresponding to a single command line, such as `python2.6 my_script.py`. However, sometimes you must colocate separate Processes together within a single Task, which runs within a single container and `chroot`, often referred to as a "sandbox". For example, you might run multiple cooperating agents together, such as `logrotate`, `installer`, and master or slave processes. *Thermos* provides a Process abstraction underneath Mesos Tasks.
+
+To use and get up to speed on Aurora, you should read the docs in this directory in the following order:
+
+1. How to [deploy Aurora](deploying-aurora-scheduler.md) or, how to [install Aurora on virtual machines on your local machine](vagrant.md) (the Tutorial uses the virtual machine approach).
+2. As a user, get started quickly with a [Tutorial](tutorial.md).
+3. For an overview of Aurora's process flow under the hood, see the [User Guide](userguide.md).
+4. To learn how to write a configuration file, look at our [Configuration Tutorial](configurationtutorial.md). From there, look at the [Aurora + Thermos Reference](configurationreference.md).
+5. Then read up on the [Aurora Command Line Client](clientcommands.md).
+6. Find out general information and useful tips about how Aurora does [Resource Isolation](resourceisolation.md).
+
+To contact the Aurora Developer List, email [dev@aurora.incubator.apache.org](mailto:dev@aurora.incubator.apache.org). You may want to read the list [archives](http://mail-archives.apache.org/mod_mbox/incubator-aurora-dev/). You can also use the IRC channel `#aurora` on `irc.freenode.net`.

http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/915977e4/docs/clientcommands.md
----------------------------------------------------------------------
diff --git a/docs/clientcommands.md b/docs/clientcommands.md
new file mode 100644
index 0000000..1470883
--- /dev/null
+++ b/docs/clientcommands.md
@@ -0,0 +1,457 @@
+Aurora Client Commands
+======================
+
+The most up-to-date reference is in the client itself: use the
+`aurora help` subcommand (for example, `aurora help` or
+`aurora help create`) to find the latest information on parameters and
+functionality. Note that `aurora help open` does not work, due to underlying issues with
+reflection.
+
+[Introduction](#Introduction)
+[Job Keys](#Job_Keys)
+[Modifying Aurora Client Commands](#Modifying)
+[Regular Jobs](#Regular)
+&nbsp;&nbsp;&nbsp;&nbsp;[Creating and Running a Job](#Creating)
+&nbsp;&nbsp;&nbsp;&nbsp;[Running a Command On a Running Job](#Running)
+&nbsp;&nbsp;&nbsp;&nbsp;[Killing a Job](#Killing)
+&nbsp;&nbsp;&nbsp;&nbsp;[Updating a Job](#Updating)
+&nbsp;&nbsp;&nbsp;&nbsp;[Renaming a Job](#Renaming)
+&nbsp;&nbsp;&nbsp;&nbsp;[Restarting Jobs](#Restarting)
+[Cron Jobs](#Cron)
+[Comparing Jobs](#Comparing)
+[Viewing/Examining Jobs](#Viewing)
+&nbsp;&nbsp;&nbsp;&nbsp;[Listing Jobs](#Listing)
+&nbsp;&nbsp;&nbsp;&nbsp;[Inspecting a Job](#Inspecting)
+&nbsp;&nbsp;&nbsp;&nbsp;[Versions](#Versions)
+&nbsp;&nbsp;&nbsp;&nbsp;[Checking Your Quota](#Checking)
+&nbsp;&nbsp;&nbsp;&nbsp;[Finding a Job on Web UI](#Finding)
+&nbsp;&nbsp;&nbsp;&nbsp;[Getting Job Status](#Status)
+&nbsp;&nbsp;&nbsp;&nbsp;[Opening the Web UI](#Opening)
+&nbsp;&nbsp;&nbsp;&nbsp;[SSHing to a Specific Task Machine](#SSHing)
+&nbsp;&nbsp;&nbsp;&nbsp;[Templating Command Arguments](#Templating)
+
+<a name="Introduction"></a>Introduction
+---------------------------------------
+
+Once you have written an `.aurora` configuration file that describes
+your Job and its parameters and functionality, you interact with Aurora
+using Aurora Client commands. This document describes all of these commands
+and how and when to use them. All Aurora Client commands start with
+`aurora`, followed by the name of the specific command and its
+arguments.
+
+*Job keys* are a very common argument to Aurora commands, as well as the
+gateway to useful information about a Job. Before using Aurora, you
+should read the next section which describes them in detail. The section
+after that briefly describes how you can modify the behavior of certain
+Aurora Client commands, linking to a detailed document about how to do
+that.
+
+This is followed by the Regular Jobs section, which describes the basic
+Client commands for creating, running, and manipulating Aurora Jobs.
+After that are sections on Comparing Jobs and Viewing/Examining Jobs. In
+other words, various commands for getting information and metadata about
+Aurora Jobs.
+
+<a name-"Job_Keys"></a>Job Keys
+-------------------------------
+
+A job key is a unique system-wide identifier for an Aurora-managed
+Job, for example `cluster1/web-team/test/experiment204`. It is a 4-tuple
+consisting of, in order, *cluster*, *role*, *environment*, and
+*jobname*, separated by slashes. Cluster is the name of an Aurora
+cluster. Role is the Unix service account under which the Job
+runs. Environment is a namespace component like `devel`, `test`,
+`prod`, or `stagingN`. Jobname is the Job's name.
+
+The combination of all four values uniquely specifies the Job. If any
+one value differs from that of another job key, the two job keys
+refer to different Jobs. For example, the job keys
+`cluster1/tyg/prod/workhorse`, `cluster1/tyg/prod/workcamel`,
+`cluster2/tyg/prod/workhorse`, `cluster2/foo/prod/workhorse`, and
+`cluster1/tyg/test/workhorse` all refer to different Jobs.
+
+The list of available clusters is defined in
+`/etc/aurora/clusters.json` or `~/.clusters.json`.
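+
+For illustration, a minimal `clusters.json` entry might look like the
+following (the names and keys here are hypothetical; the exact set of
+keys depends on your deployment):
+
+        [
+          {
+            "name": "cluster1",
+            "zk": "zookeeper.example.com",
+            "scheduler_zk_path": "/aurora/scheduler"
+          }
+        ]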
+
+Role names are user accounts existing on the slave machines. If you don't know what accounts
+are available, contact your sysadmin.
+
+Environment names are namespaces; you can count on `prod`, `devel` and `test` existing.
+
+<a name="Modifying"></a>Modifying Aurora Client Commands
+--------------------------------------------------------
+
+For certain Aurora Client commands, you can define hook methods that run
+either before or after an action that takes place during the command's
+execution, including hooks that run only if the action succeeds or
+fails. In short, a hook is code that lets you extend the
+command's actions. The hook executes on the client side, specifically on
+the machine executing Aurora commands.
+
+Hooks can be associated with these Aurora Client commands:
+
+  - `cancel_update`
+  - `create`
+  - `kill`
+  - `restart`
+  - `update`
+
+The process for writing and activating them is complex enough
+that we explain it in a devoted document, [Hooks for Aurora Client API](hooks.md).
+
+<a name="Regular"></a>Regular Jobs
+----------------------------------
+
+This section covers Aurora commands related to running, killing,
+renaming, updating, and restarting a basic Aurora Job.
+
+### <a name="Creating"></a>Creating and Running a Job
+
+> `aurora create <job key> <configuration file>`
+
+Creates and then runs a Job with the specified job key based on a `.aurora` configuration file.
+The configuration file may also contain and activate hook definitions.
+
+`create` can take four named parameters:
+
+-   `-E NAME=VALUE` Bind a Thermos mustache variable name to a
+    value. Multiple flags specify multiple values. Defaults to `[]`.
+-   ` -o, --open_browser` Open a browser window to the scheduler UI Job
+    page after a job changing operation happens. When `False`, the Job
+    URL prints on the console and the user has to copy/paste it
+    manually. Defaults to `False`. Does not work when running in Vagrant.
+-   ` -j, --json` If specified, the configuration argument is read as a
+    string in JSON format. Defaults to `False`.
+-   ` --wait_until=STATE` Block the client until all the Tasks have
+    transitioned into the requested state. Possible values are: `PENDING`,
+    `RUNNING`, `FINISHED`. Default: `PENDING`
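+
+For example, to create a job from a configuration file and block until
+its Tasks are running (the job key and file name here are illustrative):
+
+        aurora create --wait_until=RUNNING cluster1/$USER/devel/hello_world hello_world.aurora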
+
+### <a name="Running></a>Running a Command On a Running Job
+
+> `aurora run <job_key> <cmd>`
+
+Runs a shell command on all machines currently hosting shards of a
+single Job.
+
+`run` supports the same command line wildcards used to populate a Job's
+commands; i.e. anything in the `{{mesos.*}}` and `{{thermos.*}}`
+namespaces.
+
+`run` can take three named parameters:
+
+-   `-t NUM_THREADS`, `--threads=NUM_THREADS` The number of threads to
+    use, defaulting to `1`.
+-   `--user=SSH_USER` `ssh` as this user instead of as the given role
+    value. Defaults to `None`.
+-   `-e, --executor_sandbox` Run the command in the executor sandbox
+    instead of the Task sandbox. Defaults to `False`.
+
+### <a name="Killing"></a>Killing a Job
+
+> `aurora kill <job key> <configuration file>`
+
+Kills all Tasks associated with the specified Job, blocking until all
+are terminated. Defaults to killing all shards in the Job.
+
+The `<configuration file>` argument for `kill` is optional. Use it only
+if it contains hook definitions and activations that affect the
+kill command.
+
+`kill` can take two named parameters:
+
+-   `-o, --open_browser` Open a browser window to the scheduler UI Job
+    page after a job changing operation happens. When `False`, the Job
+    URL prints on the console and the user has to copy/paste it
+    manually. Defaults to `False`. Does not work when running in Vagrant.
+-   `--shards=SHARDS` A list of shard ids to act on. Can either be a
+    comma-separated list (e.g. 0,1,2) or a range (e.g. 0-2) or  any
+    combination of the two (e.g. 0-2,5,7-9). Defaults to acting on all
+    shards.
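+
+For example, to kill only shards 0 through 2 and shard 5 of a Job (an
+illustrative job key):
+
+        aurora kill --shards=0-2,5 cluster1/$USER/devel/hello_world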
+
+### <a name="Updating"></a>Updating a Job
+
+> `aurora update [--shards=ids] <job key> <configuration file>`
+>
+> `aurora cancel_update <job key> <configuration file>`
+
+Given a running job, does a rolling update to reflect a new
+configuration version. Only updates Tasks in the Job with a changed
+configuration. You can further restrict the set of Tasks operated on by
+using `--shards` and specifying a comma-separated list of job shard ids.
+
+You may want to run `aurora diff` beforehand to validate which Tasks
+have different configurations.
+
+A Job being updated is locked to ensure the update finishes without
+disruption. If the update terminates abnormally, the lock may remain
+and cause subsequent update attempts to fail.
+`aurora cancel_update` unlocks the Job specified by
+its `job_key` argument. Be sure you don't issue `cancel_update` when
+another user is working with the specified Job.
+
+The `<configuration file>` argument for `cancel_update` is optional. Use
+it only if it contains hook definitions and activations that affect the
+`cancel_update` command. The `<configuration file>` argument for
+`update` is required, but in addition to a new configuration it can be
+used to define and activate hooks for `update`.
+
+`update` can take four named parameters:
+
+-   `--shards=SHARDS` A list of shard ids to update. Can either be a
+    comma-separated list (e.g. 0,1,2) or a range (e.g. 0-2) or  any
+    combination of the two (e.g. 0-2,5,7-9). If not  set, all shards are
+    acted on. Defaults to None.
+-   `-E NAME=VALUE` Binds a Thermos mustache variable name to a value.
+    Use multiple flags to specify multiple values. Defaults to `[]`.
+-   `-j, --json` If specified, configuration is read in JSON format.
+    Defaults to `False`.
+-   `--updater_health_check_interval_seconds=HEALTH_CHECK_INTERVAL_SECONDS`
+    Time interval between subsequent shard status checks. Defaults to
+    `3`.
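+
+For example, to roll a new configuration out to shards 0 through 4 only
+(again, an illustrative job key):
+
+        aurora update --shards=0-4 cluster1/$USER/devel/hello_world hello_world.aurora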
+
+### <a name="Renaming"></a>Renaming a Job
+
+Renaming is a tricky operation as downstream clients must be informed of
+the new name. A conservative approach
+to renaming suitable for production services is:
+
+1.  Modify the Aurora configuration file to change the role,
+    environment, and/or name as appropriate to the standardized naming
+    scheme.
+2.  Check that only these naming components have changed
+    with `aurora diff`.
+
+        aurora diff <job_key> <job_configuration>
+
+3.  Create the (identical) job at the new key. You may need to request a
+    temporary quota increase.
+
+        aurora create <new_job_key> <job_configuration>
+
+4.  Migrate all clients over to the new job key. Update all links and
+    dashboards. Ensure that both job keys run identical versions of the
+    code while in this state.
+5.  After verifying that all clients have successfully moved over, kill
+    the old job.
+
+        aurora kill <old_job_key>
+
+6.  If you received a temporary quota increase, be sure to let the
+    powers that be know you no longer need the additional capacity.
+
+### <a name="Restarting"></a>Restarting Jobs
+
+`restart` restarts all shards of the Job identified by the job key:
+
+        aurora restart <job_key> <configuration file>
+
+Restarts are controlled on the client side, so aborting
+the `restart` command halts the restart operation.
+
+`restart` does a rolling restart. You almost always want this, but
+not when all shards of a service are misbehaving and
+completely dysfunctional. To avoid a rolling restart, use
+the `--shards` option described below.
+
+**Note**: `restart` only applies its command line arguments; it neither
+uses nor is affected by `update.config`. Restarting
+does ***not*** involve a configuration change. To update the
+configuration, use `update.config`.
+
+The `<configuration file>` argument for restart is optional. Use it only
+if it contains hook definitions and activations that affect the
+`restart` command.
+
+In addition to the required job key argument, there are eight
+`restart` specific optional arguments:
+
+-   `--updater_health_check_interval_seconds`: Defaults to `3`, the time
+    interval between subsequent shard status checks.
+-   `--shards=SHARDS`: Defaults to None, which restarts all shards.
+    Otherwise, only the specified-by-id shards restart. They can be
+    comma-separated `(0, 8, 9)`, a range `(3-5)` or a
+    combination `(0, 3-5, 8, 9-11)`.
+-   `--batch_size`: Defaults to `1`, the number of shards to be started
+    in one iteration. So, for example, for value 3, it tries to restart
+    the first three shards specified by `--shards` simultaneously, then
+    the next three, and so on.
+-   `--max_per_shard_failures=MAX_PER_SHARD_FAILURES`: Defaults to `0`,
+    the maximum number of restarts per shard during restart. When
+    exceeded, it increments the total failure count.
+-   `--max_total_failures=MAX_TOTAL_FAILURES`: Defaults to `0`, the
+    maximum total number of shard failures tolerated during restart.
+-   `-o, --open_browser` Open a browser window to the scheduler UI Job
+    page after a job changing operation happens. When `False`, the Job
+    URL prints on the console and the user has to copy/paste it
+    manually. Defaults to `False`. Does not work when running in Vagrant.
+-   `--restart_threshold`: Defaults to `60`, the maximum number of
+    seconds a shard has to move into the `RUNNING` state before
+    it's considered a failure.
+-   `--watch_secs`: Defaults to `30`, the minimum number of seconds a
+    shard must remain in `RUNNING` state before considered a success.
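+
+For example, the following sketch restarts shards 0 through 9 two at a
+time, requiring each shard to stay `RUNNING` for 45 seconds before its
+batch counts as a success (the job key is illustrative):
+
+        aurora restart --shards=0-9 --batch_size=2 --watch_secs=45 cluster1/$USER/devel/hello_world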
+
+<a name="Cron"></a>Cron Jobs
+----------------------------
+
+You will see various commands and options relating to cron jobs in
+`aurora help` and similar. Ignore them, as they're not yet implemented.
+You might be able to use them without causing an error, but nothing happens
+if you do.
+
+<a name="Comparing"></a>Comparing Jobs
+--------------------------------------
+
+> `aurora diff <job_key> config`
+
+Compares a job configuration against a running job. By default the diff
+is determined using `diff`, though you may choose an alternate
+diff program by specifying the `DIFF_VIEWER` environment variable.
+
+There are two named parameters:
+
+-   `-E NAME=VALUE` Bind a Thermos mustache variable name to a
+    value. Multiple flags may be used to specify multiple values.
+    Defaults to `[]`.
+-   `-j, --json` Read the configuration argument in JSON format.
+    Defaults to `False`.
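+
+For example, assuming `vimdiff` is installed, you can view the
+differences side by side (the job key and file are illustrative):
+
+        DIFF_VIEWER=vimdiff aurora diff cluster1/$USER/devel/hello_world hello_world.aurora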
+
+<a name="Viewing"></a>Viewing/Examining Jobs
+--------------------------------------------
+
+Above we discussed creating, killing, and updating Jobs. Here we discuss
+how to view and examine Jobs.
+
+### <a name="Listing"></a>Listing Jobs
+
+> `aurora list_jobs cluster/role`
+
+Lists all Jobs registered with the Aurora scheduler in the named cluster for the named role.
+
+It has two named parameters:
+
+-   `--pretty`: Displays job information in prettyprinted format.
+    Defaults to `False`.
+-   `-c`, `--show-cron`: Shows cron schedule for jobs. Defaults to
+    `False`. Do not use, as it's not yet implemented.
+
+### <a name="Inspecting"></a>Inspecting a Job
+
+> `aurora inspect <job_key> config`
+
+`inspect` verifies that its specified job can be parsed from a
+configuration file, and displays the parsed configuration. It has four
+named parameters:
+
+-   `--local`: Inspect the configuration that the `spawn` command would
+    create, defaulting to `False`.
+-   `--raw`: Shows the raw configuration. Defaults to `False`.
+-   `-j`, `--json`: If specified, configuration is read in JSON format.
+    Defaults to `False`.
+-   `-E NAME=VALUE`: Bind a Thermos Mustache variable name to a value.
+    You can use multiple flags to specify multiple values. Defaults
+    to `[]`.
+
+### <a name="Versions></a>Versions
+
+> `aurora version`
+
+Lists client build information and what Aurora API version it supports.
+
+### <a name="Checking"></a>Checking Your Quota
+
+> `aurora get_quota --cluster=CLUSTER role`
+
+Prints the production quota allocated to the role in the given
+cluster.
+
+### <a name="Finding"></a>Finding a Job on Web UI
+
+When you create a job, part of the output response contains a URL that goes
+to the job's scheduler UI page. For example:
+
+        vagrant@precise64:~$ aurora create example/www-data/prod/hello /vagrant/examples/jobs/hello_world.aurora
+        INFO] Creating job hello
+        INFO] Response from scheduler: OK (message: 1 new tasks pending for job www-data/prod/hello)
+        INFO] Job url: http://precise64:8081/scheduler/www-data/prod/hello
+
+You can go to the scheduler UI page for this job at `http://precise64:8081/scheduler/www-data/prod/hello`.
+You can go to the overall scheduler UI page by truncating that URL after `scheduler`: `http://precise64:8081/scheduler`.
+
+Jobs are arranged by role, typically a service account for
+production jobs and user accounts for test or development jobs.
+Once you click through to a role page, you see its Jobs grouped
+separately into pending, active, and finished Jobs.
+
+### <a name="Status"></a>Getting Job Status
+
+> `aurora status <job_key>     `
+
+Returns the status of recent tasks associated with the Job specified by
+`job_key`, in that Job's cluster. Typically this includes
+a mix of active tasks (running or assigned) and inactive tasks
+(successful, failed, and lost.)
+
+### <a name="Opening"></a>Opening the Web UI
+
+Use the Job's web UI scheduler URL or the `aurora status` command to find out on which
+machines individual tasks are scheduled. You can open the web UI via the
+`open` command when invoked from your machine:
+
+> `aurora open [<cluster>[/<role>[/<env>/<job_name>]]]`
+
+If only the cluster is specified, it goes directly to that cluster's
+scheduler main page. If the role is specified, it goes to the top-level
+role page. If the full job key is specified, it goes directly to the job
+page where you can inspect individual tasks.
+
+### <a name="SSHing"></a>SSHing to a Specific Task Machine
+
+> `aurora ssh <job_key> <shard number>`
+
+You can have the Aurora client ssh directly to the machine that has been
+assigned a particular Job/shard number. This may be useful for quickly
+diagnosing issues such as performance issues or abnormal behavior on a
+particular machine.
+
+It can take three named parameters:
+
+-   `-e`, `--executor_sandbox`: Run `ssh` in the executor sandbox
+    instead of the task sandbox. Defaults to `False`.
+-   `--user=SSH_USER`: `ssh` as the given user instead of as the role in
+    the `job_key` argument. Defaults to `None`.
+-   `-L PORT:NAME`: Add tunnel from local port `PORT` to the remote
+    named port `NAME`. Defaults to `[]`.
+
+### <a name="Templating"></a>Templating Command Arguments
+
+> `aurora run [-e] [-t THREADS] <job_key> -- <<command-line>> `
+
+Given a job specification, run the supplied command on all hosts and
+return the output. You may use the standard Mustache templating rules:
+
+-   `{{thermos.ports[name]}}` substitutes the specific named port of the
+    task assigned to this machine
+-   `{{mesos.instance}}` substitutes the shard id of the job's task
+    assigned to this machine
+-   `{{thermos.task_id}}` substitutes the task id of the job's task
+    assigned to this machine
+
+For example, the following type of pattern can be a powerful diagnostic
+tool:
+
+        aurora run -t5 cluster1/tyg/devel/seizure -- \
+          'curl -s -m1 localhost:{{thermos.ports[http]}}/vars | grep uptime'
+
+By default, the command runs in the Task's sandbox. The `-e` option can
+run the command in the executor's sandbox. This is mostly useful for
+Aurora administrators.
+
+You can parallelize the runs by using the `-t` option.

http://git-wip-us.apache.org/repos/asf/incubator-aurora/blob/915977e4/docs/configurationreference.md
----------------------------------------------------------------------
diff --git a/docs/configurationreference.md b/docs/configurationreference.md
new file mode 100644
index 0000000..6fb6b20
--- /dev/null
+++ b/docs/configurationreference.md
@@ -0,0 +1,718 @@
+Aurora + Thermos Configuration Reference
+========================================
+
+
+[Introduction](#Introduction)
+[Process Schema](#ProcessSchema)
+&nbsp;&nbsp;&nbsp;&nbsp;[`Process` Objects](#ProcessObject)
+[Task Schema](#TaskSchema)
+&nbsp;&nbsp;&nbsp;&nbsp;[`Task` Object](#TaskObject)
+&nbsp;&nbsp;&nbsp;&nbsp;[`Constraint` Object](#ConstraintObject)
+&nbsp;&nbsp;&nbsp;&nbsp;[`Resource` Object](#ResourceObject)
+[Job Schema](#JobSchema)
+&nbsp;&nbsp;&nbsp;&nbsp;[`Job` Objects](#JobObject)
+&nbsp;&nbsp;&nbsp;&nbsp;[Services](#Services)
+&nbsp;&nbsp;&nbsp;&nbsp;[`UpdateConfig` Objects](#UpdateConfigObjects)
+&nbsp;&nbsp;&nbsp;&nbsp;[`HealthCheckConfig` Objects](#HealthCheckConfigObject)
+[Specifying Scheduling Constraints](#SchedulingConstraints)
+[Template Namespaces](#TemplateNamespaces)
+&nbsp;&nbsp;&nbsp;&nbsp;[`mesos` Namespace](#mesosNamespace)
+&nbsp;&nbsp;&nbsp;&nbsp;[`thermos` Namespace](#thermosNamespace)
+[Basic Examples](#BasicExamples)
+&nbsp;&nbsp;&nbsp;&nbsp;[`hello_world.aurora`](#hello_world.aurora)
+&nbsp;&nbsp;&nbsp;&nbsp;[Environment Tailoring](#EnvironmentTailoring)
+
+<a name="Introduction"></a>Introduction
+=======================================
+
+Don't know where to start? The Aurora configuration schema is very
+powerful, and configurations can become quite complex for advanced use
+cases.
+
+For examples of simple configurations to get something up and running
+quickly, check out the [Tutorial](tutorial.md). When you feel comfortable with the basics, move
+on to the [Configuration Tutorial](configurationtutorial.md) for more in-depth coverage of configuration
+design.
+
+For additional basic configuration examples, see [the end of this document](#BasicExamples).
+
+<a name="Process Schema"></a>Process Schema
+===========================================
+
+Process objects consist of required `name` and `cmdline` attributes. You can customize Process
+behavior with its optional attributes. Remember, Processes are handled by Thermos.
+
+### <a name="ProcessObject"></a> `Process` Objects
+
+<table border=1>
+  <tbody>
+  <tr>
+    <th>Attribute Name</th>
+    <th>Type</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td><code><b>name</b></code></td>
+    <td>String</td>
+    <td>Process name (Required)</td>
+  </tr>
+  <tr>
+    <td><code>cmdline</code></td>
+    <td>String</td>
+    <td>Command line (Required)</td>
+  </tr>
+  <tr>
+    <td><code>max_failures</code></td>
+    <td>Integer</td>
+    <td>Maximum process failures (Default: 1)</td>
+  </tr>
+  <tr>
+    <td><code>daemon</code></td>
+    <td>Boolean</td>
+    <td>When True, this is a daemon process. (Default: False)</td>
+  </tr>
+  <tr>
+    <td><code>ephemeral</code></td>
+    <td>Boolean</td>
+    <td>When True, this is an ephemeral process. (Default: False)</td>
+  </tr>
+  <tr>
+    <td><code>min_duration</code></td>
+    <td>Integer</td>
+    <td>Minimum duration between process restarts in seconds. (Default: 15)</td>
+  </tr>
+  <tr>
+    <td><code>final</code></td>
+    <td>Boolean</td>
+    <td>When True, this process is a finalizing one that should run last. (Default: False)</td>
+  </tr>
+</tbody>
+</table>
+
+#### `name`
+
+The name is any valid UNIX filename string (specifically no
+slashes, NULLs or leading periods). Within a Task object, each Process name
+must be unique.
+
+#### `cmdline`
+
+The command line run by the process. The command line is invoked in a bash
+subshell, so it can involve full-blown bash scripts. However, nothing is
+supplied for command-line arguments so `$*` is unspecified.
+
+#### `max_failures`
+
+The maximum number of failures (non-zero exit statuses) this process can
+have before being marked permanently failed and not retried. If a
+process permanently fails, Thermos looks at the failure limit of the task
+containing the process (usually 1) to determine if the task has
+failed as well.
+
+Setting `max_failures` to 0 makes the process retry
+indefinitely until it achieves a successful (zero) exit status.
+It retries at most once every `min_duration` seconds to prevent
+an effective denial of service attack on the coordinating Thermos scheduler.
+
+#### `daemon`
+
+By default, Thermos processes are non-daemon. If `daemon` is set to True, a
+successful (zero) exit status does not prevent future process runs.
+Instead, the process is reinvoked after `min_duration` seconds.
+However, the maximum failure limit still applies. A combination of
+`daemon=True` and `max_failures=0` causes a process to retry
+indefinitely regardless of exit status. This should be avoided
+for very short-lived processes because of the accumulation of
+checkpointed state for each process run. When running in Mesos
+specifically, `max_failures` is capped at 100.
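+
+For example, a sketch of a daemon Process that reruns a polling script
+at most once every 60 seconds (the script name is hypothetical):
+
+        poller = Process(
+          name = 'poller',
+          cmdline = './poll_once.sh',
+          daemon = True,
+          min_duration = 60)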
+
+#### `ephemeral`
+
+By default, Thermos processes are non-ephemeral. If `ephemeral` is set to
+True, the process' status is not used to determine if its containing task
+has completed. For example, consider a task with a non-ephemeral
+webserver process and an ephemeral logsaver process
+that periodically checkpoints its log files to a centralized data store.
+The task is considered finished once the webserver process has
+completed, regardless of the logsaver's current status.
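+
+The webserver/logsaver example above could be sketched as follows (the
+command lines are placeholders):
+
+        task = Task(
+          name = 'webserver',
+          processes = [
+            Process(name = 'webserver', cmdline = './run_server.sh'),
+            Process(name = 'logsaver', cmdline = './checkpoint_logs.sh',
+                    ephemeral = True)])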
+
+#### `min_duration`
+
+Processes may succeed or fail multiple times during a single task's
+duration. Each of these is called a *process run*. `min_duration` is
+the minimum number of seconds the scheduler waits before running the
+same process.
+
+#### `final`
+
+Processes can be grouped into two classes: ordinary processes and
+finalizing processes. By default, Thermos processes are ordinary. They
+run as long as the task is considered healthy (i.e., no failure
+limits have been reached.) But once all regular Thermos processes
+finish or the task reaches a certain failure threshold, it
+moves into a "finalization" stage and runs all finalizing
+processes. These are typically processes necessary for cleaning up the
+task, such as log checkpointers, or perhaps e-mail notifications that
+the task completed.
+
+Finalizing processes may not depend upon ordinary processes or
+vice-versa; however, finalizing processes may depend upon other
+finalizing processes and otherwise run according to a typical process
+schedule.
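+
+For example, a finalizing Process that archives logs once the task is
+done (the command is a placeholder):
+
+        log_checkpointer = Process(
+          name = 'log_checkpointer',
+          cmdline = './archive_logs.sh',
+          final = True)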
+
+<a name="TaskSchema"></a>Task Schema
+====================================
+
+Tasks fundamentally consist of a `name` and a list of Process objects stored as the
+value of the `processes` attribute. Processes can be further constrained with
+`constraints`. By default, `name`'s value inherits from the first Process in the
+`processes` list, so for simple `Task` objects with one Process, `name`
+can be omitted. In Mesos, `resources` is also required.
+
+### <a name="TaskObject"></a> `Task` Object
+
+<table border=1>
+  <tbody>
+  <tr>
+    <th>param</th>
+    <th>type</th>
+    <th>description</th>
+  </tr>
+  <tr>
+    <td><code><b>name</b></code></td>
+    <td>String</td>
+    <td>Task name (Required) (Default: <code>{{processes[0].name}}</code>)</td>
+  </tr>
+  <tr>
+    <td><code><b>processes</b></code></td>
+    <td>List of <code>Process</code> objects</td>
+    <td>List of <code>Process</code> objects bound to this task. (Required)</td>
+  </tr>
+  <tr>
+    <td><code>constraints</code></td>
+    <td>List of <code>Constraint</code> objects</td>
+    <td>List of <code>Constraint</code> objects constraining processes.</td>
+  </tr>
+  <tr>
+    <td><code><b>resources</b></code></td>
+    <td><code>Resource</code> object</td>
+    <td>Resource footprint. (Required)</td>
+  </tr>
+  <tr>
+    <td><code>max_failures</code></td>
+    <td>Integer</td>
+    <td>Maximum process failures before being considered failed (Default: 1)</td>
+  </tr>
+  <tr>
+    <td><code>max_concurrency</code></td>
+    <td>Integer</td>
+    <td>Maximum number of concurrent processes (Default: 0, unlimited concurrency.)</td>
+  </tr>
+  <tr>
+    <td><code>finalization_wait</code></td>
+    <td>Integer</td>
+    <td>Amount of time allocated for finalizing processes, in seconds. (Default: 30)</td>
+  </tr>
+</tbody>
+</table>
+
+#### `name`
+
+`name` is a string denoting the name of this task. It defaults to the name of the first Process in the list of Processes associated with the `processes` attribute.
+
+#### `processes`
+
+`processes` is an unordered list of `Process` objects. To constrain the order
+in which they run, use `constraints`.
+
+#### `constraints`
+
+A list of `Constraint` objects. Currently it supports only one type,
+the `order` constraint. `order` is a list of process names
+that should run in the order given. For example,
+
+        process = Process(cmdline = "echo hello {{name}}")
+        task = Task(name = "echoes",
+                    processes = [process(name = "jim"), process(name = "bob")],
+                    constraints = [Constraint(order = ["jim", "bob"])])
+
+Constraints can be supplied ad-hoc and in duplicate. Not all
+Processes need be constrained, however Tasks with cycles are
+rejected by the Thermos scheduler.
+
+Use the `order` function as shorthand to generate `Constraint` lists.
+The following:
+
+        order(process1, process2)
+
+is shorthand for
+
+        [Constraint(order = [process1.name(), process2.name()])]
+
+#### `resources`
+
+Takes a `Resource` object, which specifies the amounts of CPU, memory, and disk space resources to allocate to the Task.
+
+#### `max_failures`
+
+`max_failures` is the number of times processes that are part of this
+Task can fail before the entire Task is marked for failure.
+
+For example:
+
+        template = Process(max_failures=10)
+        task = Task(
+          name = "fail",
+          processes = [
+             template(name = "failing", cmdline = "exit 1"),
+             template(name = "succeeding", cmdline = "exit 0")
+          ],
+          max_failures=2)
+
+The `failing` Process could fail 10 times before being marked as
+permanently failed, and the `succeeding` Process would succeed on the
+first run. The task would succeed despite only allowing for two failed
+processes. To be more specific, there would be 10 failed process runs
+yet 1 failed process.
+
+#### `max_concurrency`
+
+For Tasks with a number of expensive but otherwise independent
+processes, you may want to limit the amount of concurrency
+the Thermos scheduler provides rather than artificially constraining
+it via `order` constraints. For example, a test framework may
+generate a task with 100 test run processes, but wants to run it on
+a machine with only 4 cores. You can limit the amount of parallelism to
+4 by setting `max_concurrency=4` in your task configuration.
+
+For example, the following task spawns 180 Processes ("mappers")
+to compute individual elements of a 180 degree sine table, all dependent
+upon one final Process ("reducer") to tabulate the results:
+
+        def make_mapper(id):
+          return Process(
+            name = "mapper%03d" % id,
+            cmdline = "echo 'scale=50;s(%d*4*a(1)/180)' | bc -l > temp.sine_table.%03d"
+                      % (id, id))
+
+        def make_reducer():
+          return Process(name = "reducer",
+                         cmdline = "cat temp.* | nl > sine_table.txt && rm -f temp.*")
+
+        processes = map(make_mapper, range(180))
+
+        task = Task(
+          name = "mapreduce",
+          processes = processes + [make_reducer()],
+          constraints = [Constraint(order = [mapper.name(), 'reducer']) for mapper
+                         in processes],
+          max_concurrency = 8)
+
+#### `finalization_wait`
+
+Tasks have three active stages: `ACTIVE`, `CLEANING`, and `FINALIZING`. The
+`ACTIVE` stage is when ordinary processes run. This stage lasts as
+long as Processes are running and the Task is healthy. The moment either
+all Processes have finished successfully or the Task has reached a
+maximum Process failure limit, it goes into the `CLEANING` stage and sends
+SIGTERMs to all currently running Processes and their process trees.
+Once all Processes have terminated, the Task goes into the `FINALIZING` stage
+and invokes all Processes with the `final` attribute set to True.
+
+This whole process from the end of `ACTIVE` stage to the end of `FINALIZING`
+must happen within `finalization_wait` seconds. If it does not
+finish during that time, all remaining Processes are sent SIGKILLs
+(or if they depend upon uncompleted Processes, are
+never invoked.)
+
+Client applications with higher priority may force a shorter
+finalization wait (e.g. through parameters to `thermos kill`), so this
+is mostly a best-effort signal.
+
+### <a name="ConstraintObject"></a> `Constraint` Object
+
+Constraint objects currently support only a single ordering constraint, `order`,
+which specifies that its processes run sequentially in the order given. By
+default, all processes run in parallel when bound to a `Task` without
+ordering constraints.
+
+<table border=1>
+  <tbody>
+  <tr>
+    <th>param</th>
+    <th>type</th>
+    <th>description</th>
+  </tr>
+  <tr>
+    <td><code><b>order</b></code></td>
+    <td>List of String</td>
+    <td>List of processes by name (String) that should be run serially.</td>
+  </tr>
+  </tbody>
+</table>
+
+### <a name="ResourceObject"></a> `Resource` Object
+
+Specifies the amount of CPU, RAM, and disk resources the task needs. See the [Resource Isolation document](resourceisolation.md) for suggested values and to understand how resources are allocated.
+
+<table border=1>
+  <tbody>
+  <tr>
+    <th>param</th>
+    <th>type</th>
+    <th>description</th>
+  </tr>
+  <tr>
+    <td><code>cpu</code></td>
+    <td>Float</td>
+    <td>Fractional number of cores required by the task.</td>
+  </tr>
+  <tr>
+    <td><code>ram</code></td>
+    <td>Integer</td>
+    <td>Bytes of RAM required by the task.</td>
+  </tr>
+  <tr>
+    <td><code>disk</code></td>
+    <td>Integer</td>
+    <td>Bytes of disk required by the task.</td>
+  </tr>
+  </tbody>
+</table>
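+
+For example, a footprint of half a core, 1 GB of RAM, and 2 GB of disk
+(values here are illustrative) is declared with the `Resources`
+constructor and the `MB`/`GB` unit helpers used in the examples at the
+end of this document:
+
+        resources = Resources(cpu = 0.5, ram = 1 * GB, disk = 2 * GB)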
+
+<a name="JobSchema"></a>Job Schema
+==================================
+
+### <a name="JobObject"></a> `Job` Objects
+
+<table border=1>
+  <tbody>
+    <tr>
+      <th>param</th>
+      <th>type</th>
+      <th>description</th>
+    </tr>
+    <tr>
+      <td><code>task</code></td>
+      <td>Task</td>
+      <td>The Task object to bind to this job. Required.</td>
+    </tr>
+    <tr>
+      <td><code>name</code></td>
+      <td>String</td>
+      <td>Job name. (Default: inherited from the task attribute's name)</td>
+    </tr>
+    <tr>
+      <td><code>role</code></td>
+      <td>String</td>
+      <td>Job role account. Required.</td>
+    </tr>
+    <tr>
+      <td><code>cluster</code></td>
+      <td>String</td>
+      <td>Cluster in which this job is scheduled. Required.</td>
+    </tr>
+    <tr>
+      <td><code>environment</code></td>
+      <td>String</td>
+      <td>Job environment, default <code>devel</code>. Must be one of <code>prod</code>, <code>devel</code>, <code>test</code> or <code>staging&lt;number&gt;</code>.</td>
+    </tr>
+    <tr>
+      <td><code>contact</code></td>
+      <td>String</td>
+      <td>Best email address to reach the owner of the job. For production jobs, this is usually a team mailing list.</td>
+    </tr>
+    <tr>
+      <td><code>instances</code></td>
+      <td>Integer</td>
+      <td>Number of instances (sometimes referred to as replicas or shards) of the task to create. (Default: 1)</td>
+    </tr>
+    <tr>
+      <td><code>cron_schedule</code> <strong>(Present, but not supported and a no-op)</strong></td>
+      <td>String</td>
+      <td>UTC Cron schedule in cron format. May only be used with non-service jobs. Default: None (not a cron job.)</td>
+    </tr>
+    <tr>
+      <td><code>cron_collision_policy</code> <strong>(Present, but not supported and a no-op)</strong></td>
+      <td>String</td>
+      <td>Policy to use when a cron job is triggered while a previous run is still active. <code>KILL_EXISTING</code>: kill the previous run and schedule the new run. <code>CANCEL_NEW</code>: let the previous run continue and cancel the new run. <code>RUN_OVERLAP</code>: let the previous run continue and schedule the new run. (Default: KILL_EXISTING)</td>
+    </tr>
+    <tr>
+      <td><code>update_config</code></td>
+      <td><code>UpdateConfig</code> object</td>
+      <td>Parameters for controlling the rate and policy of rolling updates. </td>
+    </tr>
+    <tr>
+      <td><code>constraints</code></td>
+      <td>dict</td>
+      <td>Scheduling constraints for the tasks. See the section on the <a href="#SchedulingConstraints">constraint specification language</a></td>
+    </tr>
+    <tr>
+      <td><code>service</code></td>
+      <td>Boolean</td>
+      <td>If True, restart tasks regardless of success or failure. (Default: False)</td>
+    </tr>
+    <tr>
+      <td><code>daemon</code></td>
+      <td>Boolean</td>
+      <td>A DEPRECATED alias for "service". (Default: False) </td>
+    </tr>
+    <tr>
+      <td><code>max_task_failures</code></td>
+      <td>Integer</td>
+      <td>Maximum number of failures after which the task is considered to have failed. (Default: 1) Set to -1 to allow for infinite failures.</td>
+    </tr>
+    <tr>
+      <td><code>priority</code></td>
+      <td>Integer</td>
+      <td>Preemption priority to give the task (Default 0). Tasks with higher priorities may preempt tasks at lower priorities.</td>
+    </tr>
+    <tr>
+      <td><code>production</code></td>
+      <td>Boolean</td>
+      <td>Whether or not this is a production task backed by quota. (Default: False) Production jobs may preempt any non-production job, and may only be preempted by production jobs in the same role and of higher priority. To run jobs at this level, the job role must have the appropriate quota.</td>
+    </tr>
+    <tr>
+      <td><code>health_check_config</code></td>
+      <td><code>HealthCheckConfig</code> object</td>
+      <td>Parameters for controlling a task's health checks via HTTP. Only used if a  health port was assigned with a command line wildcard.</td>
+    </tr>
+  </tbody>
+</table>
+
+### <a name="Services"></a> Services
+
+Jobs with the `service` flag set to True are called Services. The `Service`
+alias can be used as shorthand for `Job` with `service=True`.
+Services are differentiated from non-service Jobs in that tasks
+always restart on completion, whether successful or unsuccessful.
+Jobs without the service bit set only restart up to
+`max_task_failures` times and only if they terminated unsuccessfully
+either due to human error or machine failure.
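+
+For example, given some Task `web_task`, the following two declarations
+behave identically (the names here are illustrative):
+
+        web = Job(name = 'web', cluster = 'cluster1', role = 'www-data',
+                  environment = 'prod', task = web_task, service = True)
+
+        web = Service(name = 'web', cluster = 'cluster1', role = 'www-data',
+                      environment = 'prod', task = web_task)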
+
+### <a name="UpdateConfigObjects"></a> `UpdateConfig` Objects
+
+Parameters for controlling the rate and policy of rolling updates.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <td><code>batch_size</code></td>
+      <td>Integer</td>
+      <td>Maximum number of shards to be updated in one iteration (Default: 1)</td>
+    </tr>
+    <tr>
+      <td><code>restart_threshold</code></td>
+      <td>Integer</td>
+      <td>Maximum number of seconds a shard has to move into the <code>RUNNING</code> state before it is considered a failure (Default: 60)</td>
+    </tr>
+    <tr>
+      <td><code>watch_secs</code></td>
+      <td>Integer</td>
+      <td>Minimum number of seconds a shard must remain in <code>RUNNING</code> state before considered a success (Default: 30)</td>
+    </tr>
+    <tr>
+      <td><code>max_per_shard_failures</code></td>
+      <td>Integer</td>
+      <td>Maximum number of restarts per shard during update. Increments total failure count when this limit is exceeded. (Default: 0)</td>
+    </tr>
+    <tr>
+      <td><code>max_total_failures</code></td>
+      <td>Integer</td>
+      <td>Maximum number of shard failures to be tolerated in total during an update. Cannot be greater than or equal to the total number of tasks in a job. (Default: 0)</td>
+    </tr>
+  </tbody>
+</table>
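+
+For example, a sketch that updates three shards at a time and tolerates
+up to two failed shards in total (values are illustrative):
+
+        update_config = UpdateConfig(batch_size = 3, max_total_failures = 2)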
+
+### <a name="HealthCheckConfigObject"></a> `HealthCheckConfig` Objects
+
+Parameters for controlling a task's health checks via HTTP.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <td><code>initial_interval_secs</code></td>
+      <td>Integer</td>
+      <td>Initial delay for performing an HTTP health check. (Default: 60)</td>
+    </tr>
+    <tr>
+      <td><code>interval_secs</code></td>
+      <td>Integer</td>
+      <td>Interval on which to check the task's health via HTTP. (Default: 30)</td>
+    </tr>
+    <tr>
+      <td><code>timeout_secs</code></td>
+      <td>Integer</td>
+      <td>HTTP request timeout. (Default: 1)</td>
+    </tr>
+    <tr>
+      <td><code>max_consecutive_failures</code></td>
+      <td>Integer</td>
+      <td>Maximum number of consecutive failures tolerated before considering a task unhealthy (Default: 0)</td>
+     </tr>
+    </tbody>
+</table>
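+
+For example, a sketch that checks health every 15 seconds and marks a
+task unhealthy after three consecutive failed checks (values are
+illustrative):
+
+        health_check_config = HealthCheckConfig(
+          interval_secs = 15,
+          max_consecutive_failures = 3)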
+
+<a name="SchedulingConstraints"></a>Specifying Scheduling Constraints
+=====================================================================
+
+Most users will not need to specify constraints explicitly, as the
+scheduler automatically inserts reasonable defaults that attempt to
+ensure reliability without impacting schedulability. For example, the
+scheduler inserts a `'host': 'limit:1'` constraint, ensuring
+that your shards run on different physical machines. Please do not
+set this field unless you are sure of what you are doing.
+
+In the `Job` object there is a map `constraints` from String to String
+allowing the user to tailor the schedulability of tasks within the job.
+
+Each slave in the cluster is assigned a set of string-valued
+key/value pairs called attributes. For example, consider the host
+`cluster1-aaa-03-sr2` and its following attributes (given in key:value
+format): `host:cluster1-aaa-03-sr2` and `rack:aaa`.
+
+The constraint map's key value is the attribute name in which we
+constrain Tasks within our Job. The value is how we constrain them.
+There are two types of constraints: *limit constraints* and *value
+constraints*.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <td>Limit Constraint</td>
+      <td>A string that specifies a limit for a constraint. Starts with <code>'limit:</code> followed by an Integer and closing single quote, such as <code>'limit:1'</code>.</td>
+    </tr>
+    <tr>
+      <td>Value Constraint</td>
+      <td>A string that specifies a value for a constraint. To include a list of values, separate the values using commas. To negate the values of a constraint, start with a <code>!</code><code>.</code></td>
+    </tr>
+  </tbody>
+</table>
+
+You can also control machine diversity using constraints. The below
+constraint ensures that no more than two instances of your job may run
+on a single host. Think of this as a "group by" limit.
+
+        constraints = {
+          'host': 'limit:2',
+        }
+
+Likewise, you can use constraints to control rack diversity, e.g. at
+most one task per rack:
+
+        constraints = {
+          'rack': 'limit:1',
+        }
+
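+A value constraint instead restricts tasks to slaves whose attribute
+matches one of the given values. For example, to schedule only on two
+(hypothetical) racks, or to keep tasks off a rack entirely:
+
+        constraints = {
+          'rack': 'aaa,bbb',
+        }
+
+        constraints = {
+          'rack': '!ccc',
+        }
+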
+Use these constraints sparingly as they can dramatically reduce Tasks' schedulability.
+
+<a name="TemplateNamespaces"></a>Template Namespaces
+====================================================
+
+Currently, a few Pystachio namespaces have special semantics. Using them
+in your configuration allows you to tailor application behavior
+through environment introspection or to interact in special ways with the
+Aurora client or Aurora-provided services.
+
+### <a name="mesosNamespace"></a> `mesos` Namespace
+
+The `mesos` namespace contains the `instance` variable that can be used
+to distinguish between Task replicas.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <td><code>instance</code></td>
+      <td>Integer</td>
+      <td>The instance number of the created task. A job with 5 replicas has instance numbers 0, 1, 2, 3, and 4.</td>
+    </tr>
+  </tbody>
+</table>
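+
+For example, a Process can use the instance number to give each replica
+a distinct argument (the server command here is illustrative):
+
+        server = Process(
+          name = 'server',
+          cmdline = './run_server.sh --shard_id={{mesos.instance}}')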
+
+### <a name="thermosNamespace"></a> `thermos` Namespace
+
+The `thermos` namespace contains variables that work directly on the
+Thermos platform in addition to Aurora. This namespace is fully
+compatible with Tasks invoked via the `thermos` CLI.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <td><code>ports</code></td>
+      <td>map of string to Integer</td>
+      <td>A map of names to port numbers</td>
+    </tr>
+    <tr>
+      <td><code>task_id</code></td>
+      <td>string</td>
+      <td>The task ID assigned to this task.</td>
+    </tr>
+  </tbody>
+</table>
+
+The `thermos.ports` namespace is automatically populated by Aurora when
+invoking tasks on Mesos. When running the `thermos` command directly,
+these ports must be explicitly mapped with the `-P` option.
+
+For example, if `{{thermos.ports[http]}}` is specified in a `Process`
+configuration, it is automatically extracted and auto-populated by
+Aurora, but must be specified with, for example, `thermos -P http:12345`
+to map `http` to port 12345 when running via the CLI.
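+
+For example, a sketch of a Process that serves HTTP on whatever port
+Aurora assigns to the name `http` (the server command is illustrative):
+
+        http_server = Process(
+          name = 'http_server',
+          cmdline = './run_server.sh --port={{thermos.ports[http]}}')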
+
+<a name="BasicExamples"></a>Basic Examples
+==========================================
+
+These are provided to give a basic understanding of simple Aurora jobs.
+
+### <a name="hello_world.aurora"></a> `hello_world.aurora`
+
+Put the following in a file named `hello_world.aurora`, substituting your own values
+where appropriate (e.g. for `cluster`).
+
+        import os
+        hello_world_process = Process(name = 'hello_world', cmdline = 'echo hello world')
+
+        hello_world_task = Task(
+          resources = Resources(cpu = 0.1, ram = 16 * MB, disk = 16 * MB),
+          processes = [hello_world_process])
+
+        hello_world_job = Job(
+          cluster = 'cluster1',
+          role = os.getenv('USER'),
+          task = hello_world_task)
+
+        jobs = [hello_world_job]
+
+Then issue the following commands to create and kill the job, using your own values for the job key.
+
+        aurora create cluster1/$USER/test/hello_world hello_world.aurora
+
+        aurora kill cluster1/$USER/test/hello_world
+
+### <a name="EnvironmentTailoring"></a> Environment Tailoring
+
+### `hello_world_productionized.aurora`
+
+Put the following in a file named `hello_world_productionized.aurora`, substituting your own values
+where appropriate (e.g. for `cluster`).
+
+        include('hello_world.aurora')
+
+        production_resources = Resources(cpu = 1.0, ram = 512 * MB, disk = 2 * GB)
+        staging_resources = Resources(cpu = 0.1, ram = 32 * MB, disk = 512 * MB)
+        hello_world_template = hello_world_job(
+            name = "hello_world-{{cluster}}",
+            task = hello_world_task(resources = production_resources))
+
+        jobs = [
+          # production jobs
+          hello_world_template(cluster = 'cluster1', instances = 25),
+          hello_world_template(cluster = 'cluster2', instances = 15),
+
+          # staging jobs
+          hello_world_template(
+            cluster = 'local',
+            instances = 1,
+            task = hello_world_task(resources = staging_resources)),
+        ]
+
+Then issue the following commands to create and kill the job, using your own values for the job key.
+
+        aurora create cluster1/$USER/test/hello_world-cluster1 hello_world_productionized.aurora
+
+        aurora kill cluster1/$USER/test/hello_world-cluster1
+
+