You are viewing a plain text version of this content. The canonical link for it is here.
Posted to commits@aurora.apache.org by dl...@apache.org on 2014/03/25 07:10:07 UTC

svn commit: r1581246 [4/6] - in /incubator/aurora/site: ./ publish/ publish/community/ publish/developers/ publish/docs/gettingstarted/ publish/docs/howtocontribute/ publish/documentation/ publish/documentation/latest/ publish/documentation/latest/clie...

Added: incubator/aurora/site/publish/documentation/latest/userguide/index.html
URL: http://svn.apache.org/viewvc/incubator/aurora/site/publish/documentation/latest/userguide/index.html?rev=1581246&view=auto
==============================================================================
--- incubator/aurora/site/publish/documentation/latest/userguide/index.html (added)
+++ incubator/aurora/site/publish/documentation/latest/userguide/index.html Tue Mar 25 06:10:05 2014
@@ -0,0 +1,412 @@
+<html>
+    <head>
+        <meta charset="utf-8">
+        <title>Apache Aurora</title>
+		    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+		    <meta name="description" content="">
+		    <meta name="author" content="">
+
+		    <link href="/assets/css/bootstrap.css" rel="stylesheet">
+		    <link href="/assets/css/bootstrap-responsive.min.css" rel="stylesheet">
+		    <link href="/assets/css/main.css" rel="stylesheet">
+				
+		    <!-- JS -->
+		    <script type="text/javascript" src="/assets/js/jquery-1.10.1.min.js"></script>
+		    <script type="text/javascript" src="/assets/js/bootstrap-dropdown.js"></script>
+		
+				<!-- Analytics -->
+				<script type="text/javascript">
+					  var _gaq = _gaq || [];
+					  _gaq.push(['_setAccount', 'UA-45879646-1']);
+					  _gaq.push(['_setDomainName', 'apache.org']);
+					  _gaq.push(['_trackPageview']);
+
+					  (function() {
+					    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+					    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+					    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+					  })();
+				</script>
+	</head>
+    <body>	
+      <div class="navbar navbar-static-top">
+  <div class="navbar-inner">
+    <div class="container">
+	    <a href="/" class="logo"><img src="/assets/img/aurora_logo.png" alt="Apache Aurora logo" /></a>
+      <ul class="nav">
+        <li><a href="/docs/gettingstarted/">Getting Started</a></li>
+				<li><a href="/documentation/latest/">Documentation</a></li>
+        <li><a href="/downloads/">Download</a></li>
+        <li><a href="/community">Community</a></li>
+      </ul>
+    </div>
+  </div>
+</div>
+
+<div class="container">
+<!-- magical breadcrumbs -->
+<ul class="breadcrumb">
+  <li>
+    <div class="dropdown">
+      <a class="dropdown-toggle" data-toggle="dropdown" href="#">Apache Software Foundation <b class="caret"></b></a>
+      <ul class="dropdown-menu" role="menu">
+        <li><a href="http://www.apache.org">Apache Homepage</a></li>
+        <li><a href="http://www.apache.org/licenses/">Apache License</a></li>
+        <li><a href="http://www.apache.org/foundation/sponsorship.html">Sponsorship</a></li>  
+        <li><a href="http://www.apache.org/foundation/thanks.html">Thanks</a></li>
+        <li><a href="http://www.apache.org/security/">Security</a></li>
+      </ul>
+    </div>
+  </li>
+  <li><span class="divider">&bull;</span></li>
+  <li><a href="http://incubator.apache.org">Apache Incubator</a></li>
+  <li><span class="divider">&bull;</span></li>
+  <li><a href="http://aurora.incubator.apache.org">Apache Aurora</a></li>
+</ul>
+<!-- /breadcrumb -->
+	
+      <div class="container">
+        <h2 id="aurora-user-guide">Aurora User Guide</h2>
+
+<ul>
+<li><a href="#overview">Overview</a></li>
+<li><a href="#job-lifecycle">Job Lifecycle</a>
+
+<ul>
+<li><a href="#life-of-a-task">Life Of A Task</a></li>
+<li><a href="#pending-to-running-states">PENDING to RUNNING states</a></li>
+<li><a href="#task-updates">Task Updates</a></li>
+<li><a href="#giving-priority-to-production-tasks-preempting">Giving Priority to Production Tasks: PREEMPTING</a></li>
+<li><a href="#natural-termination-finished-failed">Natural Termination: FINISHED, FAILED</a></li>
+<li><a href="#forceful-termination-killing-restarting">Forceful Termination: KILLING, RESTARTING</a></li>
+</ul></li>
+<li><a href="#configuration">Configuration</a></li>
+<li><a href="#creating-jobs">Creating Jobs</a></li>
+<li><a href="#interacting-with-jobs">Interacting With Jobs</a></li>
+</ul>
+
+<h2 id="overview">Overview</h2>
+
+<p>This document gives an overview of how Aurora works under the hood.
+It assumes you&rsquo;ve already worked through the &ldquo;hello world&rdquo; example
+job in the <a href="/documentation/latest/tutorial/">Aurora Tutorial</a>. Specifics of how to use Aurora are <strong>not</strong>
+given here; instead, pointers to the documents that cover usage are
+provided.</p>
+
+<p>Aurora is a Mesos framework used to schedule <em>jobs</em> onto Mesos. Mesos
+cares about individual <em>tasks</em>, but typical jobs consist of dozens or
+hundreds of task replicas. Aurora provides a layer on top of Mesos with
+its <code>Job</code> abstraction. An Aurora <code>Job</code> consists of a task template and
+instructions for creating near-identical replicas of that task (modulo
+things like &ldquo;instance id&rdquo; or specific port numbers which may differ from
+machine to machine).</p>
+
+<p>How many tasks make up a Job is not always simple. At a basic level, a Job consists of
+one task template and instructions for creating near-identical replicas of that task
+(otherwise referred to as &ldquo;instances&rdquo; or &ldquo;shards&rdquo;).</p>
+
+<p>However, since Jobs can be updated on the fly, a single Job identifier or <em>job key</em>
+can have multiple job configurations associated with it.</p>
+
+<p>For example, suppose I have a Job with 4 instances that each
+request 1 core of cpu, 1 GB of RAM, and 1 GB of disk space as specified
+in the configuration file <code>hello_world.aurora</code>. I want to
+update it so it requests 2 GB of RAM instead of 1. I create a new
+configuration file to do that called <code>new_hello_world.aurora</code> and
+issue an <code>aurora update --shards=0-1 &lt;job_key_value&gt; new_hello_world.aurora</code>
+command.</p>
+
+<p>This results in instances 0 and 1 having 1 cpu, 2 GB of RAM, and 1 GB of disk space,
+while instances 2 and 3 have 1 cpu, 1 GB of RAM, and 1 GB of disk space. If instance 3
+dies and restarts, it restarts with 1 cpu, 1 GB RAM, and 1 GB disk space.</p>
+
+<p>That means the same Job has two task configurations in effect at the
+same time, each valid for a different range of instances.</p>
+
+<p>This isn&rsquo;t a recommended pattern, but it is valid and supported by the
+Aurora scheduler. This most often manifests in the &ldquo;canary pattern&rdquo; where
+instance 0 runs with a different configuration than instances 1-N to test
+different code versions alongside the actual production job.</p>
+
+<p>A task can merely be a single <em>process</em> corresponding to a single
+command line, such as <code>python2.6 my_script.py</code>. However, a task can also
+consist of many separate processes, which all run within a single
+sandbox. For example, running multiple cooperating agents together,
+such as <code>logrotate</code>, <code>installer</code>, and master or slave processes. This is
+where Thermos comes in. While Aurora provides a <code>Job</code> abstraction on
+top of Mesos <code>Task</code>s, Thermos provides a <code>Process</code> abstraction
+underneath Mesos <code>Task</code>s and serves as part of the Aurora framework&rsquo;s
+executor.</p>
+
+<p>You define <code>Job</code>s, <code>Task</code>s, and <code>Process</code>es in a configuration file.
+Configuration files are written in Python, and make use of the Pystachio
+templating language. They end in a <code>.aurora</code> extension.</p>
+
+<p>Pystachio is a type-checked dictionary templating library.</p>
+
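+<p>For instance, a minimal sketch along the lines of the tutorial&rsquo;s hello
+world (the cluster name and values here are illustrative) wires a
+<code>Process</code> into a <code>Task</code> and the <code>Task</code> into a <code>Job</code>:</p>
+<pre class="highlight text">  # hello_world.aurora -- a minimal sketch, not a canonical config
+  hello_world_process = Process(
+    name = 'hello_world',
+    cmdline = 'echo hello world')
+
+  hello_world_task = Task(
+    resources = Resources(cpu = 0.1, ram = 16 * MB, disk = 16 * MB),
+    processes = [hello_world_process])
+
+  jobs = [Job(cluster = 'example', role = 'www-data', environment = 'devel',
+              name = 'hello_world', task = hello_world_task)]
+</pre>
+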
+<blockquote>
+<p>TL;DR</p>
+
+<ul>
+<li>  Aurora manages jobs made of tasks.</li>
+<li>  Mesos manages tasks made of processes.</li>
+<li>  Thermos manages processes.</li>
+<li>  All are defined in a <code>.aurora</code> configuration file.</li>
+</ul>
+</blockquote>
+
+<p><img alt="Aurora hierarchy" src="../images/aurora_hierarchy.png" /></p>
+
+<p>Each <code>Task</code> has a <em>sandbox</em> created when the <code>Task</code> starts and garbage
+collected when it finishes. All of a <code>Task</code>&rsquo;s processes run in its
+sandbox, so processes can share state by using a shared current working
+directory.</p>
+
+<p>The sandbox garbage collection policy considers many factors, most
+importantly age and size. It makes a best-effort attempt to keep
+sandboxes around as long as possible post-task in order for service
+owners to inspect data and logs, should the <code>Task</code> have completed
+abnormally. But you can&rsquo;t design your applications assuming sandboxes
+will be around forever; instead, build log saving or other
+checkpointing mechanisms directly into your application or into your
+<code>Job</code> description.</p>
+
+<h2 id="job-lifecycle">Job Lifecycle</h2>
+
+<p>When Aurora reads a configuration file and finds a <code>Job</code> definition, it:</p>
+
+<ol>
+<li> Evaluates the <code>Job</code> definition.</li>
+<li> Splits the <code>Job</code> into its constituent <code>Task</code>s.</li>
+<li> Sends those <code>Task</code>s to the scheduler.</li>
+<li> The scheduler puts the <code>Task</code>s into <code>PENDING</code> state, starting each
+<code>Task</code>&rsquo;s life cycle.</li>
+</ol>
+
+<p><strong>Note</strong>: It is not currently possible to create an Aurora job from
+within an Aurora job.</p>
+
+<h3 id="life-of-a-task">Life Of A Task</h3>
+
+<p><img alt="Life of a task" src="../images/lifeofatask.png" /></p>
+
+<h3 id="pending-to-running-states">PENDING to RUNNING states</h3>
+
+<p>When a <code>Task</code> is in the <code>PENDING</code> state, the scheduler constantly
+searches for machines satisfying that <code>Task</code>&rsquo;s resource request
+requirements (RAM, disk space, CPU time) while maintaining configuration
+constraints such as &ldquo;a <code>Task</code> must run on machines dedicated to a
+particular role&rdquo; or attribute limit constraints such as &ldquo;at most 2
+<code>Task</code>s from the same <code>Job</code> may run on each rack&rdquo;. When the scheduler
+finds a suitable match, it assigns the <code>Task</code> to a machine and puts the
+<code>Task</code> into the <code>ASSIGNED</code> state.</p>
+
+<p>From the <code>ASSIGNED</code> state, the scheduler sends an RPC to the slave
+machine containing <code>Task</code> configuration, which the slave uses to spawn
+an executor responsible for the <code>Task</code>&rsquo;s lifecycle. When the scheduler
+receives an acknowledgement that the machine has accepted the <code>Task</code>,
+the <code>Task</code> goes into <code>STARTING</code> state.</p>
+
+<p><code>STARTING</code> state initializes a <code>Task</code> sandbox. When the sandbox is fully
+initialized, Thermos begins to invoke <code>Process</code>es. Also, the slave
+machine sends an update to the scheduler that the <code>Task</code> is
+in <code>RUNNING</code> state.</p>
+
+<p>If a <code>Task</code> stays in <code>ASSIGNED</code> or <code>STARTING</code> for too long, the
+scheduler forces it into <code>LOST</code> state, creating a new <code>Task</code> in its
+place that&rsquo;s sent into <code>PENDING</code> state. This is technically true of any
+active state: if the Mesos core tells the scheduler that a slave has
+become unhealthy (or outright disappeared), the <code>Task</code>s assigned to that
+slave go into <code>LOST</code> state and new <code>Task</code>s are created in their place.
+From <code>PENDING</code> state, there is no guarantee a <code>Task</code> will be reassigned
+to the same machine unless job constraints explicitly force it there.</p>
+
+<p>If there is a state mismatch (e.g. a machine returns from a netsplit
+and the scheduler has marked all its <code>Task</code>s <code>LOST</code> and rescheduled
+them), a state reconciliation process kills the errant <code>RUNNING</code> tasks,
+which may take up to an hour. But to emphasize this point: there is no
+uniqueness guarantee for a single instance of a job in the presence of
+network partitions. If the <code>Task</code> requires that, it should be baked in at
+the application level using a distributed coordination service such as
+ZooKeeper.</p>
+
+<h3 id="task-updates">Task Updates</h3>
+
+<p><code>Job</code> configurations can be updated at any point in their lifecycle.
+Usually updates are done incrementally using a process called a <em>rolling
+upgrade</em>, in which Tasks are upgraded in small groups, one group at a
+time. Updates are done using various Aurora Client commands.</p>
+
+<p>For a configuration update, the Aurora Client calculates required changes
+by examining the current job config state and the new desired job config.
+It then starts a rolling batched update process by going through every batch
+and performing these operations:</p>
+
+<ul>
+<li>If an instance is present in the scheduler but isn&rsquo;t in the new config,
+then that instance is killed.</li>
+<li>If an instance is not present in the scheduler but is present in
+the new config, then the instance is created.</li>
+<li>If an instance is present in both the scheduler and the new config, then
+the client diffs both task configs. If it detects any changes, it
+performs an instance update by killing the old config instance and adding
+the new config instance.</li>
+</ul>
+
+<p>The Aurora client continues through the instance list until all tasks are
+updated, in <code>RUNNING</code>, and healthy for a configurable amount of time.
+If the client determines the update is not going well (a percentage of health
+checks have failed), it cancels the update.</p>
+
+<p>Update cancellation runs a procedure similar to the update sequence
+described above, but in reverse order. New instance configs are swapped
+with old instance configs and batch updates proceed backwards
+from the point where the update failed. E.g., update batches (0,1,2),
+(3,4,5), (6,7,8-FAIL) roll back in the order (8,7,6), (5,4,3), (2,1,0).</p>
+
+<h3 id="giving-priority-to-production-tasks:-preempting">Giving Priority to Production Tasks: PREEMPTING</h3>
+
+<p>Sometimes a Task needs to be interrupted, such as when a non-production
+Task&rsquo;s resources are needed by a higher priority production Task. This
+type of interruption is called <em>preemption</em>. When this happens in
+Aurora, the non-production Task is killed and moved into
+the <code>PREEMPTING</code> state when both the following are true:</p>
+
+<ul>
+<li>The task being killed is a non-production task.</li>
+<li>The other task is a <code>PENDING</code> production task that hasn&rsquo;t been
+scheduled due to a lack of resources.</li>
+</ul>
+
+<p>Since production tasks are much more important, Aurora kills off the
+non-production task to free up resources for the production task. The
+scheduler UI shows the non-production task was preempted in favor of the
+production task. At some point, tasks in <code>PREEMPTING</code> move to <code>KILLED</code>.</p>
+
+<p>Note that non-production tasks consuming many resources are likely to be
+preempted in favor of production tasks.</p>
+
+<h3 id="natural-termination:-finished,-failed">Natural Termination: FINISHED, FAILED</h3>
+
+<p>A <code>RUNNING</code> <code>Task</code> can terminate without direct user interaction. For
+example, it may be a finite computation that finishes, even something as
+simple as <code>echo hello world</code>. Or it could be an exceptional condition in
+a long-lived service. If the <code>Task</code> is successful (its underlying
+processes have succeeded with exit status <code>0</code> or finished without
+reaching failure limits) it moves into <code>FINISHED</code> state. If it finished
+after reaching a set of failure limits, it goes into <code>FAILED</code> state.</p>
+
+<h3 id="forceful-termination:-killing,-restarting">Forceful Termination: KILLING, RESTARTING</h3>
+
+<p>You can terminate a <code>Task</code> by issuing an <code>aurora kill</code> command, which
+moves it into <code>KILLING</code> state. The scheduler then sends the slave a
+request to terminate the <code>Task</code>. If the scheduler receives a successful
+response, it moves the Task into <code>KILLED</code> state and never restarts it.</p>
+
+<p>The scheduler has access to a non-public <code>RESTARTING</code> state. If a <code>Task</code>
+is forced into the <code>RESTARTING</code> state, the scheduler kills the
+underlying task but in parallel schedules an identical replacement for
+it.</p>
+
+<h2 id="configuration">Configuration</h2>
+
+<p>You define and configure your Jobs (and their Tasks and Processes) in
+Aurora configuration files. Their filenames end with the <code>.aurora</code>
+suffix, and you write them in Python making use of the Pystachio
+templating language, along
+with specific Aurora, Mesos, and Thermos commands and methods. See the
+<a href="/documentation/latest/configurationreference/">Configuration Guide and Reference</a> and
+<a href="/documentation/latest/configurationtutorial/">Configuration Tutorial</a>.</p>
+
+<h2 id="creating-jobs">Creating Jobs</h2>
+
+<p>You create and manipulate Aurora Jobs with the Aurora client, whose
+command line commands all start with
+<code>aurora</code>. See <a href="/documentation/latest/clientcommands/">Aurora Client Commands</a> for details
+about the Aurora Client.</p>
+
+<h2 id="interacting-with-jobs">Interacting With Jobs</h2>
+
+<p>You interact with Aurora jobs either via:</p>
+
+<ul>
+<li>Read-only Web UIs</li>
+</ul>
+
+<p>Part of the output from creating a new Job is a URL for the Job&rsquo;s scheduler UI page.</p>
+
+<p>For example:</p>
+<pre class="highlight text">  vagrant@precise64:~$ aurora create example/www-data/prod/hello \
+  /vagrant/examples/jobs/hello_world.aurora
+  INFO] Creating job hello
+  INFO] Response from scheduler: OK (message: 1 new tasks pending for job www-data/prod/hello)
+  INFO] Job url: http://precise64:8081/scheduler/www-data/prod/hello
+</pre>
+<p>The &ldquo;Job url&rdquo; goes to the Job&rsquo;s scheduler UI page. To go to the overall scheduler UI page,
+  stop at the &ldquo;scheduler&rdquo; part of the URL, in this case, <code>http://precise64:8081/scheduler</code></p>
+
+<p>You can also reach the scheduler UI page via the Client command <code>aurora open</code>:</p>
+<pre class="highlight text">  aurora open [&lt;cluster&gt;[/&lt;role&gt;[/&lt;env&gt;/&lt;job_name&gt;]]]
+</pre>
+<p>If only the cluster is specified, it goes directly to that cluster&rsquo;s scheduler main page.
+  If the role is specified, it goes to the top-level role page. If the full job key is specified,
+  it goes directly to the job page where you can inspect individual tasks.</p>
+
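+<p>For example, <code>aurora open example/www-data/prod/hello</code> (the job key from
+  the create example above) takes you directly to that job&rsquo;s page.</p>
+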
+<p>Once you click through to a role page, you see Jobs grouped into pending,
+  active, and finished jobs. Jobs are arranged by role, typically a service account for production
+  jobs and user accounts for test or development jobs.</p>
+
+<ul>
+<li>The Aurora Client&rsquo;s command line interface</li>
+</ul>
+
+<p>Several Client commands have a <code>-o</code> option that automatically opens a window to
+  the specified Job&rsquo;s scheduler UI URL. And, as described above, the <code>open</code> command also takes
+  you there.</p>
+
+<p>For a complete list of Aurora Client commands, use <code>aurora help</code> and, for specific
+  command help, <code>aurora help [command]</code>. <strong>Note</strong>: <code>aurora help open</code>
+  returns <code>&quot;subcommand open not found&quot;</code> due to our reflection tricks not
+  working on words that are also builtin Python function names. Or see the
+  <a href="/documentation/latest/clientcommands/">Aurora Client Commands</a> document.</p>
+
+	  </div>
+      <div class="container">
+    <hr>
+    <footer class="footer">
+        <div class="row-fluid">
+            <div class="span2 text-left">
+                <h3>Links</h3>
+                <ul class="unstyled">
+                	<li><a href="/docs/gettingstarted/">Getting Started</a></li>
+                    <li><a href="/downloads/">Downloads</a></li>
+                    <li><a href="/developers/">Developers</a></li>                    
+                </ul>
+            </div>
+            <div class="span3 text-left">
+                <h3>Community</h3>
+                <ul class="unstyled">
+                    <li><a href="/community/">Mailing Lists</a></li>
+                    <li><a href="http://issues.apache.org/jira/browse/aurora">Issue Tracking</a></li>
+                    <li><a href="/docs/howtocontribute/">How To Contribute</a></li>
+                </ul>
+            </div>
+            <div class="span7 text-left">
+            	<h3>Apache Software Foundation</h3>
+
+							<div class="span8">
+                Copyright 2014 <a href="http://www.apache.org/">Apache Software Foundation</a>. Licensed under the <a href="http://www.apache.org/licenses/">Apache License v2.0</a>. Apache, Apache Thrift, and the Apache feather logo are trademarks of The Apache Software Foundation. Currently part of the <a href="http://incubator.apache.org">Apache Incubator</a>.
+							</div>
+							<div class=" pull-right">
+								<a href="http://incubator.apache.org" class="logo"><img src="/assets/img/apache_incubator_logo.png" alt="Apache Incubator" class="pull-right"/></a>
+							</div>
+            </div>
+
+        </div>
+
+    </footer>
+</div>
+
+	</body>
+</html>
+

Added: incubator/aurora/site/publish/documentation/latest/vagrant/index.html
URL: http://svn.apache.org/viewvc/incubator/aurora/site/publish/documentation/latest/vagrant/index.html?rev=1581246&view=auto
==============================================================================
--- incubator/aurora/site/publish/documentation/latest/vagrant/index.html (added)
+++ incubator/aurora/site/publish/documentation/latest/vagrant/index.html Tue Mar 25 06:10:05 2014
@@ -0,0 +1,127 @@
+<html>
+    <head>
+        <meta charset="utf-8">
+        <title>Apache Aurora</title>
+		    <meta name="viewport" content="width=device-width, initial-scale=1.0">
+		    <meta name="description" content="">
+		    <meta name="author" content="">
+
+		    <link href="/assets/css/bootstrap.css" rel="stylesheet">
+		    <link href="/assets/css/bootstrap-responsive.min.css" rel="stylesheet">
+		    <link href="/assets/css/main.css" rel="stylesheet">
+				
+		    <!-- JS -->
+		    <script type="text/javascript" src="/assets/js/jquery-1.10.1.min.js"></script>
+		    <script type="text/javascript" src="/assets/js/bootstrap-dropdown.js"></script>
+		
+				<!-- Analytics -->
+				<script type="text/javascript">
+					  var _gaq = _gaq || [];
+					  _gaq.push(['_setAccount', 'UA-45879646-1']);
+					  _gaq.push(['_setDomainName', 'apache.org']);
+					  _gaq.push(['_trackPageview']);
+
+					  (function() {
+					    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+					    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+					    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+					  })();
+				</script>
+	</head>
+    <body>	
+      <div class="navbar navbar-static-top">
+  <div class="navbar-inner">
+    <div class="container">
+	    <a href="/" class="logo"><img src="/assets/img/aurora_logo.png" alt="Apache Aurora logo" /></a>
+      <ul class="nav">
+        <li><a href="/docs/gettingstarted/">Getting Started</a></li>
+				<li><a href="/documentation/latest/">Documentation</a></li>
+        <li><a href="/downloads/">Download</a></li>
+        <li><a href="/community">Community</a></li>
+      </ul>
+    </div>
+  </div>
+</div>
+
+<div class="container">
+<!-- magical breadcrumbs -->
+<ul class="breadcrumb">
+  <li>
+    <div class="dropdown">
+      <a class="dropdown-toggle" data-toggle="dropdown" href="#">Apache Software Foundation <b class="caret"></b></a>
+      <ul class="dropdown-menu" role="menu">
+        <li><a href="http://www.apache.org">Apache Homepage</a></li>
+        <li><a href="http://www.apache.org/licenses/">Apache License</a></li>
+        <li><a href="http://www.apache.org/foundation/sponsorship.html">Sponsorship</a></li>  
+        <li><a href="http://www.apache.org/foundation/thanks.html">Thanks</a></li>
+        <li><a href="http://www.apache.org/security/">Security</a></li>
+      </ul>
+    </div>
+  </li>
+  <li><span class="divider">&bull;</span></li>
+  <li><a href="http://incubator.apache.org">Apache Incubator</a></li>
+  <li><span class="divider">&bull;</span></li>
+  <li><a href="http://aurora.incubator.apache.org">Apache Aurora</a></li>
+</ul>
+<!-- /breadcrumb -->
+	
+      <div class="container">
+        <p>Aurora includes a <code>Vagrantfile</code> that defines a full Mesos cluster running Aurora. You can use it to
+explore Aurora&rsquo;s various components. To get started, install
+<a href="https://www.virtualbox.org/">VirtualBox</a> and <a href="http://www.vagrantup.com/">Vagrant</a>,
+then run <code>vagrant up</code> somewhere in the repository source tree to create a team of VMs. This may take some time initially as it builds all
+the components involved in running an Aurora cluster.</p>
+
+<ul>
+<li>The scheduler is listening on <a href="http://192.168.33.5:8081/scheduler">http://192.168.33.5:8081/scheduler</a></li>
+<li>The observer is listening on <a href="http://192.168.33.4:1338/">http://192.168.33.4:1338/</a></li>
+<li>The master is listening on <a href="http://192.168.33.3:5050/">http://192.168.33.3:5050/</a></li>
+</ul>
+
+<p>Once everything is up, you can <code>vagrant ssh aurora-scheduler</code> and execute commands using the <code>aurora</code> client.</p>
+
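+<p>For example, a hypothetical first session, mirroring the tutorial&rsquo;s hello
+world job:</p>
+<pre class="highlight text">  host$ vagrant up
+  host$ vagrant ssh aurora-scheduler
+  vagrant@precise64:~$ aurora create example/www-data/prod/hello \
+  /vagrant/examples/jobs/hello_world.aurora
+</pre>
+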
+<h2 id="troubleshooting">Troubleshooting</h2>
+
+<p>Most Vagrant-related problems can be fixed by the following steps:</p>
+
+<ul>
+<li>Destroying the vagrant environment with <code>vagrant destroy</code></li>
+<li>Cleaning the repository of build artifacts and other intermediate output with <code>git clean -fdx</code></li>
+<li>Bringing up the vagrant environment with <code>vagrant up</code></li>
+</ul>
+
+	  </div>
+      <div class="container">
+    <hr>
+    <footer class="footer">
+        <div class="row-fluid">
+            <div class="span2 text-left">
+                <h3>Links</h3>
+                <ul class="unstyled">
+                	<li><a href="/docs/gettingstarted/">Getting Started</a></li>
+                    <li><a href="/downloads/">Downloads</a></li>
+                    <li><a href="/developers/">Developers</a></li>                    
+                </ul>
+            </div>
+            <div class="span3 text-left">
+                <h3>Community</h3>
+                <ul class="unstyled">
+                    <li><a href="/community/">Mailing Lists</a></li>
+                    <li><a href="http://issues.apache.org/jira/browse/aurora">Issue Tracking</a></li>
+                    <li><a href="/docs/howtocontribute/">How To Contribute</a></li>
+                </ul>
+            </div>
+            <div class="span7 text-left">
+            	<h3>Apache Software Foundation</h3>
+
+							<div class="span8">
+                Copyright 2014 <a href="http://www.apache.org/">Apache Software Foundation</a>. Licensed under the <a href="http://www.apache.org/licenses/">Apache License v2.0</a>. Apache, Apache Thrift, and the Apache feather logo are trademarks of The Apache Software Foundation. Currently part of the <a href="http://incubator.apache.org">Apache Incubator</a>.
+							</div>
+							<div class=" pull-right">
+								<a href="http://incubator.apache.org" class="logo"><img src="/assets/img/apache_incubator_logo.png" alt="Apache Incubator" class="pull-right"/></a>
+							</div>
+            </div>
+
+        </div>
+
+    </footer>
+</div>
+
+	</body>
+</html>
+

Modified: incubator/aurora/site/publish/downloads/index.html
URL: http://svn.apache.org/viewvc/incubator/aurora/site/publish/downloads/index.html?rev=1581246&r1=1581245&r2=1581246&view=diff
==============================================================================
--- incubator/aurora/site/publish/downloads/index.html (original)
+++ incubator/aurora/site/publish/downloads/index.html Tue Mar 25 06:10:05 2014
@@ -35,6 +35,7 @@
 	    <a href="/" class="logo"><img src="/assets/img/aurora_logo.png" alt="Apache Aurora logo" /></a>
       <ul class="nav">
         <li><a href="/docs/gettingstarted/">Getting Started</a></li>
+				<li><a href="/documentation/latest/">Documentation</a></li>
         <li><a href="/downloads/">Download</a></li>
         <li><a href="/community">Community</a></li>
       </ul>
@@ -65,7 +66,7 @@
 <!-- /breadcrumb -->
 	
       <div class="container">
-        <h1>Downloads</h1>
+        <h1 id="downloads">Downloads</h1>
 
 <p>The Aurora team recently migrated their codebase to the Apache Software Foundation prior to a first official release. A pre-release version is available on GitHub:</p>
 

Modified: incubator/aurora/site/publish/index.html
URL: http://svn.apache.org/viewvc/incubator/aurora/site/publish/index.html?rev=1581246&r1=1581245&r2=1581246&view=diff
==============================================================================
--- incubator/aurora/site/publish/index.html (original)
+++ incubator/aurora/site/publish/index.html Tue Mar 25 06:10:05 2014
@@ -35,6 +35,7 @@
 	    <a href="/" class="logo"><img src="/assets/img/aurora_logo.png" alt="Apache Aurora logo" /></a>
       <ul class="nav">
         <li><a href="/docs/gettingstarted/">Getting Started</a></li>
+				<li><a href="/documentation/latest/">Documentation</a></li>
         <li><a href="/downloads/">Download</a></li>
         <li><a href="/community">Community</a></li>
       </ul>

Modified: incubator/aurora/site/source/_header.md.erb
URL: http://svn.apache.org/viewvc/incubator/aurora/site/source/_header.md.erb?rev=1581246&r1=1581245&r2=1581246&view=diff
==============================================================================
--- incubator/aurora/site/source/_header.md.erb (original)
+++ incubator/aurora/site/source/_header.md.erb Tue Mar 25 06:10:05 2014
@@ -4,6 +4,7 @@
 	    <a href="/" class="logo"><img src="/assets/img/aurora_logo.png" alt="Apache Aurora logo" /></a>
       <ul class="nav">
         <li><a href="/docs/gettingstarted/">Getting Started</a></li>
+				<li><a href="/documentation/latest/">Documentation</a></li>
         <li><a href="/downloads/">Download</a></li>
         <li><a href="/community">Community</a></li>
       </ul>

Added: incubator/aurora/site/source/documentation/latest.html.md
URL: http://svn.apache.org/viewvc/incubator/aurora/site/source/documentation/latest.html.md?rev=1581246&view=auto
==============================================================================
--- incubator/aurora/site/source/documentation/latest.html.md (added)
+++ incubator/aurora/site/source/documentation/latest.html.md Tue Mar 25 06:10:05 2014
@@ -0,0 +1,20 @@
+# Overview
+
+*Aurora* is a service scheduler that schedules jobs onto *Mesos*, which runs tasks on a specified cluster. Typical services consist of up to hundreds of task replicas.
+
+Aurora provides a *Job* abstraction consisting of a *Task* template and instructions for creating near-identical replicas of that Task (modulo things like "instance id" or specific port numbers which may differ from machine to machine).
+
+*Terminology Note*: *Replicas* are also referred to as *shards* and *instances*. While there is a general desire to move to using "instances", "shard" is still found in commands and help strings.
+
+Typically a Task is a single *Process* corresponding to a single command line, such as `python2.6 my_script.py`. However, sometimes you must colocate separate Processes together within a single Task, which runs within a single container and `chroot`, often referred to as a "sandbox". For example, you might run multiple cooperating agents together, such as `logrotate`, `installer`, and master or slave processes. *Thermos* provides a Process abstraction underneath Mesos Tasks.
+
+To use and get up to speed on Aurora, you should read the docs in this directory in the following order:
+
+1. How to [deploy Aurora](/documentation/latest/deploying-aurora-scheduler/) or how to [install Aurora on virtual machines on your local machine](/documentation/latest/vagrant/) (the Tutorial uses the virtual machine approach).
+2. As a user, get started quickly with a [Tutorial](/documentation/latest/tutorial/).
+3. For an overview of Aurora's process flow under the hood, see the [User Guide](/documentation/latest/userguide/).
+4. To learn how to write a configuration file, look at our [Configuration Tutorial](/documentation/latest/configurationtutorial/). From there, look at the [Aurora + Thermos Reference](/documentation/latest/configurationreference/).
+5. Then read up on the [Aurora Command Line Client](/documentation/latest/clientcommands/).
+6. Find out general information and useful tips about how Aurora does [Resource Isolation](/documentation/latest/resourceisolation/).
+
+To contact the Aurora Developer List, email [dev@aurora.incubator.apache.org](mailto:dev@aurora.incubator.apache.org). You may want to read the list [archives](http://mail-archives.apache.org/mod_mbox/incubator-aurora-dev/). You can also use the IRC channel `#aurora` on `irc.freenode.net`.

Added: incubator/aurora/site/source/documentation/latest/clientcommands.md
URL: http://svn.apache.org/viewvc/incubator/aurora/site/source/documentation/latest/clientcommands.md?rev=1581246&view=auto
==============================================================================
--- incubator/aurora/site/source/documentation/latest/clientcommands.md (added)
+++ incubator/aurora/site/source/documentation/latest/clientcommands.md Tue Mar 25 06:10:05 2014
@@ -0,0 +1,482 @@
+Aurora Client Commands
+======================
+
+The most up-to-date reference is in the client itself: use the
+`aurora help` subcommand (for example, `aurora help` or
+`aurora help create`) to find the latest information on parameters and
+functionality. Note that `aurora help open` does not work, due to underlying issues with
+reflection.
+
+- [Introduction](#introduction)
+- [Cluster Configuration](#cluster-configuration)
+- [Job Keys](#job-keys)
+- [Modifying Aurora Client Commands](#modifying-aurora-client-commands)
+- [Regular Jobs](#regular-jobs)
+    - [Creating and Running a Job](#creating-and-running-a-job)
+    - [Running a Command On a Running Job](#running-a-command-on-a-running-job)
+    - [Killing a Job](#killing-a-job)
+    - [Updating a Job](#updating-a-job)
+    - [Renaming a Job](#renaming-a-job)
+    - [Restarting Jobs](#restarting-jobs)
+- [Cron Jobs](#cron-jobs)
+- [Comparing Jobs](#comparing-jobs)
+- [Viewing/Examining Jobs](#viewingexamining-jobs)
+    - [Listing Jobs](#listing-jobs)
+    - [Inspecting a Job](#inspecting-a-job)
+    - [Versions](#versions)
+    - [Checking Your Quota](#checking-your-quota)
+    - [Finding a Job on Web UI](#finding-a-job-on-web-ui)
+    - [Getting Job Status](#getting-job-status)
+    - [Opening the Web UI](#opening-the-web-ui)
+    - [SSHing to a Specific Task Machine](#sshing-to-a-specific-task-machine)
+    - [Templating Command Arguments](#templating-command-arguments)
+
+Introduction
+------------
+
+Once you have written an `.aurora` configuration file that describes
+your Job and its parameters and functionality, you interact with Aurora
+using Aurora Client commands. This document describes all of these commands
+and how and when to use them. All Aurora Client commands start with
+`aurora`, followed by the name of the specific command and its
+arguments.
+
+*Job keys* are a very common argument to Aurora commands, as well as the
+gateway to useful information about a Job. Before using Aurora, you
+should read the next section which describes them in detail. The section
+after that briefly describes how you can modify the behavior of certain
+Aurora Client commands, linking to a detailed document about how to do
+that.
+
+This is followed by the Regular Jobs section, which describes the basic
+Client commands for creating, running, and manipulating Aurora Jobs.
+After that come sections on Comparing Jobs and Viewing/Examining Jobs,
+which cover the various commands for getting information and metadata
+about Aurora Jobs.
+
+Cluster Configuration
+---------------------
+
+The client must be able to find a configuration file that specifies available clusters. This file
+declares shorthand names for clusters, which are in turn referenced by job configuration files
+and client commands.
+
+The client will load at most two configuration files, making both of their defined clusters
+available. The first is intended to be a system-installed file, using the path specified in
+the environment variable `AURORA_CONFIG_ROOT`, defaulting to `/etc/aurora/clusters.json` if the
+environment variable is not set. The second is a user-installed file, located at
+`~/.aurora/clusters.json`.
+
+A cluster configuration is formatted as JSON. The simplest cluster configuration is one that
+communicates with a single (non-leader-elected) scheduler. For example:
+
+```javascript
+[{
+  "name": "example",
+  "scheduler_uri": "localhost:55555",
+}]
+```
+
+A configuration for a leader-elected scheduler would contain something like:
+
+```javascript
+[{
+  "name": "example",
+  "zk": "192.168.33.2",
+  "scheduler_zk_path": "/aurora/scheduler"
+}]
+```
+
+Job Keys
+--------
+
+A job key is a unique system-wide identifier for an Aurora-managed
+Job, for example `cluster1/web-team/test/experiment204`. It is a 4-tuple
+consisting of, in order, *cluster*, *role*, *environment*, and
+*jobname*, separated by slashes. Cluster is the name of an Aurora
+cluster. Role is the Unix service account under which the Job
+runs. Environment is a namespace component like `devel`, `test`,
+`prod`, or `stagingN`. Jobname is the Job's name.
+
+The combination of all four values uniquely specifies the Job. If any
+one value is different from that of another job key, the two job keys
+refer to different Jobs. For example, the job keys
+`cluster1/tyg/prod/workhorse`, `cluster1/tyg/prod/workcamel`,
+`cluster2/tyg/prod/workhorse`, `cluster2/foo/prod/workhorse`, and
+`cluster1/tyg/test/workhorse` all refer to different Jobs.
+
+Role names are user accounts existing on the slave machines. If you don't know what accounts
+are available, contact your sysadmin.
+
+Environment names are namespaces; you can count on `prod`, `devel` and `test` existing.
+
+Modifying Aurora Client Commands
+--------------------------------
+
+For certain Aurora Client commands, you can define hook methods that run
+either before or after an action that takes place during the command's
+execution, or depending on whether the action succeeded or failed.
+Basically, a hook is code that lets you extend the
+command's actions. The hook executes on the client side, specifically on
+the machine executing Aurora commands.
+
+Hooks can be associated with these Aurora Client commands:
+
+  - `cancel_update`
+  - `create`
+  - `kill`
+  - `restart`
+  - `update`
+
+The process for writing and activating them is complex enough
+that we explain it in a dedicated document, [Hooks for Aurora Client API](/documentation/latest/hooks/).
+
+Regular Jobs
+------------
+
+This section covers Aurora commands related to running, killing,
+renaming, updating, and restarting a basic Aurora Job.
+
+### Creating and Running a Job
+
+    aurora create <job key> <configuration file>
+
+Creates and then runs a Job with the specified job key based on a `.aurora` configuration file.
+The configuration file may also contain and activate hook definitions.
+
+`create` can take four named parameters:
+
+- `-E NAME=VALUE` Bind a Thermos mustache variable name to a
+  value. Multiple flags specify multiple values. Defaults to `[]`.
+- `-o, --open_browser` Open a browser window to the scheduler UI Job
+  page after a job changing operation happens. When `False`, the Job
+  URL prints on the console and the user has to copy/paste it
+  manually. Defaults to `False`. Does not work when running in Vagrant.
+- `-j, --json` If specified, the configuration argument is read as a
+  string in JSON format. Defaults to `False`.
+- `--wait_until=STATE` Block the client until all the Tasks have
+  transitioned into the requested state. Possible values are: `PENDING`,
+  `RUNNING`, `FINISHED`. Default: `PENDING`
+
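+For example, a hypothetical invocation that binds a template variable and
+blocks until every task is `RUNNING` (the job key, variable, and file name
+here are illustrative):
+
+    aurora create --wait_until=RUNNING -E profile=devel \
+      cluster1/web-team/test/experiment204 experiment.aurora
+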
+### Running a Command On a Running Job
+
+    aurora run <job_key> <cmd>
+
+Runs a shell command on all machines currently hosting shards of a
+single Job.
+
+`run` supports the same command line wildcards used to populate a Job's
+commands; i.e. anything in the `{{mesos.*}}` and `{{thermos.*}}`
+namespaces.
+
+`run` can take three named parameters:
+
+- `-t NUM_THREADS`, `--threads=NUM_THREADS` The number of threads to
+  use, defaulting to `1`.
+- `--user=SSH_USER` ssh as this user instead of the given role value.
+  Defaults to `None`.
+- `-e, --executor_sandbox` Run the command in the executor sandbox
+  instead of the Task sandbox. Defaults to `False`.
+
+### Killing a Job
+
+    aurora kill <job key> <configuration file>
+
+Kills all Tasks associated with the specified Job, blocking until all
+are terminated. Defaults to killing all shards in the Job.
+
+The `<configuration file>` argument for `kill` is optional. Use it only
+if it contains hook definitions and activations that affect the
+kill command.
+
+`kill` can take two named parameters:
+
+- `-o, --open_browser` Open a browser window to the scheduler UI Job
+  page after a job changing operation happens. When `False`, the Job
+  URL prints on the console and the user has to copy/paste it
+  manually. Defaults to `False`. Does not work when running in Vagrant.
+- `--shards=SHARDS` A list of shard ids to act on. Can either be a
+  comma-separated list (e.g. 0,1,2) or a range (e.g. 0-2) or any
+  combination of the two (e.g. 0-2,5,7-9). Defaults to acting on all
+  shards.
+
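+For example, a hypothetical invocation that kills only shards 0 through 2
+and shard 5 (the job key is illustrative):
+
+    aurora kill --shards=0-2,5 cluster1/web-team/test/experiment204
+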
+### Updating a Job
+
+    aurora update [--shards=ids] <job key> <configuration file>
+    aurora cancel_update <job key> <configuration file>
+
+Given a running job, does a rolling update to reflect a new
+configuration version. Only updates Tasks in the Job with a changed
+configuration. You can further restrict the Tasks operated on by
+using `--shards` and specifying a comma-separated list of job shard ids.
+
+You may want to run `aurora diff` beforehand to validate which Tasks
+have different configurations.
+
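+For example, a hypothetical workflow that first checks which shards differ,
+then updates only shards 0 and 1 (the job key and file name are illustrative):
+
+    aurora diff cluster1/web-team/test/experiment204 new_config.aurora
+    aurora update --shards=0,1 cluster1/web-team/test/experiment204 new_config.aurora
+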
+A job being updated is locked to ensure the update finishes without
+disruption. If the update abnormally terminates, the lock may stay
+around and cause failure of subsequent update attempts.
+`aurora cancel_update` unlocks the Job specified by
+its `job_key` argument. Be sure you don't issue `cancel_update` when
+another user is working with the specified Job.
+
+The `<configuration file>` argument for `cancel_update` is optional. Use
+it only if it contains hook definitions and activations that affect the
+`cancel_update` command. The `<configuration file>` argument for
+`update` is required, but in addition to a new configuration it can be
+used to define and activate hooks for `update`.
+
+`update` can take four named parameters:
+
+- `--shards=SHARDS` A list of shard ids to update. Can either be a
+  comma-separated list (e.g. 0,1,2) or a range (e.g. 0-2) or any
+  combination of the two (e.g. 0-2,5,7-9). If not set, all shards are
+  acted on. Defaults to `None`.
+- `-E NAME=VALUE` Binds a Thermos mustache variable name to a value.
+  Use multiple flags to specify multiple values. Defaults to `[]`.
+- `-j, --json` If specified, configuration is read in JSON format.
+  Defaults to `False`.
+- `--updater_health_check_interval_seconds=HEALTH_CHECK_INTERVAL_SECONDS`
+  Time interval between subsequent shard status checks. Defaults to `3`.
+
+### Renaming a Job
+
+Renaming is a tricky operation as downstream clients must be informed of
+the new name. A conservative approach
+to renaming suitable for production services is:
+
+1.  Modify the Aurora configuration file to change the role,
+    environment, and/or name as appropriate to the standardized naming
+    scheme.
+2.  Check that only these naming components have changed
+    with `aurora diff`.
+
+        aurora diff <job_key> <job_configuration>
+
+3.  Create the (identical) job at the new key. You may need to request a
+    temporary quota increase.
+
+        aurora create <new_job_key> <job_configuration>
+
+4.  Migrate all clients over to the new job key. Update all links and
+    dashboards. Ensure that both job keys run identical versions of the
+    code while in this state.
+5.  After verifying that all clients have successfully moved over, kill
+    the old job.
+
+        aurora kill <old_job_key>
+
+6.  If you received a temporary quota increase, be sure to let the
+    powers that be know you no longer need the additional capacity.
+
+### Restarting Jobs
+
+`restart` restarts all shards of the Job identified by the job key:
+
+    aurora restart <job_key> <configuration file>
+
+Restarts are controlled on the client side, so aborting
+the `restart` command halts the restart operation.
+
+`restart` does a rolling restart. You almost always want to do this, but
+not if all shards of a service are misbehaving and are
+completely dysfunctional. To not do a rolling restart, use
+the `--shards` option described below.
+
+**Note**: `restart` only applies its command line arguments; it does not
+use, nor is it affected by, `update.config`. Restarting
+does ***not*** involve a configuration change. To update the
+configuration, use `update.config`.
+
+The `<configuration file>` argument for restart is optional. Use it only
+if it contains hook definitions and activations that affect the
+`restart` command.
+
+In addition to the required job key argument, there are eight
+`restart` specific optional arguments:
+
+- `--updater_health_check_interval_seconds`: Defaults to `3`, the time
+  interval between subsequent shard status checks.
+- `--shards=SHARDS`: Defaults to None, which restarts all shards.
+  Otherwise, only the specified-by-id shards restart. They can be
+  comma-separated `(0, 8, 9)`, a range `(3-5)` or a
+  combination `(0, 3-5, 8, 9-11)`.
+- `--batch_size`: Defaults to `1`, the number of shards to be restarted
+  in one iteration. So, for example, for value 3, it tries to restart
+  the first three shards specified by `--shards` simultaneously, then
+  the next three, and so on.
+- `--max_per_shard_failures=MAX_PER_SHARD_FAILURES`: Defaults to `0`,
+  the maximum number of restarts per shard during restart. When
+  exceeded, it increments the total failure count.
+- `--max_total_failures=MAX_TOTAL_FAILURES`: Defaults to `0`, the
+  maximum total number of shard failures tolerated during restart.
+- `-o, --open_browser` Open a browser window to the scheduler UI Job
+  page after a job changing operation happens. When `False`, the Job
+  URL prints on the console and the user has to copy/paste it
+  manually. Defaults to `False`. Does not work when running in Vagrant.
+- `--restart_threshold`: Defaults to `60`, the maximum number of
+  seconds a shard may take to move into the `RUNNING` state before
+  it's considered a failure.
+- `--watch_secs`: Defaults to `30`, the minimum number of seconds a
+  shard must remain in `RUNNING` state before being considered a success.
+
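+For example, a hypothetical rolling restart in batches of three shards,
+watching each batch for a minute (the job key is illustrative):
+
+    aurora restart --batch_size=3 --watch_secs=60 cluster1/web-team/test/experiment204
+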
+Cron Jobs
+---------
+
+You will see various commands and options relating to cron jobs in
+`aurora help` and similar. Ignore them, as they're not yet implemented.
+You might be able to use them without causing an error, but nothing happens
+if you do.
+
+Comparing Jobs
+--------------
+
+    aurora diff <job_key> config
+
+Compares a job configuration against a running job. By default the diff
+is determined using `diff`, though you may choose an alternate
+diff program by specifying the `DIFF_VIEWER` environment variable.
+
+There are two named parameters:
+
+- `-E NAME=VALUE` Bind a Thermos mustache variable name to a
+  value. Multiple flags may be used to specify multiple values.
+  Defaults to `[]`.
+- `-j, --json` Read the configuration argument in JSON format.
+  Defaults to `False`.
+
+Viewing/Examining Jobs
+----------------------
+
+Above we discussed creating, killing, and updating Jobs. Here we discuss
+how to view and examine Jobs.
+
+### Listing Jobs
+
+    aurora list_jobs cluster/role
+
+Lists all Jobs registered with the Aurora scheduler in the named cluster for the named role.
+
+It has two named parameters:
+
+- `--pretty`: Displays job information in pretty-printed format.
+  Defaults to `False`.
+- `-c`, `--show-cron`: Shows cron schedule for jobs. Defaults to
+  `False`. Do not use, as it's not yet implemented.
+
+### Inspecting a Job
+
+    aurora inspect <job_key> config
+
+`inspect` verifies that its specified job can be parsed from a
+configuration file, and displays the parsed configuration. It has four
+named parameters:
+
+- `--local`: Inspect the configuration that the `spawn` command would
+  create, defaulting to `False`.
+- `--raw`: Shows the raw configuration. Defaults to `False`.
+- `-j`, `--json`: If specified, configuration is read in JSON format.
+  Defaults to `False`.
+- `-E NAME=VALUE`: Bind a Thermos Mustache variable name to a value.
+  You can use multiple flags to specify multiple values. Defaults
+  to `[]`.
+
+### Versions
+
+    aurora version
+
+Lists client build information and what Aurora API version it supports.
+
+### Checking Your Quota
+
+    aurora get_quota --cluster=CLUSTER role
+
+Prints the production quota allocated to the role in the given
+cluster.
+
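+For example (the cluster and role values are illustrative):
+
+    aurora get_quota --cluster=cluster1 web-team
+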
+### Finding a Job on Web UI
+
+When you create a job, part of the output response contains a URL that goes
+to the job's scheduler UI page. For example:
+
+    vagrant@precise64:~$ aurora create example/www-data/prod/hello /vagrant/examples/jobs/hello_world.aurora
+    INFO] Creating job hello
+    INFO] Response from scheduler: OK (message: 1 new tasks pending for job www-data/prod/hello)
+    INFO] Job url: http://precise64:8081/scheduler/www-data/prod/hello
+
+You can go to the scheduler UI page for this job via `http://precise64:8081/scheduler/www-data/prod/hello`.
+You can go to the overall scheduler UI page by truncating that URL after `scheduler`: `http://precise64:8081/scheduler`.
+
+Once you click through to a role page, you see Jobs grouped into
+pending, active, and finished jobs.
+Jobs are arranged by role, typically a service account for
+production jobs and user accounts for test or development jobs.
+
+### Getting Job Status
+
+    aurora status <job_key>
+
+Returns the status of recent tasks associated with the Job specified by
+`job_key` in its supplied cluster. Typically this includes
+a mix of active tasks (running or assigned) and inactive tasks
+(successful, failed, and lost).
+
+### Opening the Web UI
+
+Use the Job's web UI scheduler URL or the `aurora status` command to find out on which
+machines individual tasks are scheduled. You can open the web UI via the
+`open` command line command if invoked from your machine:
+
+    aurora open [<cluster>[/<role>[/<env>/<job_name>]]]
+
+If only the cluster is specified, it goes directly to that cluster's
+scheduler main page. If the role is specified, it goes to the top-level
+role page. If the full job key is specified, it goes directly to the job
+page where you can inspect individual tasks.
+
+### SSHing to a Specific Task Machine
+
+    aurora ssh <job_key> <shard number>
+
+You can have the Aurora client ssh directly to the machine that has been
+assigned a particular Job/shard number. This may be useful for quickly
+diagnosing issues such as performance issues or abnormal behavior on a
+particular machine.
+
+It can take three named parameters:
+
+- `-e`, `--executor_sandbox`: Run `ssh` in the executor sandbox
+  instead of the task sandbox. Defaults to `False`.
+- `--user=SSH_USER`: `ssh` as the given user instead of as the role in
+  the `job_key` argument. Defaults to `None`.
+- `-L PORT:NAME`: Add tunnel from local port `PORT` to the remote
+  named port `NAME`. Defaults to `[]`.
+
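+For example, a hypothetical session that tunnels local port 8080 to the
+remote task's named `http` port on shard 0 (the job key is illustrative):
+
+    aurora ssh -L 8080:http cluster1/web-team/test/experiment204 0
+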
+### Templating Command Arguments
+
+    aurora run [-e] [-t THREADS] <job_key> -- <<command-line>>
+
+Given a job specification, run the supplied command on all hosts and
+return the output. You may use the standard Mustache templating rules:
+
+- `{{thermos.ports[name]}}` substitutes the specific named port of the
+  task assigned to this machine
+- `{{mesos.instance}}` substitutes the shard id of the job's task
+  assigned to this machine
+- `{{thermos.task_id}}` substitutes the task id of the job's task
+  assigned to this machine
+
+For example, the following type of pattern can be a powerful diagnostic
+tool:
+
+    aurora run -t5 cluster1/tyg/devel/seizure -- \
+      'curl -s -m1 localhost:{{thermos.ports[http]}}/vars | grep uptime'
+
+By default, the command runs in the Task's sandbox. The `-e` option can
+run the command in the executor's sandbox. This is mostly useful for
+Aurora administrators.
+
+You can parallelize the runs by using the `-t` option.

Added: incubator/aurora/site/source/documentation/latest/configurationreference.md
URL: http://svn.apache.org/viewvc/incubator/aurora/site/source/documentation/latest/configurationreference.md?rev=1581246&view=auto
==============================================================================
--- incubator/aurora/site/source/documentation/latest/configurationreference.md (added)
+++ incubator/aurora/site/source/documentation/latest/configurationreference.md Tue Mar 25 06:10:05 2014
@@ -0,0 +1,735 @@
+Aurora + Thermos Configuration Reference
+========================================
+
+- [Aurora + Thermos Configuration Reference](#aurora--thermos-configuration-reference)
+- [Introduction](#introduction)
+- [Process Schema](#process-schema)
+    - [Process Objects](#process-objects)
+      - [name](#name)
+      - [cmdline](#cmdline)
+      - [max_failures](#max_failures)
+      - [daemon](#daemon)
+      - [ephemeral](#ephemeral)
+      - [min_duration](#min_duration)
+      - [final](#final)
+- [Task Schema](#task-schema)
+    - [Task Object](#task-object)
+      - [name](#name-1)
+      - [processes](#processes)
+        - [constraints](#constraints)
+      - [resources](#resources)
+      - [max_failures](#max_failures-1)
+      - [max_concurrency](#max_concurrency)
+      - [finalization_wait](#finalization_wait)
+    - [Constraint Object](#constraint-object)
+    - [Resource Object](#resource-object)
+- [Job Schema](#job-schema)
+    - [Job Objects](#job-objects)
+    - [Services](#services)
+    - [UpdateConfig Objects](#updateconfig-objects)
+    - [HealthCheckConfig Objects](#healthcheckconfig-objects)
+- [Specifying Scheduling Constraints](#specifying-scheduling-constraints)
+- [Template Namespaces](#template-namespaces)
+    - [mesos Namespace](#mesos-namespace)
+    - [thermos Namespace](#thermos-namespace)
+- [Basic Examples](#basic-examples)
+    - [hello_world.aurora](#hello_worldaurora)
+    - [Environment Tailoring](#environment-tailoring)
+      - [hello_world_productionized.aurora](#hello_world_productionizedaurora)
+
+Introduction
+============
+
+Don't know where to start? The Aurora configuration schema is very
+powerful, and configurations can become quite complex for advanced use
+cases.
+
+For examples of simple configurations to get something up and running
+quickly, check out the [Tutorial](/documentation/latest/tutorial/). When you feel comfortable with the basics, move
+on to the [Configuration Tutorial](/documentation/latest/configurationtutorial/) for more in-depth coverage of
+configuration design.
+
+For additional basic configuration examples, see [the end of this document](#basic-examples).
+
+Process Schema
+==============
+
+Process objects consist of required `name` and `cmdline` attributes. You can customize Process
+behavior with its optional attributes. Remember, Processes are handled by Thermos.
+
+### Process Objects
+
+<table border=1>
+  <tbody>
+  <tr>
+   <th>Attribute Name</th>
+    <th>Type</th>
+    <th>Description</th>
+  </tr>
+  <tr>
+    <td><code><b>name</b></code></td>
+    <td>String</td>
+    <td>Process name (Required)</td>
+  </tr>
+  <tr>
+    <td><code>cmdline</code></td>
+    <td>String</td>
+    <td>Command line (Required)</td>
+  </tr>
+  <tr>
+    <td><code>max_failures</code></td>
+    <td>Integer</td>
+    <td>Maximum process failures (Default: 1)</td>
+  </tr>
+  <tr>
+    <td><code>daemon</code></td>
+    <td>Boolean</td>
+    <td>When True, this is a daemon process. (Default: False)</td>
+  </tr>
+  <tr>
+    <td><code>ephemeral</code></td>
+    <td>Boolean</td>
+    <td>When True, this is an ephemeral process. (Default: False)</td>
+  </tr>
+  <tr>
+    <td><code>min_duration</code></td>
+    <td>Integer</td>
+    <td>Minimum duration between process restarts in seconds. (Default: 15)</td>
+  </tr>
+  <tr>
+    <td><code>final</code></td>
+    <td>Boolean</td>
+    <td>When True, this process is a finalizing one that should run last. (Default: False)</td>
+  </tr>
+</tbody>
+</table>
+
+#### name
+
+The name is any valid UNIX filename string (specifically no
+slashes, NULLs or leading periods). Within a Task object, each Process name
+must be unique.
+
+#### cmdline
+
+The command line run by the process. The command line is invoked in a bash
+subshell, so it can involve full-blown bash scripts. However, nothing is
+supplied for command-line arguments, so `$*` is unspecified.
+
+#### max_failures
+
+The maximum number of failures (non-zero exit statuses) this process can
+have before being marked permanently failed and not retried. If a
+process permanently fails, Thermos looks at the failure limit of the task
+containing the process (usually 1) to determine if the task has
+failed as well.
+
+Setting `max_failures` to 0 makes the process retry
+indefinitely until it achieves a successful (zero) exit status.
+It retries at most once every `min_duration` seconds to prevent
+an effective denial of service attack on the coordinating Thermos scheduler.
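+
+For example, a sketch of a process that should keep retrying until it
+succeeds, but no more than once every 30 seconds (the name and command line
+are hypothetical):
+
+    fetcher = Process(
+      name = 'fetcher',
+      cmdline = './fetch_data.sh',  # hypothetical command
+      max_failures = 0,   # retry indefinitely on non-zero exit statuses
+      min_duration = 30)  # wait at least 30 seconds between runs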
+
+#### daemon
+
+By default, Thermos processes are non-daemon. If `daemon` is set to True, a
+successful (zero) exit status does not prevent future process runs.
+Instead, the process reinvokes after `min_duration` seconds.
+However, the maximum failure limit still applies. A combination of
+`daemon=True` and `max_failures=0` causes a process to retry
+indefinitely regardless of exit status. This should be avoided
+for very short-lived processes because of the accumulation of
+checkpointed state for each process run. When running in Mesos
+specifically, `max_failures` is capped at 100.
+
+#### ephemeral
+
+By default, Thermos processes are non-ephemeral. If `ephemeral` is set to
+True, the process' status is not used to determine if its containing task
+has completed. For example, consider a task with a non-ephemeral
+webserver process and an ephemeral logsaver process
+that periodically checkpoints its log files to a centralized data store.
+The task is considered finished once the webserver process has
+completed, regardless of the logsaver's current status.
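+
+A minimal sketch of that example (the command lines are hypothetical):
+
+    webserver = Process(name = 'webserver', cmdline = './run_server.sh')
+    logsaver = Process(
+      name = 'logsaver',
+      cmdline = './checkpoint_logs.sh',
+      ephemeral = True)  # status ignored when deciding task completion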
+
+#### min_duration
+
+Processes may succeed or fail multiple times during a single task's
+duration. Each of these is called a *process run*. `min_duration` is
+the minimum number of seconds the scheduler waits before running the
+same process.
+
+#### final
+
+Processes can be grouped into two classes: ordinary processes and
+finalizing processes. By default, Thermos processes are ordinary. They
+run as long as the task is considered healthy (i.e., no failure
+limits have been reached.) But once all regular Thermos processes
+finish or the task reaches a certain failure threshold, it
+moves into a "finalization" stage and runs all finalizing
+processes. These are typically processes necessary for cleaning up the
+task, such as log checkpointers, or perhaps e-mail notifications that
+the task completed.
+
+Finalizing processes may not depend upon ordinary processes or
+vice versa; however, finalizing processes may depend upon other
+finalizing processes and otherwise follow the typical process
+scheduling rules.
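+
+For example, a finalizing process that ships logs after the task completes
+might be declared as in this sketch (the command line is hypothetical):
+
+    log_shipper = Process(
+      name = 'log_shipper',
+      cmdline = './ship_logs.sh',
+      final = True)  # runs in the finalization stage, after ordinary processes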
+
+Task Schema
+===========
+
+Tasks fundamentally consist of a `name` and a list of Process objects stored as the
+value of the `processes` attribute. Processes can be further constrained with
+`constraints`. By default, `name`'s value inherits from the first Process in the
+`processes` list, so for simple `Task` objects with one Process, `name`
+can be omitted. In Mesos, `resources` is also required.
+
+### Task Object
+
+<table border=1>
+  <tbody>
+  <tr>
+    <th>param</th>
+    <th>type</th>
+    <th>description</th>
+  </tr>
+  <tr>
+    <td><code><b>name</b></code></td>
+    <td>String</td>
+    <td>Task name (Required) (Default: <code>{{processes[0].name}}</code>)</td>
+  </tr>
+  <tr>
+    <td><code><b>processes</b></code></td>
+    <td>List of <code>Process</code> objects</td>
+    <td>List of <code>Process</code> objects bound to this task. (Required)</td>
+  </tr>
+  <tr>
+    <td><code>constraints</code></td>
+    <td>List of <code>Constraint</code> objects</td>
+    <td>List of <code>Constraint</code> objects constraining processes.</td>
+  </tr>
+  <tr>
+    <td><code><b>resources</b></code></td>
+    <td><code>Resource</code> object</td>
+    <td>Resource footprint. (Required)</td>
+  </tr>
+  <tr>
+    <td><code>max_failures</code></td>
+    <td>Integer</td>
+    <td>Maximum process failures before being considered failed (Default: 1)</td>
+  </tr>
+  <tr>
+    <td><code>max_concurrency</code></td>
+    <td>Integer</td>
+    <td>Maximum number of concurrent processes (Default: 0, unlimited concurrency.)</td>
+  </tr>
+  <tr>
+    <td><code>finalization_wait</code></td>
+    <td>Integer</td>
+    <td>Amount of time allocated for finalizing processes, in seconds. (Default: 30)</td>
+  </tr>
+</tbody>
+</table>
+
+#### name
+
+`name` is a string denoting the name of this task. It defaults to the name of the first Process in
+the list of Processes associated with the `processes` attribute.
+
+#### processes
+
+`processes` is an unordered list of `Process` objects. To constrain the order
+in which they run, use `constraints`.
+
+##### constraints
+
+A list of `Constraint` objects. Currently it supports only one type,
+the `order` constraint. `order` is a list of process names
+that should run in the order given. For example,
+
+        process = Process(cmdline = "echo hello {{name}}")
+        task = Task(name = "echoes",
+                    processes = [process(name = "jim"), process(name = "bob")],
+                    constraints = [Constraint(order = ["jim", "bob"])])
+
+Constraints can be supplied ad hoc and in duplicate. Not all
+Processes need be constrained; however, Tasks with cycles are
+rejected by the Thermos scheduler.
+
+Use the `order` function as shorthand to generate `Constraint` lists.
+The following:
+
+        order(process1, process2)
+
+is shorthand for
+
+        [Constraint(order = [process1.name(), process2.name()])]
+
+#### resources
+
+Takes a `Resource` object, which specifies the amounts of CPU, memory, and disk space resources
+to allocate to the Task.
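+
+For example (the values are illustrative):
+
+    resources = Resources(cpu = 2.0, ram = 1 * GB, disk = 4 * GB)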
+
+#### max_failures
+
+`max_failures` is the number of times processes that are part of this
+Task can fail before the entire Task is marked for failure.
+
+For example:
+
+        template = Process(max_failures=10)
+        task = Task(
+          name = "fail",
+          processes = [
+             template(name = "failing", cmdline = "exit 1"),
+             template(name = "succeeding", cmdline = "exit 0")
+          ],
+          max_failures=2)
+
+The `failing` Process could fail 10 times before being marked as
+permanently failed, and the `succeeding` Process would succeed on the
+first run. The task would succeed despite only allowing for two failed
+processes. To be more specific, there would be 10 failed process runs
+yet 1 failed process.
+
+#### max_concurrency
+
+For Tasks with a number of expensive but otherwise independent
+processes, you may want to limit the amount of concurrency
+the Thermos scheduler provides rather than artificially constraining
+it via `order` constraints. For example, a test framework may
+generate a task with 100 test-run processes but want to run it on
+a machine with only 4 cores. You can limit the amount of parallelism to
+4 by setting `max_concurrency=4` in your task configuration.
+
+For example, the following task spawns 180 Processes ("mappers")
+to compute individual elements of a 180 degree sine table, all dependent
+upon one final Process ("reducer") to tabulate the results:
+
+    def make_mapper(id):
+      return Process(
+        name = "mapper%03d" % id,
+        cmdline = "echo 'scale=50;s(%d*4*a(1)/180)' | bc -l"
+                  " > temp.sine_table.%03d" % (id, id))
+
+    def make_reducer():
+      return Process(name = "reducer",
+                     cmdline = "cat temp.* | nl > sine_table.txt && rm -f temp.*")
+
+    processes = map(make_mapper, range(180))
+
+    task = Task(
+      name = "mapreduce",
+      processes = processes + [make_reducer()],
+      constraints = [Constraint(order = [mapper.name(), 'reducer']) for mapper
+                     in processes],
+      max_concurrency = 8)
+
+#### finalization_wait
+
+Tasks have three active stages: `ACTIVE`, `CLEANING`, and `FINALIZING`. The
+`ACTIVE` stage is when ordinary processes run. This stage lasts as
+long as Processes are running and the Task is healthy. The moment either
+all Processes have finished successfully or the Task has reached a
+maximum Process failure limit, it goes into the `CLEANING` stage, sending
+SIGTERMs to all currently running Processes and their process trees.
+Once all Processes have terminated, the Task goes into the `FINALIZING` stage
+and runs all Processes with the `final` attribute set to True, according to
+their own process schedule.
+
+This entire process, from the end of the `ACTIVE` stage to the end of the
+`FINALIZING` stage, must happen within `finalization_wait` seconds. If it does not
+finish during that time, all remaining Processes are sent SIGKILLs
+(or if they depend upon uncompleted Processes, are
+never invoked.)
+
+Client applications with higher priority may force a shorter
+finalization wait (e.g. through parameters to `thermos kill`), so this
+is mostly a best-effort signal.
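+
+For example, a sketch granting finalizers extra time, reusing the hypothetical
+`webserver` and `log_shipper` processes sketched earlier:
+
+    task = Task(
+      name = 'webservice',
+      processes = [webserver, log_shipper],
+      resources = Resources(cpu = 1.0, ram = 128 * MB, disk = 256 * MB),
+      finalization_wait = 120)  # two minutes from entering CLEANING
+                                # through the end of FINALIZING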
+
+### Constraint Object
+
+Constraint objects currently support only a single ordering constraint, `order`,
+which specifies that its processes run sequentially in the order given. By
+default, all processes run in parallel when bound to a `Task` without
+ordering constraints.
+
+<table border=1>
+  <tbody>
+  <tr>
+    <th>param</th>
+    <th>type</th>
+    <th>description</th>
+  </tr>
+  <tr>
+    <td><code><b>order</b></code></td>
+    <td>List of String</td>
+    <td> List of processes by name (String) that should be run serially.</td>
+  </tr>
+  </tbody>
+</table>
+
+### Resource Object
+
+Specifies the amount of CPU, RAM, and disk resources the task needs. See the
+[Resource Isolation document](/documentation/latest/resourceisolation/) for suggested values and to understand how
+resources are allocated.
+
+<table border=1>
+  <tbody>
+  <tr>
+    <th>param</th>
+    <th>type</th>
+    <th>description</th>
+  </tr>
+  <tr>
+    <td><code>cpu</code></td>
+    <td>Float</td>
+    <td>Fractional number of cores required by the task.</td>
+  </tr>
+  <tr>
+    <td><code>ram</code></td>
+    <td>Integer</td>
+    <td>Bytes of RAM required by the task.</td>
+  </tr>
+  <tr>
+    <td><code>disk</code></td>
+    <td>Integer</td>
+    <td>Bytes of disk required by the task.</td>
+  </tr>
+  </tbody>
+</table>
+
+Job Schema
+==========
+
+### Job Objects
+
+<table border=1>
+  <tbody>
+    <tr>
+      <th>param</th>
+      <th>type</th>
+      <th>description</th>
+    </tr>
+    <tr>
+      <td><code>task</code></td>
+      <td>Task</td>
+      <td>The Task object to bind to this job. Required.</td>
+    </tr>
+    <tr>
+      <td><code>name</code></td>
+      <td>String</td>
+      <td>Job name. (Default: inherited from the task attribute's name)</td>
+    </tr>
+    <tr>
+      <td><code>role</code></td>
+      <td>String</td>
+      <td>Job role account. Required.</td>
+    </tr>
+    <tr>
+      <td><code>cluster</code></td>
+      <td>String</td>
+      <td>Cluster in which this job is scheduled. Required.</td>
+    </tr>
+    <tr>
+      <td><code>environment</code></td>
+      <td>String</td>
+      <td>Job environment, default <code>devel</code>. Must be one of <code>prod</code>, <code>devel</code>, <code>test</code> or <code>staging&lt;number&gt;</code>.</td>
+    </tr>
+    <tr>
+      <td><code>contact</code></td>
+      <td>String</td>
+      <td>Best email address to reach the owner of the job. For production jobs, this is usually a team mailing list.</td>
+    </tr>
+    <tr>
+      <td><code>instances</code></td>
+      <td>Integer</td>
+      <td>Number of instances (sometimes referred to as replicas or shards) of the task to create. (Default: 1)</td>
+    </tr>
+    <tr>
+      <td><code>cron_schedule</code> <strong>(Present, but not supported and a no-op)</strong></td>
+      <td>String</td>
+      <td>UTC Cron schedule in cron format. May only be used with non-service jobs. Default: None (not a cron job.)</td>
+    </tr>
+    <tr>
+      <td><code>cron_collision_policy</code> <strong>(Present, but not supported and a no-op)</strong></td>
+      <td>String</td>
+      <td>Policy to use when a cron job is triggered while a previous run is still active: <code>KILL_EXISTING</code> kills the previous run and schedules the new run; <code>CANCEL_NEW</code> lets the previous run continue and cancels the new run; <code>RUN_OVERLAP</code> lets the previous run continue and schedules the new run. (Default: <code>KILL_EXISTING</code>)</td>
+    </tr>
+    <tr>
+      <td><code>update_config</code></td>
+      <td><code>UpdateConfig</code> object</td>
+      <td>Parameters for controlling the rate and policy of rolling updates. </td>
+    </tr>
+    <tr>
+      <td><code>constraints</code></td>
+      <td>dict</td>
+      <td>Scheduling constraints for the tasks. See the section on the <a href="#SchedulingConstraints">constraint specification language</a></td>
+    </tr>
+    <tr>
+      <td><code>service</code></td>
+      <td>Boolean</td>
+      <td>If True, restart tasks regardless of success or failure. (Default: False)</td>
+    </tr>
+    <tr>
+      <td><code>daemon</code></td>
+      <td>Boolean</td>
+      <td>A DEPRECATED alias for "service". (Default: False) </td>
+    </tr>
+    <tr>
+      <td><code>max_task_failures</code></td>
+      <td>Integer</td>
+      <td>Maximum number of failures after which the task is considered to have failed. (Default: 1) Set to -1 to allow for infinite failures.</td>
+    </tr>
+    <tr>
+      <td><code>priority</code></td>
+      <td>Integer</td>
+      <td>Preemption priority to give the task (Default 0). Tasks with higher priorities may preempt tasks at lower priorities.</td>
+    </tr>
+    <tr>
+      <td><code>production</code></td>
+      <td>Boolean</td>
+      <td>Whether or not this is a production task backed by quota (Default: False) Production jobs may preempt any non-production job, and may only be preempted by production jobs in the same role and of higher priority. To run jobs at this level, the job role must have the appropriate quota.</td>
+    </tr>
+    <tr>
+      <td><code>health_check_config</code></td>
+      <td><code>HealthCheckConfig</code> object</td>
+      <td>Parameters for controlling a task's health checks via HTTP. Only used if a health port was assigned with a command line wildcard.</td>
+    </tr>
+  </tbody>
+</table>
+
+### Services
+
+Jobs with the `service` flag set to True are called Services. The `Service`
+alias can be used as shorthand for `Job` with `service=True`.
+Services are differentiated from non-service Jobs in that tasks
+always restart on completion, whether successful or unsuccessful.
+Jobs without the service bit set only restart up to
+`max_task_failures` times and only if they terminated unsuccessfully
+either due to human error or machine failure.
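+
+For example, the following two declarations behave identically (a sketch
+reusing `hello_world_task` and the `import os` from the Basic Examples below):
+
+    job_as_job = Job(
+      cluster = 'cluster1',
+      role = os.getenv('USER'),
+      task = hello_world_task,
+      service = True)
+
+    job_as_service = Service(
+      cluster = 'cluster1',
+      role = os.getenv('USER'),
+      task = hello_world_task)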
+
+### UpdateConfig Objects
+
+Parameters for controlling the rate and policy of rolling updates.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <th>param</th>
+      <th>type</th>
+      <th>description</th>
+    </tr>
+    <tr>
+      <td><code>batch_size</code></td>
+      <td>Integer</td>
+      <td>Maximum number of shards to be updated in one iteration (Default: 1)</td>
+    </tr>
+    <tr>
+      <td><code>restart_threshold</code></td>
+      <td>Integer</td>
+      <td>Maximum number of seconds a shard may take to move into the <code>RUNNING</code> state before it is considered a failure (Default: 60)</td>
+    </tr>
+    <tr>
+      <td><code>watch_secs</code></td>
+      <td>Integer</td>
+      <td>Minimum number of seconds a shard must remain in the <code>RUNNING</code> state before it is considered a success (Default: 30)</td>
+    </tr>
+    <tr>
+      <td><code>max_per_shard_failures</code></td>
+      <td>Integer</td>
+      <td>Maximum number of restarts per shard during update. Increments total failure count when this limit is exceeded. (Default: 0)</td>
+    </tr>
+    <tr>
+      <td><code>max_total_failures</code></td>
+      <td>Integer</td>
+      <td>Maximum number of shard failures to be tolerated in total during an update. Cannot be greater than or equal to the total number of tasks in a job. (Default: 0)</td>
+    </tr>
+  </tbody>
+</table>
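+
+For example, a cautious rolling-update policy might look like this sketch
+(the values are illustrative):
+
+    update_config = UpdateConfig(
+      batch_size = 1,           # update one shard at a time
+      restart_threshold = 60,   # a shard must reach RUNNING within 60 seconds
+      watch_secs = 45,          # and stay RUNNING 45 seconds to count as a success
+      max_per_shard_failures = 2,
+      max_total_failures = 0)
+
+The resulting object is passed to a `Job` via its `update_config` attribute.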
+
+### HealthCheckConfig Objects
+
+Parameters for controlling a task's health checks via HTTP.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <th>param</th>
+      <th>type</th>
+      <th>description</th>
+    </tr>
+    <tr>
+      <td><code>initial_interval_secs</code></td>
+      <td>Integer</td>
+      <td>Initial delay for performing an HTTP health check. (Default: 60)</td>
+    </tr>
+    <tr>
+      <td><code>interval_secs</code></td>
+      <td>Integer</td>
+      <td>Interval on which to check the task's health via HTTP. (Default: 30)</td>
+    </tr>
+    <tr>
+      <td><code>timeout_secs</code></td>
+      <td>Integer</td>
+      <td>HTTP request timeout. (Default: 1)</td>
+    </tr>
+    <tr>
+      <td><code>max_consecutive_failures</code></td>
+      <td>Integer</td>
+      <td>Maximum number of consecutive failures tolerated before considering a task unhealthy (Default: 0)</td>
+     </tr>
+    </tbody>
+</table>
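+
+For example, a sketch for a slow-starting task (the values are illustrative):
+
+    health_check_config = HealthCheckConfig(
+      initial_interval_secs = 120,  # allow extra warm-up time before checking
+      interval_secs = 10,
+      timeout_secs = 1,
+      max_consecutive_failures = 3)
+
+The resulting object is passed to a `Job` via its `health_check_config` attribute.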
+
+Specifying Scheduling Constraints
+=================================
+
+Most users will not need to specify constraints explicitly, as the
+scheduler automatically inserts reasonable defaults that attempt to
+ensure reliability without impacting schedulability. For example, the
+scheduler inserts a `host: limit:1` constraint, ensuring
+that your shards run on different physical machines. Please do not
+set this field unless you are sure of what you are doing.
+
+In the `Job` object there is a map `constraints` from String to String
+allowing the user to tailor the schedulability of tasks within the job.
+
+Each slave in the cluster is assigned a set of string-valued
+key/value pairs called attributes. For example, consider the host
+`cluster1-aaa-03-sr2` and its following attributes (given in key:value
+format): `host:cluster1-aaa-03-sr2` and `rack:aaa`.
+
+The constraint map's key is the name of the attribute on which we
+constrain Tasks within our Job. The value specifies how we constrain them.
+There are two types of constraints: *limit constraints* and *value
+constraints*.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <td>Limit Constraint</td>
+      <td>A string that specifies a limit for a constraint: <code>limit:</code> followed by an Integer, such as <code>'limit:1'</code>.</td>
+    </tr>
+    <tr>
+      <td>Value Constraint</td>
+      <td>A string that specifies a value for a constraint. To include a list of values, separate the values using commas. To negate the values of a constraint, start with a <code>!</code>.</td>
+    </tr>
+  </tbody>
+</table>
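+
+For example, using the attributes of the host described above (a sketch):
+
+    constraints = {
+      'host': 'cluster1-aaa-03-sr2',  # value constraint: pin to this host
+    }
+
+    constraints = {
+      'rack': '!aaa',                 # negated value constraint: avoid rack aaa
+    }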
+
+You can also control machine diversity using constraints. The below
+constraint ensures that no more than two instances of your job may run
+on a single host. Think of this as a "group by" limit.
+
+    constraints = {
+      'host': 'limit:2',
+    }
+
+Likewise, you can use constraints to control rack diversity, e.g. at
+most one task per rack:
+
+    constraints = {
+      'rack': 'limit:1',
+    }
+
+Use these constraints sparingly as they can dramatically reduce Tasks' schedulability.
+
+Template Namespaces
+===================
+
+Currently, a few Pystachio namespaces have special semantics. Using them
+in your configuration allows you to tailor application behavior
+through environment introspection or to interact in special ways with the
+Aurora client or Aurora-provided services.
+
+### mesos Namespace
+
+The `mesos` namespace contains the `instance` variable that can be used
+to distinguish between Task replicas.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <td><code>instance</code></td>
+      <td>Integer</td>
+      <td>The instance number of the created task. A job with 5 replicas has instance numbers 0, 1, 2, 3, and 4.</td>
+    </tr>
+  </tbody>
+</table>
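+
+For example, a Process can use the instance number to vary per-replica
+behavior, as in this sketch (the server command line is hypothetical):
+
+    process = Process(
+      name = 'server',
+      cmdline = './run_server.sh --shard_id {{mesos.instance}}')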
+
+### thermos Namespace
+
+The `thermos` namespace contains variables that work directly on the
+Thermos platform in addition to Aurora. This namespace is fully
+compatible with Tasks invoked via the `thermos` CLI.
+
+<table border=1>
+  <tbody>
+    <tr>
+      <td><code>ports</code></td>
+      <td>map of string to Integer</td>
+      <td>A map of names to port numbers</td>
+    </tr>
+    <tr>
+      <td><code>task_id</code></td>
+      <td>string</td>
+      <td>The task ID assigned to this task.</td>
+    </tr>
+  </tbody>
+</table>
+
+The `thermos.ports` namespace is automatically populated by Aurora when
+invoking tasks on Mesos. When running the `thermos` command directly,
+these ports must be explicitly mapped with the `-P` option.
+
+For example, if `{{thermos.ports[http]}}` is specified in a `Process`
+configuration, the `http` port is automatically extracted and populated by
+Aurora, but must be specified explicitly, for example with `thermos -P http:12345`
+to map `http` to port 12345, when running via the CLI.
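+
+For example, a sketch of a Process that binds its server to a named port
+(the server command line is hypothetical):
+
+    http_server = Process(
+      name = 'http_server',
+      cmdline = './start_server.sh --port {{thermos.ports[http]}}')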
+
+Basic Examples
+==============
+
+These are provided to give a basic understanding of simple Aurora jobs.
+
+### hello_world.aurora
+
+Put the following in a file named `hello_world.aurora`, substituting your own
+values for fields such as `cluster`.
+
+    import os
+    hello_world_process = Process(name = 'hello_world', cmdline = 'echo hello world')
+
+    hello_world_task = Task(
+      resources = Resources(cpu = 0.1, ram = 16 * MB, disk = 16 * MB),
+      processes = [hello_world_process])
+
+    hello_world_job = Job(
+      cluster = 'cluster1',
+      role = os.getenv('USER'),
+      task = hello_world_task)
+
+    jobs = [hello_world_job]
+
+Then issue the following commands to create and kill the job, using your own values for the job key.
+
+    aurora create cluster1/$USER/test/hello_world hello_world.aurora
+
+    aurora kill cluster1/$USER/test/hello_world
+
+### Environment Tailoring
+
+#### hello_world_productionized.aurora
+
+Put the following in a file named `hello_world_productionized.aurora`, substituting
+your own values for fields such as `cluster`.
+
+    include('hello_world.aurora')
+
+    production_resources = Resources(cpu = 1.0, ram = 512 * MB, disk = 2 * GB)
+    staging_resources = Resources(cpu = 0.1, ram = 32 * MB, disk = 512 * MB)
+    hello_world_template = hello_world_job(
+        name = "hello_world-{{cluster}}",
+        task = hello_world_task(resources = production_resources))
+
+    jobs = [
+      # production jobs
+      hello_world_template(cluster = 'cluster1', instances = 25),
+      hello_world_template(cluster = 'cluster2', instances = 15),
+
+      # staging jobs
+      hello_world_template(
+        cluster = 'local',
+        instances = 1,
+        task = hello_world_task(resources = staging_resources)),
+    ]
+
+Then issue the following commands to create and kill the job, using your own values for the job key.
+
+    aurora create cluster1/$USER/test/hello_world-cluster1 hello_world_productionized.aurora
+
+    aurora kill cluster1/$USER/test/hello_world-cluster1