Posted to commits@mesos.apache.org by be...@apache.org on 2013/03/03 20:37:41 UTC

svn commit: r1452107 [1/2] - /incubator/mesos/trunk/docs/

Author: benh
Date: Sun Mar  3 19:37:40 2013
New Revision: 1452107

URL: http://svn.apache.org/r1452107
Log:
Added wiki from Github.

From: Adam Monsen <ha...@gmail.com>
Review: https://reviews.apache.org/r/9664

Added:
    incubator/mesos/trunk/docs/
    incubator/mesos/trunk/docs/90-day-plan.md
    incubator/mesos/trunk/docs/Allocation-module.textile
    incubator/mesos/trunk/docs/App-Framework-development-guide.textile
    incubator/mesos/trunk/docs/Configuration.textile
    incubator/mesos/trunk/docs/DRAFT:-Running-Hadoop-on-Mesos-UPDATED-INCOMPLETELY-12-13-11.md
    incubator/mesos/trunk/docs/Deploy-Scripts.textile
    incubator/mesos/trunk/docs/EC2-Scripts.textile
    incubator/mesos/trunk/docs/Event-history.md
    incubator/mesos/trunk/docs/Hadoop-demo-VM.md
    incubator/mesos/trunk/docs/Home.md
    incubator/mesos/trunk/docs/Logging-and-Debugging.textile
    incubator/mesos/trunk/docs/Mesos-Architecture.md
    incubator/mesos/trunk/docs/Mesos-Code-Internals.textile
    incubator/mesos/trunk/docs/Mesos-Roadmap.md
    incubator/mesos/trunk/docs/Mesos-c++-style-guide.md
    incubator/mesos/trunk/docs/Mesos-configure-command-flag-options.md
    incubator/mesos/trunk/docs/Mesos-developers-guide.md
    incubator/mesos/trunk/docs/Mesos-ready-to-go-AMI.md
    incubator/mesos/trunk/docs/Old-mesos-build-instructions.md
    incubator/mesos/trunk/docs/Powered-by-Mesos.md
    incubator/mesos/trunk/docs/Running-Hadoop-on-Mesos.md
    incubator/mesos/trunk/docs/Running-Mesos-On-Mac-OS-X-Snow-Leopard-(Single-Node-Cluster).md
    incubator/mesos/trunk/docs/Running-Mesos-on-Ubuntu-Linux-(Single-Node-Cluster).md
    incubator/mesos/trunk/docs/Running-a-Second-Instance-of-Hadoop-(Snow-Leopard).md
    incubator/mesos/trunk/docs/Running-a-web-application-farm-on-mesos.textile
    incubator/mesos/trunk/docs/Running-torque-or-mpi-on-mesos.md
    incubator/mesos/trunk/docs/Using-Linux-Containers.textile
    incubator/mesos/trunk/docs/Using-ZooKeeper.textile
    incubator/mesos/trunk/docs/Using-the-mesos-submit-tool.md

Added: incubator/mesos/trunk/docs/90-day-plan.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/90-day-plan.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/90-day-plan.md (added)
+++ incubator/mesos/trunk/docs/90-day-plan.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,162 @@
+# 90-day plan
+
+## At a glance
+
+1. Graduate from the Apache Incubator
+1. Add two independent committers
+1. Involve Mesos in the Google Summer of Code
+1. Automate creation of Mac OS X, .rpm, and .deb binary packages
+
+# Details
+
+This 90-day plan aims to bolster the Mesos community. It addresses "Growing a development community and promoting adoption" in the [[Mesos Roadmap]].
+
+Start: March 1st, 2013. End: May 30, 2013.
+
+## Graduate from the Apache Incubator
+
+* Status → incomplete.
+* Justification → Mesos must become a [sub]project or risk termination from Apache.
+* Who's working on this → (your name here)
+* Next steps → Identify people to work on this goal. Add committers. Call a vote.
+
+## Add two independent committers
+
+* Status → incomplete.
+* Justification → This is a prerequisite to Mesos graduating from the Apache incubator.
+* Who's working on this → (your name here).
+* Next steps → Identify people to work on this goal. See "Ideas on how to add independent committers" section, below.
+
+"Independent" is defined on the [Mesos incubation status page](http://incubator.apache.org/projects/mesos.html).
+
+## Involve Mesos in the Google Summer of Code
+
+* Status → incomplete.
+* Justification → This is a battle-tested means of attracting new, able developers.
+* Who's working on this → (your name here).
+* Next steps → Identify people to work on this goal. Define projects. Apply (March 18-29).
+
+## Automate creation of Mac OS X, .rpm, and .deb binary packages
+
+* Status → incomplete.
+* Justification → Binary packages greatly increase OSS project approachability.
+* Who's working on this → (your name here).
+* Next steps → Identify people to work on this goal.
+
+# Acceptance of plan
+
+* Chris Mattmann: (no reply yet).
+* Brian McCallister: (no reply yet).
+* Tom White: (no reply yet).
+* Ali Ghodsi: (no reply yet).
+* Benjamin Hindman: (no reply yet).
+* Andy Konwinski: (no reply yet).
+* Matei Zaharia: (no reply yet).
+
+# Brainstorm / extra material
+
+## Assumptions
+
+Compared to some Open Source projects (the Linux kernel, the Ruby programming
+language), Mesos is niche software.
+
+## Ideas on how to add independent committers
+
+* actively seek out new contributors
+* make sure potential contributors know about Mesos
+* who are potential contributors?
+  * current users that can write documentation or code
+  * anyone who manages clusters
+* how do we reach these potential contributors?
+  * publish scientific articles and whitepapers on Mesos
+  * talk about Mesos at conferences
+  * publish *non*-scientific articles on Mesos
+* reward contributors
+  * specific thanks / recognition
+* spread the word
+  * blog, tweet, etc.
+  * write up regular (monthly or weekly) updates ([example from the Symfony2 community](http://symfony.com/blog/a-week-of-symfony-317-21-27-january-2013), note that commit summaries are useful, note "They talked about us" section)
+  * apply for relevant competitions / awards
+  * participate in the Google Summer of Code
+  * get on [FLOSS Weekly](http://twit.tv/show/floss-weekly)
+  * do more meetups (mentioned in [January 2013 podling report](http://wiki.apache.org/incubator/January2013))
+* highlight contributions
+  * [example from Symfony2 community](http://symfony.com/blog/new-in-symfony-2-2-autocomplete-on-the-command-line) - contributors can write these themselves!
+* organize coding / doc / testing sprints
+
+## Ideas on how to make Mesos more approachable
+
+* What does "approachable" mean?
+  * easy to use (see "Regularly build OS-specific binary packages" goal, below)
+  * easy to develop
+* Visualize approaching Mesos as a potential contributor
+  * What would inspire you to try Mesos?
+  * What would inspire you to contribute?
+* What do you look for when you approach Open Source Software?
+* improve documentation
+  * cull old/outdated [wiki] pages
+* screencasts/demos (especially of most exciting features)
+* curate many bite-size tasks for contributors
+  * refactoring
+  * code cleanup
+  * testing
+  * documentation
+  * DONE. [Vinod Kone suggests](http://mail-archives.apache.org/mod_mbox/incubator-mesos-dev/201301.mbox/%3CCAAkWvAyo6uEu76%3DPjn2ZePOy7ZG4ksHz_AG%3D9P44M2t%2BnOka6A%40mail.gmail.com%3E) perusing [issues with Minor or Trivial priority](https://issues.apache.org/jira/browse/MESOS#selectedTab=com.atlassian.jira.plugin.system.project%3Aissues-panel)
+* Idea: use a survey to measure approachability of Mesos
+  * how approachable is Mesos compared to other OSS?
+  * what would make Mesos more approachable to you?
+* mirror git repo to github ([DONE](https://github.com/apache/mesos))
+* mirror mesos-dev mailing list to gmane
+* add screenshots
+* start a Mesos IRC channel
+* [find a volunteer to] create a logo (see [MESOS-337](https://issues.apache.org/jira/browse/MESOS-337))
+* separate automated/notification emails (from review board, jenkins, etc.) from hand-typed email discussions
+* announce supported compiler version(s), and *make sure Mesos compiles with these* (ideally with continuous integration)
+* announce supported runtime hardware/configuration(s), and *make sure Mesos runs on these* (ideally with continuous integration)
+* reduce sources of truth
+  * http://www.mesosproject.org and http://incubator.apache.org/mesos/ look identical, just make one redirect to the other.
+
+## Other ideas
+
+* do more releases (mentioned in [January 2013 podling report](http://wiki.apache.org/incubator/January2013))
+* Make Mesos compile with gcc-4.7 (see [MESOS-271](https://issues.apache.org/jira/browse/MESOS-271))
+* Improve Mesos evangelization
+  * add more to https://github.com/mesos/mesos/wiki/Powered-by-Mesos (AirBnB?)
+* get Hadoop customizations upstream
+* add all goals in the 90-day plan to JIRA, track them there instead of here
+* trademark the name Mesos
+
+## OSS marketing
+
+One aspect of having a successful Open Source project is marketing. To address
+this, we can Test, Measure, Act, and Repeat. For example:
+
+* Test → [apply to] participate in the Google Summer of Code (hereafter GSoC)
+* Measure → track number of contributors gained
+* Act → Gained 2 contributors? Participate in other programs similar to GSoC, host a program like GSoC, plan to participate in GSoC next year.
+* Repeat → GOTO Test
+
+## See also
+
+* [The Art of Community](http://www.artofcommunityonline.org/) by Jono Bacon
+* [Producing Open Source Software](http://producingoss.com/) by Karl Fogel
+* [announcement of this plan](http://mail-archives.apache.org/mod_mbox/incubator-mesos-dev/201302.mbox/%3C511ADF8E.5080700%40gmail.com%3E)
+
+# Conventions for this page
+
+To facilitate future changes to this document (and discussion of same), please
+observe the following conventions:
+
+* [Markdown syntax](http://daringfireball.net/projects/markdown/syntax).
+* Try to wrap lines at 80 characters (exceptions: literal code examples or long
+  strings like URLs).
+* Use spaces only (no tabs).
+
+Like the meritocracy of Apache, the best ideas win in this document. Anything
+without a clear direction or consensus should be discussed and decided on the
+mailing list.
+
+## Writing Goals
+
+Goals are, ideally, SMART (specific, measurable, actionable, realistic, and
+timely). Excuse the corny acronym; it's just a useful mnemonic.
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Allocation-module.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Allocation-module.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Allocation-module.textile (added)
+++ incubator/mesos/trunk/docs/Allocation-module.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,33 @@
+h1. Overview
+
+The logic that the Mesos master uses to determine which frameworks to make resource offers to is encapsulated in the Master's _allocation module_.  The allocation module is a pluggable component that organizations can use to implement their own sharing policy, e.g. fair-sharing, Dominant Resource Fairness (see [[the DRF paper|http://www.eecs.berkeley.edu/Pubs/TechRpts/2010/EECS-2010-55.pdf]]), priority, etc.
+
+h1. Allocation Module API
+
+Mesos is implemented in C++, so allocation modules are also implemented in C++ and inherit from the @Allocator@ class defined in @MESOS_HOME/src/master/allocator.hpp@. As of the time of this writing (11/12/10), the API for allocation modules is as follows:
+
+```c++
+  virtual ~Allocator() {}
+
+  virtual void frameworkAdded(Framework *framework) {}
+
+  virtual void frameworkRemoved(Framework *framework) {}
+
+  virtual void slaveAdded(Slave *slave) {}
+
+  virtual void slaveRemoved(Slave *slave) {}
+
+  virtual void taskAdded(Task *task) {}
+
+  virtual void taskRemoved(Task *task, TaskRemovalReason reason) {}
+
+  virtual void offerReturned(SlotOffer* offer,
+                             OfferReturnReason reason,
+                             const std::vector<SlaveResources>& resourcesLeft) {}
+
+  virtual void offersRevived(Framework *framework) {}
+
+  virtual void timerTick() {}
+```
+
+The default allocation module is the SimpleAllocator, which can be found in @MESOS_HOME/src/master/simple_allocator.cpp@ and @...hpp@. You can reference these as a starting place if you choose to write your own allocation module.
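+
+As a rough illustration only (this sketch assumes the interface shown above and omits how offers are actually handed back to the master, since that mechanism depends on your Mesos version), a custom allocation module might start out like this:
+
+```c++
+#include <algorithm>
+#include <vector>
+
+#include "master/allocator.hpp" // The header named above; defines Allocator, Framework, Slave, Task.
+
+// A skeleton allocator that just tracks which frameworks and slaves exist.
+// A real policy (fair sharing, priority, DRF, ...) would use this state in
+// timerTick() to decide who gets offered the free resources on each slave.
+class ExampleAllocator : public Allocator
+{
+public:
+  virtual void frameworkAdded(Framework* framework)
+  {
+    frameworks.push_back(framework);
+  }
+
+  virtual void frameworkRemoved(Framework* framework)
+  {
+    frameworks.erase(
+        std::remove(frameworks.begin(), frameworks.end(), framework),
+        frameworks.end());
+  }
+
+  virtual void slaveAdded(Slave* slave)
+  {
+    slaves.push_back(slave);
+  }
+
+  virtual void slaveRemoved(Slave* slave)
+  {
+    slaves.erase(
+        std::remove(slaves.begin(), slaves.end(), slave), slaves.end());
+  }
+
+  virtual void timerTick()
+  {
+    // Decide on an allocation and make the corresponding offers here.
+  }
+
+private:
+  std::vector<Framework*> frameworks;
+  std::vector<Slave*> slaves;
+};
+```
+
+The SimpleAllocator mentioned above is the best reference for how the decisions made in @timerTick()@ are turned into actual offers.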

Added: incubator/mesos/trunk/docs/App-Framework-development-guide.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/App-Framework-development-guide.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/App-Framework-development-guide.textile (added)
+++ incubator/mesos/trunk/docs/App-Framework-development-guide.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,202 @@
+h1. App/Framework development guide
+
+<b>Note:</b> In this document (and also in the Mesos code base, at least as of Feb 14, 2012), we refer to Mesos Applications as "Frameworks".
+
+See one of the example framework schedulers in @MESOS_HOME/src/examples/@ to get an idea of what a Mesos framework scheduler and executor in the language of your choice looks like.
+
+h2. Create your Framework Scheduler
+
+You can write a framework scheduler in C, C++, Java/Scala, or Python. Your framework scheduler should inherit from the @Scheduler@ class (see API below). Your scheduler should create a SchedulerDriver (which will mediate communication between your scheduler and the Mesos master) and then call @SchedulerDriver.run()@.
+
+h3. Scheduler API (as of 2013/02/01) declared in @MESOS_HOME/include/mesos/scheduler.hpp@
+
+```c++
+  /**
+   * Empty virtual destructor (necessary to instantiate subclasses).
+   */
+  virtual ~Scheduler() {}
+
+  /**
+   * Invoked when the scheduler successfully registers with a Mesos
+   * master. A unique ID (generated by the master) used for
+   * distinguishing this framework from others and MasterInfo
+   * with the ip and port of the current master are provided as arguments.
+   */
+  virtual void registered(SchedulerDriver* driver,
+                          const FrameworkID& frameworkId,
+                          const MasterInfo& masterInfo) = 0;
+
+  /**
+   * Invoked when the scheduler re-registers with a newly elected Mesos master.
+   * This is only called when the scheduler has previously been registered.
+   * MasterInfo containing the updated information about the elected master
+   * is provided as an argument.
+   */
+  virtual void reregistered(SchedulerDriver* driver,
+                            const MasterInfo& masterInfo) = 0;
+
+  /**
+   * Invoked when the scheduler becomes "disconnected" from the master
+   * (e.g., the master fails and another is taking over).
+   */
+  virtual void disconnected(SchedulerDriver* driver) = 0;
+
+  /**
+   * Invoked when resources have been offered to this framework. A
+   * single offer will only contain resources from a single slave.
+   * Resources associated with an offer will not be re-offered to
+   * _this_ framework until either (a) this framework has rejected
+   * those resources (see SchedulerDriver::launchTasks) or (b) those
+   * resources have been rescinded (see Scheduler::offerRescinded).
+   * Note that resources may be concurrently offered to more than one
+   * framework at a time (depending on the allocator being used). In
+   * that case, the first framework to launch tasks using those
+   * resources will be able to use them while the other frameworks
+   * will have those resources rescinded (or if a framework has
+   * already launched tasks with those resources then those tasks will
+   * fail with a TASK_LOST status and a message saying as much).
+   */
+  virtual void resourceOffers(SchedulerDriver* driver,
+                              const std::vector<Offer>& offers) = 0;
+
+  /**
+   * Invoked when an offer is no longer valid (e.g., the slave was
+   * lost or another framework used resources in the offer). If for
+   * whatever reason an offer is never rescinded (e.g., dropped
+   * message, failing over framework, etc.), a framework that attempts
+   * to launch tasks using an invalid offer will receive TASK_LOST
+   * status updates for those tasks (see Scheduler::resourceOffers).
+   */
+  virtual void offerRescinded(SchedulerDriver* driver,
+                              const OfferID& offerId) = 0;
+
+  /**
+   * Invoked when the status of a task has changed (e.g., a slave is
+   * lost and so the task is lost, a task finishes and an executor
+   * sends a status update saying so, etc). Note that returning from
+   * this callback _acknowledges_ receipt of this status update! If
+   * for whatever reason the scheduler aborts during this callback (or
+   * the process exits) another status update will be delivered (note,
+   * however, that this is currently not true if the slave sending the
+   * status update is lost/fails during that time).
+   */
+  virtual void statusUpdate(SchedulerDriver* driver,
+                            const TaskStatus& status) = 0;
+
+  /**
+   * Invoked when an executor sends a message. These messages are best
+   * effort; do not expect a framework message to be retransmitted in
+   * any reliable fashion.
+   */
+  virtual void frameworkMessage(SchedulerDriver* driver,
+                                const ExecutorID& executorId,
+                                const SlaveID& slaveId,
+                                const std::string& data) = 0;
+
+  /**
+   * Invoked when a slave has been determined unreachable (e.g.,
+   * machine failure, network partition). Most frameworks will need to
+   * reschedule any tasks launched on this slave on a new slave.
+   */
+  virtual void slaveLost(SchedulerDriver* driver,
+                         const SlaveID& slaveId) = 0;
+
+  /**
+   * Invoked when an executor has exited/terminated. Note that any
+   * tasks running will have TASK_LOST status updates automagically
+   * generated.
+   */
+  virtual void executorLost(SchedulerDriver* driver,
+                            const ExecutorID& executorId,
+                            const SlaveID& slaveId,
+                            int status) = 0;
+
+  /**
+   * Invoked when there is an unrecoverable error in the scheduler or
+   * scheduler driver. The driver will be aborted BEFORE invoking this
+   * callback.
+   */
+  virtual void error(SchedulerDriver* driver, const std::string& message) = 0;
+```
+
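+A minimal scheduler simply implements every pure virtual method above. The following is only an illustrative sketch (the header path, namespace, and exact @SchedulerDriver::launchTasks@ signature may differ slightly between Mesos versions, and the @MesosSchedulerDriver@ construction/run boilerplate is omitted -- see the complete programs in @MESOS_HOME/src/examples/@): it logs every resource offer and rejects it by launching an empty list of tasks.
+
+```c++
+#include <iostream>
+#include <string>
+#include <vector>
+
+#include <mesos/scheduler.hpp>
+
+using namespace mesos;
+
+// Every pure virtual method of the Scheduler interface above must be
+// implemented, even if the body is empty.
+class LoggingScheduler : public Scheduler
+{
+public:
+  virtual void registered(SchedulerDriver* driver,
+                          const FrameworkID& frameworkId,
+                          const MasterInfo& masterInfo)
+  {
+    std::cout << "Registered with framework ID " << frameworkId.value()
+              << std::endl;
+  }
+
+  virtual void reregistered(SchedulerDriver* driver,
+                            const MasterInfo& masterInfo) {}
+
+  virtual void disconnected(SchedulerDriver* driver) {}
+
+  virtual void resourceOffers(SchedulerDriver* driver,
+                              const std::vector<Offer>& offers)
+  {
+    // This sketch does no real work: it rejects each offer by launching an
+    // empty list of tasks (see the resourceOffers comment above).
+    for (size_t i = 0; i < offers.size(); i++) {
+      std::cout << "Declining offer " << offers[i].id().value() << std::endl;
+      driver->launchTasks(offers[i].id(), std::vector<TaskInfo>());
+    }
+  }
+
+  virtual void offerRescinded(SchedulerDriver* driver,
+                              const OfferID& offerId) {}
+
+  virtual void statusUpdate(SchedulerDriver* driver, const TaskStatus& status)
+  {
+    std::cout << "Task " << status.task_id().value()
+              << " is in state " << status.state() << std::endl;
+  }
+
+  virtual void frameworkMessage(SchedulerDriver* driver,
+                                const ExecutorID& executorId,
+                                const SlaveID& slaveId,
+                                const std::string& data) {}
+
+  virtual void slaveLost(SchedulerDriver* driver, const SlaveID& slaveId) {}
+
+  virtual void executorLost(SchedulerDriver* driver,
+                            const ExecutorID& executorId,
+                            const SlaveID& slaveId,
+                            int status) {}
+
+  virtual void error(SchedulerDriver* driver, const std::string& message)
+  {
+    std::cerr << "Error: " << message << std::endl;
+  }
+};
+```
+
+From here, a real scheduler would inspect the resources in each offer and build @TaskInfo@ messages to launch instead of declining.
+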
+h2. Create your Framework Executor
+
+Your framework executor must inherit from the Executor class. It must override the launchTask() method.
+You can use the $MESOS_HOME environment variable inside your executor to determine where Mesos is running from.
+
+h3. Executor API (as of 2013/02/01) declared in @MESOS_HOME/include/mesos/executor.hpp@
+
+```c++
+  /**
+   * Invoked once the executor driver has been able to successfully
+   * connect with Mesos. In particular, a scheduler can pass some
+   * data to its executors through the FrameworkInfo.ExecutorInfo's
+   * data field.
+   */
+  virtual void registered(ExecutorDriver* driver,
+                          const ExecutorInfo& executorInfo,
+                          const FrameworkInfo& frameworkInfo,
+                          const SlaveInfo& slaveInfo) = 0;
+
+  /**
+   * Invoked when the executor re-registers with a restarted slave.
+   */
+  virtual void reregistered(ExecutorDriver* driver,
+                            const SlaveInfo& slaveInfo) = 0;
+
+  /**
+   * Invoked when the executor becomes "disconnected" from the slave
+   * (e.g., the slave is being restarted due to an upgrade).
+   */
+  virtual void disconnected(ExecutorDriver* driver) = 0;
+
+  /**
+   * Invoked when a task has been launched on this executor (initiated
+   * via Scheduler::launchTasks). Note that this task can be realized
+   * with a thread, a process, or some simple computation, however, no
+   * other callbacks will be invoked on this executor until this
+   * callback has returned.
+   */
+  virtual void launchTask(ExecutorDriver* driver,
+                          const TaskInfo& task) = 0;
+
+  /**
+   * Invoked when a task running within this executor has been killed
+   * (via SchedulerDriver::killTask). Note that no status update will
+   * be sent on behalf of the executor, the executor is responsible
+   * for creating a new TaskStatus (i.e., with TASK_KILLED) and
+   * invoking ExecutorDriver::sendStatusUpdate.
+   */
+  virtual void killTask(ExecutorDriver* driver, const TaskID& taskId) = 0;
+
+  /**
+   * Invoked when a framework message has arrived for this
+   * executor. These messages are best effort; do not expect a
+   * framework message to be retransmitted in any reliable fashion.
+   */
+  virtual void frameworkMessage(ExecutorDriver* driver,
+                                const std::string& data) = 0;
+
+  /**
+   * Invoked when the executor should terminate all of its currently
+   * running tasks. Note that after Mesos has determined that an
+   * executor has terminated, a TASK_LOST status update will be created
+   * for any tasks that the executor did not send terminal status
+   * updates for (e.g., TASK_KILLED, TASK_FINISHED, TASK_FAILED, etc).
+   */
+  virtual void shutdown(ExecutorDriver* driver) = 0;
+
+  /**
+   * Invoked when a fatal error has occurred with the executor and/or
+   * executor driver. The driver will be aborted BEFORE invoking this
+   * callback.
+   */
+  virtual void error(ExecutorDriver* driver, const std::string& message) = 0;
+```
+
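+As with the scheduler, a minimal executor implements every pure virtual method above. The sketch below is illustrative only (the header path, namespace, and driver method signatures may differ between Mesos versions, and the @MesosExecutorDriver@ boilerplate is omitted -- see @MESOS_HOME/src/examples/@ for complete programs); it simply marks every task as finished immediately.
+
+```c++
+#include <iostream>
+#include <string>
+
+#include <mesos/executor.hpp>
+
+using namespace mesos;
+
+// A do-nothing executor: every task "finishes" immediately. Real executors
+// would typically hand the work off to a thread, since no other callbacks
+// are delivered until launchTask() returns (see the comment above).
+class NoopExecutor : public Executor
+{
+public:
+  virtual void registered(ExecutorDriver* driver,
+                          const ExecutorInfo& executorInfo,
+                          const FrameworkInfo& frameworkInfo,
+                          const SlaveInfo& slaveInfo) {}
+
+  virtual void reregistered(ExecutorDriver* driver,
+                            const SlaveInfo& slaveInfo) {}
+
+  virtual void disconnected(ExecutorDriver* driver) {}
+
+  virtual void launchTask(ExecutorDriver* driver, const TaskInfo& task)
+  {
+    std::cout << "Running task " << task.task_id().value() << std::endl;
+
+    // Report the task as finished. The executor is responsible for sending
+    // its own terminal status updates (see killTask above).
+    TaskStatus status;
+    status.mutable_task_id()->MergeFrom(task.task_id());
+    status.set_state(TASK_FINISHED);
+    driver->sendStatusUpdate(status);
+  }
+
+  virtual void killTask(ExecutorDriver* driver, const TaskID& taskId)
+  {
+    TaskStatus status;
+    status.mutable_task_id()->MergeFrom(taskId);
+    status.set_state(TASK_KILLED);
+    driver->sendStatusUpdate(status);
+  }
+
+  virtual void frameworkMessage(ExecutorDriver* driver,
+                                const std::string& data) {}
+
+  virtual void shutdown(ExecutorDriver* driver) {}
+
+  virtual void error(ExecutorDriver* driver, const std::string& message)
+  {
+    std::cerr << "Error: " << message << std::endl;
+  }
+};
+```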
+
+h2. Install your Framework
+
+You need to put your framework somewhere that all slaves on the cluster can get it from. If you are running HDFS, you can put your executor into HDFS. Then, you tell Mesos where it is via the @ExecutorInfo@ parameter of @MesosSchedulerDriver@'s constructor (e.g. see src/examples/java/TestFramework.java for an example of this). ExecutorInfo is a Protocol Buffer message class (defined in @include/mesos/mesos.proto@), and you set its uri field to something like "hdfs://path/to/executor/". Alternatively, you can pass the @frameworks_home@ configuration option (default: @MESOS_HOME/frameworks@) to your @mesos-slave@ daemons when you launch them to specify where all of your framework executors are stored (e.g. on an NFS mount that is available to all slaves), then set the uri in @ExecutorInfo@ to a relative path, and the slave will prepend the value of @frameworks_home@ to that relative path.
+
+Once you are sure that your executors are available to the mesos-slaves, you should be able to run your scheduler, which will register with the Mesos master, and start receiving resource offers!
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Configuration.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Configuration.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Configuration.textile (added)
+++ incubator/mesos/trunk/docs/Configuration.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,39 @@
+The Mesos master and slave can take a variety of configuration options through command-line arguments, environment variables, or a config file. A list of the available options can be seen by running @mesos-master --help@ or @mesos-slave --help@. Each option can be set in three ways:
+* By passing it to the binary using @--option_name=value@.
+* By setting the environment variable @MESOS_OPTION_NAME@ (the option name with a @MESOS_@ prefix added to it).
+* By adding a line in the @mesos.conf@ file located in @MESOS_HOME/conf@, where @MESOS_HOME@ is the directory in which Mesos is located.
+
+Configuration values are searched for first on the command line, then in the environment, and then in the config file.
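+
+For example, a slave's @work_dir@ option (the value @/var/mesos@ below is just a placeholder) could be supplied in any of these equivalent ways, listed from highest to lowest precedence (this assumes the plain @option=value@ line format for @mesos.conf@):
+
+```
+# 1. On the command line:
+./bin/mesos-slave --work_dir=/var/mesos
+
+# 2. As an environment variable:
+MESOS_WORK_DIR=/var/mesos ./bin/mesos-slave
+
+# 3. As a line in MESOS_HOME/conf/mesos.conf:
+work_dir=/var/mesos
+```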
+
+In addition, it is possible to tell the binaries to use a different configuration directory from the default, in order to run multiple Mesos configurations from the same install directories. This is done by passing the @--conf=DIR@ option to the binary. The directory passed should contain a file called @mesos.conf@. A config directory is used instead of passing a file directly because, in the future, there may be multiple config files in the same directory (e.g. a separate one for the allocation module).
+
+h1. Common Options
+
+The following list may not match the version of Mesos you are running, as the configuration flags evolve from time to time in the code.
+
+<div style="font-size:1.5em"><b>The definitive source for which flags your version of Mesos supports can be found by running the binary with the flag <code>--help</code>, for example <code>bin/mesos-master --help</code>.</b></div>
+
+The following options are commonly configured for both the master and slave:
+
+|_. Parameter         |_. Description|
+|@webui_port@     | Port to bind to for user-viewable web UI. |
+|@log_dir@             | Directory to place logs into, including [[event history]] logs. It is recommended to use a local disk for each node. |
+|@quiet@                | Disable logging to standard error. |
+|@conf@                | Specifies a config directory to use instead of the default one. |
+
+The following options are commonly configured for the slave:
+
+|_. Parameter         |_. Description|
+|@master@                     | Master URL to connect to. |
+|@resources@                 | A semicolon separated set of _key:value_ pairs, e.g. <code>cpus:1;mem:1024</code> |
+|@work_dir@         | Directory for slaves to place frameworks' output files in. It is recommended to use a local disk for each slave. |
+|@isolation@            | Isolation module to use for isolating tasks. The default is @process@, which performs no isolation other than running each framework as its user, but @lxc@ can be used on systems that support Linux Containers. See [[using Linux containers]] for details. |
+|@hadoop_home@ | Location of Hadoop, if you wish to use the Hadoop Distributed File System for distributing framework binaries across the cluster. |
+
+The following options are commonly configured for the master:
+
+|_. Parameter         |_. Description|
+|@url@                     | A ZooKeeper URL can be specified when [[using ZooKeeper]]. |
+|@port@                 | Port to bind to for communication between master and slaves. |
+|@allocator@           | Which pluggable [[allocation module]] to use. Currently there is only one, @simple@, that performs fair sharing between frameworks. |
+|@ip@                      | Which IP address to bind to (to make the master bind to a desired network interface if it does not do so by default). |

Added: incubator/mesos/trunk/docs/DRAFT:-Running-Hadoop-on-Mesos-UPDATED-INCOMPLETELY-12-13-11.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/DRAFT%3A-Running-Hadoop-on-Mesos-UPDATED-INCOMPLETELY-12-13-11.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/DRAFT:-Running-Hadoop-on-Mesos-UPDATED-INCOMPLETELY-12-13-11.md (added)
+++ incubator/mesos/trunk/docs/DRAFT:-Running-Hadoop-on-Mesos-UPDATED-INCOMPLETELY-12-13-11.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,207 @@
+We have ported version 0.20.2 of Hadoop to run on Mesos. Most of the Mesos port is implemented by a pluggable Hadoop scheduler, which communicates with Mesos to receive nodes to launch tasks on. However, a few small additions to Hadoop's internal APIs are also required.
+
+The ported version of Hadoop is included in the Mesos project under `frameworks/hadoop-0.20.2`. However, if you want to patch your own version of Hadoop to add Mesos support, you can also use the patch located at `frameworks/hadoop-0.20.2/hadoop-mesos.patch`. This patch should apply on any 0.20.* version of Hadoop, and is also likely to work on Hadoop distributions derived from 0.20, such as Cloudera's or Yahoo!'s.
+
+To run Hadoop on Mesos, follow these steps:
+<ol>
+<li> Build Hadoop using <code>ant</code>.</li>
+<li> Set up [[Hadoop's configuration|http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/]] as you would usually do with a new install of Hadoop, following the [[instructions on the Hadoop website|http://hadoop.apache.org/common/docs/r0.20.2/index.html]] (at the very least, you need to set <code>JAVA_HOME</code> in Hadoop's <code>conf/hadoop-env.sh</code> and set <code>mapred.job.tracker</code> in <code>conf/mapred-site.xml</code>).</li>
+<li> Add the following parameters to Hadoop's <code>conf/mapred-site.xml</code>:
+<pre>
+&lt;property&gt;
+  &lt;name&gt;mapred.jobtracker.taskScheduler&lt;/name&gt;
+  &lt;value&gt;org.apache.hadoop.mapred.MesosScheduler&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;mapred.mesos.master&lt;/name&gt;
+  &lt;value&gt;[URL of Mesos master]&lt;/value&gt;
+&lt;/property&gt;
+</pre>
+</li>
+<li> Launch a JobTracker with <code>bin/hadoop jobtracker</code> (<i>do not</i> use <code>bin/start-mapred.sh</code>). The JobTracker will then launch TaskTrackers on Mesos when jobs are submitted.</li>
+<li> Submit jobs to your JobTracker as usual.</li>
+</ol>
+
+Note that when you run on a cluster, Hadoop should be installed at the same path on all nodes, and so should Mesos.
+
+## Running a version of Hadoop other than the one included with Mesos
+If you run your own version of Hadoop instead of the one included in Mesos, you will first need to patch it and copy over the Mesos-specific Hadoop scheduler. You will then need to take an additional step before building and running Hadoop: you must set the `MESOS_HOME` environment variable to the location where Mesos is found. You need to do this both in your shell environment when you run `ant`, and in Hadoop's `hadoop-env.sh`.
+
+## Running Multiple Hadoops
+If you wish to run multiple JobTrackers (e.g. different versions of Hadoop), the easiest way is to give each one a different port by using a different Hadoop `conf` directory for each one and passing the `--conf` flag to `bin/hadoop` to specify which config directory to use. You can copy Hadoop's existing `conf` directory to a new location and modify it to achieve this.
+
+------------
+1. Setting up the environment:  
+    - Create/Edit ~/.bashrc file.  
+    `` ~$  vi ~/.bashrc ``  
+    add the following:  
+```
+    # Set Hadoop-related environment variables. Here the username is billz
+    export HADOOP_HOME=/Users/billz/mesos/frameworks/hadoop-0.20.2/  
+
+    # Add Hadoop bin/ directory to PATH  
+    export PATH=$PATH:$HADOOP_HOME/bin  
+
+    # Set where you installed Mesos. For me it is /Users/billz/mesos. billz is my username.
+    export MESOS_HOME=/Users/billz/mesos/
+```
+    - Go to the Hadoop directory that comes with Mesos:  
+    `cd ~/mesos/frameworks/hadoop-0.20.2/conf`  
+    - Edit **hadoop-env.sh** file.  
+    add the following:  
+```
+# The java implementation to use.  Required.
+export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home
+
+# Mesos uses this.
+export MESOS_HOME=/Users/username/mesos/
+
+# Extra Java runtime options.  Empty by default. This disables IPv6.
+export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
+```
+
+2. Hadoop configuration:  
+    - In **core-site.xml** file add the following:  
+```
+<!-- In: conf/core-site.xml -->
+<property>
+  <name>hadoop.tmp.dir</name>
+  <value>/app/hadoop/tmp</value>
+  <description>A base for other temporary directories.</description>
+</property>
+
+<property>
+  <name>fs.default.name</name>
+  <value>hdfs://localhost:54310</value>
+  <description>The name of the default file system.  A URI whose
+  scheme and authority determine the FileSystem implementation.  The
+  uri's scheme determines the config property (fs.SCHEME.impl) naming
+  the FileSystem implementation class.  The uri's authority is used to
+  determine the host, port, etc. for a filesystem.</description>
+</property>
+```
+    - In **hdfs-site.xml** file add the following:  
+```
+<!-- In: conf/hdfs-site.xml -->
+<property>
+  <name>dfs.replication</name>
+  <value>1</value>
+  <description>Default block replication.
+  The actual number of replications can be specified when the file is created.
+  The default is used if replication is not specified in create time.
+  </description>
+</property>
+```
+    - In **mapred-site.xml** file add the following:  
+```
+<!-- In: conf/mapred-site.xml -->
+<property>
+  <name>mapred.job.tracker</name>
+  <value>localhost:9001</value>
+  <description>The host and port that the MapReduce job tracker runs
+  at.  If "local", then jobs are run in-process as a single map
+  and reduce task.
+  </description>
+</property>
+
+<property>
+  <name>mapred.jobtracker.taskScheduler</name>
+  <value>org.apache.hadoop.mapred.MesosScheduler</value>
+</property>
+<property>
+  <name>mapred.mesos.master</name>
+  <value>mesos://master@10.1.1.1:5050</value> <!-- Here we are assuming your host IP address is 10.1.1.1 -->
+</property>
+
+```
+  
+3. Build the Hadoop-0.20.2 that comes with Mesos
+    - Go to hadoop-0.20.2 directory:  
+    ` ~$ cd ~/mesos/frameworks/hadoop-0.20.2`  
+    - Build Hadoop:  
+    ` ~$ ant `  
+    - Build the Hadoop's Jar files:  
+    ` ~$ ant compile-core jar`  
+    ` ~$ ant examples jar`
+
+4. Set up Hadoop’s Distributed File System **HDFS**:  
+    - create the directory and set the required ownerships and permissions: 
+```
+$ sudo mkdir /app/hadoop/tmp
+$ sudo chown billz:billz /app/hadoop/tmp
+# ...and if you want to tighten up security, chmod from 755 to 750...
+$ sudo chmod 750 /app/hadoop/tmp
+```
+    - format the Hadoop filesystem:  
+    `~/mesos/frameworks/hadoop-0.20.2$  bin/hadoop namenode -format`
+
+5. Copy local example data to HDFS  
+    - Download some plain text document from Project Gutenberg  
+    [The Notebooks of Leonardo Da Vinci](http://www.gutenberg.org/ebooks/5000)  
+    [Ulysses by James Joyce](http://www.gutenberg.org/ebooks/4300)  
+    [The Complete Works of William Shakespeare](http://www.gutenberg.org/ebooks/100)  
+    save these to /tmp/gutenberg/ directory.  
+    - Copy files from our local file system to Hadoop’s HDFS  
+    `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg ~/gutenberg`  
+    - Check the file(s) in Hadoop's HDFS  
+    `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -ls ~/gutenberg`       
+    - You should see something like the following:
+```
+ ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -ls ~/gutenberg
+Found 6 items
+-rw-r--r--   1 billz supergroup    5582656 2011-07-14 16:38 /user/billz/gutenberg/pg100.txt
+-rw-r--r--   1 billz supergroup    3322643 2011-07-14 16:38 /user/billz/gutenberg/pg135.txt
+-rw-r--r--   1 billz supergroup    1884720 2011-07-14 16:38 /user/billz/gutenberg/pg14833.txt
+-rw-r--r--   1 billz supergroup    2130906 2011-07-14 16:38 /user/billz/gutenberg/pg18997.txt
+-rw-r--r--   1 billz supergroup    3288707 2011-07-14 16:38 /user/billz/gutenberg/pg2600.txt
+-rw-r--r--   1 billz supergroup    1423801 2011-07-14 16:38 /user/billz/gutenberg/pg5000.txt
+
+```
+
+6. Start all your frameworks!
+    - Start Mesos's Master:      
+    ` ~/mesos$ bin/mesos-master &`  
+    - Start Mesos's Slave:       
+    ` ~/mesos$ bin/mesos-slave --master=mesos://master@localhost:5050 &`  
+    - Start Hadoop's namenode:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop namenode &`  
+    - Start Hadoop's datanode:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop datanode &`  
+    - Start Hadoop's jobtracker:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jobtracker &`  
+
+7. Run the MapReduce job:  
+   We will now run your first Hadoop MapReduce job. We will use the [WordCount](http://wiki.apache.org/hadoop/WordCount) example job which reads text files and counts how often words occur. The input is text files and the output is text files, each line of which contains a word and the count of how often it occurred, separated by a tab.  
+
+    - Run the "wordcount" example MapReduce job:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jar build/hadoop-0.20.3-dev-examples.jar wordcount ~/gutenberg ~/output`  
+    - You will see something like the following:  
+```
+11/07/19 15:34:29 INFO input.FileInputFormat: Total input paths to process : 6
+11/07/19 15:34:29 INFO mapred.JobClient: Running job: job_201107191533_0001
+11/07/19 15:34:30 INFO mapred.JobClient:  map 0% reduce 0%
+11/07/19 15:34:43 INFO mapred.JobClient:  map 16% reduce 0%
+
+[ ... trimmed ... ]
+```
+
+8. Web UI for Hadoop and Mesos:   
+    - [http://localhost:50030](http://localhost:50030) - web UI for MapReduce job tracker(s)  
+    - [http://localhost:50060](http://localhost:50060) - web UI for task tracker(s)  
+    - [http://localhost:50070](http://localhost:50070) - web UI for HDFS name node(s)  
+    - [http://localhost:8080](http://localhost:8080) - web UI for Mesos master  
+
+
+9. Retrieve the job result from HDFS:
+   - list the HDFS directory:
+```
+~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -ls ~/gutenberg
+Found 2 items
+drwxr-xr-x   - billz supergroup          0 2011-07-14 16:38 /user/billz/gutenberg
+drwxr-xr-x   - billz supergroup          0 2011-07-19 15:35 /user/billz/output
+```
+   - View the output file:  
+    `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -cat ~/output/part-r-00000`
+
+### Congratulations! You have Hadoop running on Mesos!

Added: incubator/mesos/trunk/docs/Deploy-Scripts.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Deploy-Scripts.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Deploy-Scripts.textile (added)
+++ incubator/mesos/trunk/docs/Deploy-Scripts.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,9 @@
+Mesos includes a set of scripts in @MESOS_HOME/deploy@ that can be used to deploy it on a cluster. To use these scripts, you need to create two configuration files: @MESOS_HOME/conf/masters@, which should list the hostname(s) of the node(s) you want to be your masters (one per line), and @MESOS_HOME/conf/slaves@, which should contain a list of hostnames for your slaves. You can then start a cluster with @deploy/start-mesos@ and stop it with @deploy/stop-mesos@.
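+
+For example, a small test cluster might use files like the following (the hostnames are placeholders for your own machines). @MESOS_HOME/conf/masters@ would contain:
+
+```
+master1.example.com
+```
+
+and @MESOS_HOME/conf/slaves@ would contain:
+
+```
+slave1.example.com
+slave2.example.com
+```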
+
+It is also possible to set environment variables, ulimits, etc. that will affect the master and slave by editing @MESOS_HOME/deploy/mesos-env.sh@. One particularly useful setting is @LIBPROCESS_IP@, which tells the master and slave binaries which IP address to bind to; in some installations, the default interface that the hostname resolves to is not the machine's external IP address, so you can set the right IP through this variable.
+
+Finally, the deploy scripts do not use ZooKeeper by default. If you want to use ZooKeeper (for multiple masters), you can do so by either editing @deploy/mesos-env.sh@ to set @MESOS_URL@ to a @zoo://@ or @zoofile://@ URL, or by editing @conf/mesos.conf@ to set the @url@ configuration parameter. Please see [[Using ZooKeeper]] for details.
+
+h1. Notes
+* The deploy scripts assume that Mesos is located in the same directory on all nodes.
+* If you want to enable multiple Unix users to submit to the same cluster, you need to run the Mesos slaves as root (or possibly set the right attributes on the @mesos-slave@ binary). Otherwise, they will fail to @setuid@.
\ No newline at end of file

Added: incubator/mesos/trunk/docs/EC2-Scripts.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/EC2-Scripts.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/EC2-Scripts.textile (added)
+++ incubator/mesos/trunk/docs/EC2-Scripts.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,52 @@
+The @mesos-ec2@ script located in Mesos's @ec2@ directory allows you to launch, manage and shut down Mesos clusters on Amazon EC2. You don't need to build Mesos to use this script -- you just need Python 2.6+ installed.
+
+@mesos-ec2@ is designed to manage multiple named clusters. You can launch a new cluster (telling the script its size and giving it a name), shutdown an existing cluster, or log into a cluster. Each cluster is identified by placing its machines into EC2 security groups whose names are derived from the name of the cluster. For example, a cluster named @test@ will contain a master node in a security group called @test-master@, and a number of slave nodes in a security group called @test-slaves@. The @mesos-ec2@ script will create these security groups for you based on the cluster name you request. You can also use them to identify machines belonging to each cluster in the EC2 Console or ElasticFox.
+
+This guide describes how to get set up to run clusters, how to launch clusters, how to run jobs on them, and how to shut them down.
+
+h1. Before You Start
+
+* Create an Amazon EC2 key pair for yourself. This can be done by logging into your Amazon Web Services account through the "AWS console":http://aws.amazon.com/console/, clicking Key Pairs on the left sidebar, and creating and downloading a key. Make sure that you set the permissions for the private key file to @600@ (i.e. only you can read and write it) so that @ssh@ will work.
+* Whenever you want to use the @mesos-ec2@ script, set the environment variables @AWS_ACCESS_KEY_ID@ and @AWS_SECRET_ACCESS_KEY@ to your Amazon EC2 access key ID and secret access key. These can be obtained from the "AWS homepage":http://aws.amazon.com/ by clicking Account > Security Credentials > Access Credentials.
+
+h1. Launching a Cluster
+
+* Go into the @ec2@ directory in the release of Mesos you downloaded.
+* Run @./mesos-ec2 -k <keypair> -i <key-file> -s <num-slaves> launch <cluster-name>@, where @<keypair>@ is the name of your EC2 key pair (that you gave it when you created it), @<key-file>@ is the private key file for your key pair, @<num-slaves>@ is the number of slave nodes to launch (try 1 at first), and @<cluster-name>@ is the name to give to your cluster.
+* After everything launches, check that Mesos is up and sees all the slaves by going to the Mesos Web UI link printed at the end of the script (@http://<master-hostname>:8080@).
+
+You can also run @./mesos-ec2 --help@ to see more usage options. The following options are worth pointing out:
+* @--instance-type=<INSTANCE_TYPE>@ can be used to specify an EC2 instance type to use. For now, the script only supports 64-bit instance types, and the default type is @m1.large@ (which has 2 cores and 7.5 GB RAM). Refer to the Amazon pages about "EC2 instance types":http://aws.amazon.com/ec2/instance-types and "EC2 pricing":http://aws.amazon.com/ec2/#pricing for information about other instance types. 
+* @--zone=<EC2_ZONE>@ can be used to specify an EC2 availability zone to launch instances in. Sometimes, you will get an error because there is not enough capacity in one zone, and you should try to launch in another. This happens mostly with the @m1.large@ instance types; extra-large (both @m1.xlarge@ and @c1.xlarge@) instances tend to be more available.
+* @--download=git@ will tell the instances to download the latest release of Mesos from the github git repository.
+* @--ft=<NUM_MASTERS>@ can be used to run Mesos in fault tolerant (FT) mode by specifying @NUM_MASTERS@ > 1.
+* If one of your launches fails due to e.g. not having the right permissions on your private key file, you can run @launch@ with the @--resume@ option to restart the setup process on an existing cluster.
+
+h1. Running Jobs
+
+* Go into the @ec2@ directory in the release of Mesos you downloaded.
+* Run @./mesos-ec2 -k <keypair> -i <key-file> login <cluster-name>@ to SSH into the cluster, where @<keypair>@ and @<key-file>@ are as above. (This is just for convenience; you could also use Elasticfox or the EC2 console.)
+* Copy your code to all the nodes. To do this, you can use the provided script @~/mesos-ec2/copy-dir@, which, given a directory path, RSYNCs it to the same location on all the slaves.
+* If your job needs to access large datasets, the fastest way to do that is to load them from Amazon S3 or an Amazon EBS device into an instance of the Hadoop Distributed File System (HDFS) on your nodes. The @mesos-ec2@ script already sets up an HDFS instance for you. It's installed in @/root/ephemeral-hdfs@, and can be accessed using the @bin/hadoop@ script in that directory. Note that the data in this HDFS goes away when you stop and restart a machine.
+* There is also a _persistent HDFS_ instance in @/root/persistent-hdfs@ that will keep data across cluster restarts. Typically each node has relatively little persistent space (about 3 GB), but you can use the @--ebs-vol-size@ option to @mesos-ec2@ to attach a persistent EBS volume to each node for storing the persistent HDFS.
+
+If you get an "Executor on slave X disconnected" error when running your framework, you probably haven't copied your code the slaves. Use the @~/mesos-ec2/copy-dir@ script to do that. If you keep getting the error, though, look at the slave's logs for that framework using the Mesos web UI. Please see [[logging and debugging]] for details.
+
+h1. Terminating a Cluster
+
+_*Note that there is no way to recover data on EC2 nodes after shutting them down! Make sure you have copied everything important off the nodes before stopping them.*_
+
+* Go into the @ec2@ directory in the release of Mesos you downloaded.
+* Run @./mesos-ec2 destroy <cluster-name>@.
+
+h1. Pausing and Restarting EBS-Backed Clusters
+
+The @mesos-ec2@ script also supports pausing a cluster if you are using EBS-backed virtual machines (which all of our machine images are by default). In this case, the VMs are stopped but not terminated, so they _*lose all data on ephemeral disks (/mnt, ephemeral-hdfs)*_ but keep the data in their root partitions and their @persistent-hdfs@. Stopped machines will not cost you any EC2 cycles, but _*will*_ continue to cost money for EBS storage.
+
+* To stop one of your clusters, go into the @ec2@ directory and run @./mesos-ec2 stop <cluster-name>@.
+* To restart it later, run @./mesos-ec2 -i <key-file> start <cluster-name>@.
+* To ultimately destroy the cluster and stop consuming EBS space, run @./mesos-ec2 destroy <cluster-name>@ as described in the previous section.
+
+h1. Limitations
+
+* The @mesos-ec2@ script currently does not use the [[deploy scripts]] included with Mesos to manage its clusters. This will likely be fixed in the future.
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Event-history.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Event-history.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Event-history.md (added)
+++ incubator/mesos/trunk/docs/Event-history.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,3 @@
+This is a feature [[currently being developed|https://github.com/mesos/mesos/tree/andyk-event-history-nowebui]] that will allow capturing the history of significant events in the context of all things Mesos (e.g. jobs/tasks registering, starting, failing, etc.) into a variety of places where it can be used in the future.
+
+This is still under development.
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Hadoop-demo-VM.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Hadoop-demo-VM.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Hadoop-demo-VM.md (added)
+++ incubator/mesos/trunk/docs/Hadoop-demo-VM.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,79 @@
+## Mesos VMWare Image with Hadoop pre-installed
+
+***
+Last Updated: August 2011
+
+[DOWNLOAD](http://amplab.cs.berkeley.edu/downloads/mesos/mesos-demo.tar.bz2)
+
+**Note: This VM download is approximately 1.7GB. Feel free to mirror it internally or externally to minimize bandwidth usage. The uncompressed VM requires 6GB of disk space.**
+
+To make it easy for you to get started with Apache Mesos, we created a virtual machine with everything you need. Our VM runs Ubuntu 10.04 LTS 64-bit (Lucid Lynx) with Mesos and Apache Hadoop 0.20.2 pre-installed.
+
+To launch the VMWare image, you will either need [VMware Player for Windows and Linux](http://www.vmware.com/go/downloadplayer/), or [VMware Fusion for Mac](http://www.vmware.com/products/fusion/). (Note that VMware Fusion only works on Intel architectures, so older Macs with PowerPC processors cannot run the training VM.)
+
+Once you launch the VM, log in with the following account details:  
+
+  - username: hadoop  
+  - password: hadoop
+
+The **hadoop** account has *sudo* privileges in the VM.
+
+### 1. Testing Mesos  
+* run ` ~/mesos$ bin/tests/all-tests `  
+
+```
+~/mesos$ bin/tests/all-tests   
+[==========] Running 61 tests from 6 test cases.  
+[----------] Global test environment set-up.  
+[----------] 18 tests from MasterTest  
+[ RUN      ] MasterTest.ResourceOfferWithMultipleSlaves  
+[       OK ] MasterTest.ResourceOfferWithMultipleSlaves (33 ms)  
+[ RUN      ] MasterTest.ResourcesReofferedAfterReject  
+[       OK ] MasterTest.ResourcesReofferedAfterReject (3 ms)  
+  
+[ ... trimmed ... ]  
+  
+[ RUN      ] MasterTest.MultipleExecutors  
+[       OK ] MasterTest.MultipleExecutors (2 ms)  
+[----------] 18 tests from MasterTest (38 ms total)  
+  
+[----------] Global test environment tear-down  
+[==========] 61 tests from 6 test cases ran. (17633 ms total)  
+[  PASSED  ] 61 tests.   
+  YOU HAVE 3 DISABLED TESTS    
+``` 
+
+### 2. Start all your frameworks!
+* Start Mesos's Master:      
+` ~/mesos$ bin/mesos-master &`  
+* Start Mesos's Slave:       
+` ~/mesos$ bin/mesos-slave --url=mesos://master@localhost:5050 &`  
+* Start Hadoop's namenode:  
+` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop namenode &`  
+* Start Hadoop's datanode:  
+` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop datanode &`  
+* Start Hadoop's jobtracker:  
+` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jobtracker &`
+
+### 3. Run the MapReduce job:  
+   We will now run your first Hadoop MapReduce job. We will use the [WordCount](http://wiki.apache.org/hadoop/WordCount) example job which reads text files and counts how often words occur. The input is text files and the output is text files, each line of which contains a word and the count of how often it occurred, separated by a tab.  
+
+* Run the "wordcount" example MapReduce job:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jar build/hadoop-0.20.3-dev-examples.jar wordcount /user/hadoop/g  /user/hadoop/output`  
+* You will see something like the following:  
+
+
+11/07/19 15:34:29 INFO input.FileInputFormat: Total input paths to process : 6
+11/07/19 15:34:29 INFO mapred.JobClient: Running job: job_201107191533_0001
+11/07/19 15:34:30 INFO mapred.JobClient:  map 0% reduce 0%
+11/07/19 15:34:43 INFO mapred.JobClient:  map 16% reduce 0%
+
+[ ... trimmed ... ]
+
+Click the Firefox Web Browser on the Panel to view Mesos's documentation.
+The browser also provides the following bookmarks:   
+
+   *  [http://localhost:50030](http://localhost:50030) - web UI for MapReduce job tracker(s)  
+   *  [http://localhost:50060](http://localhost:50060) - web UI for task tracker(s)  
+   *  [http://localhost:50070](http://localhost:50070) - web UI for HDFS name node(s)  
+   *  [http://localhost:8080](http://localhost:8080) - web UI for Mesos master  
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Home.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Home.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Home.md (added)
+++ incubator/mesos/trunk/docs/Home.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,110 @@
+# Overview of Mesos
+
+Mesos is a cluster management platform that provides resource sharing and isolation across distributed applications. For example, [Hadoop MapReduce](http://hadoop.apache.org), [MPI](http://www.mcs.anl.gov/research/projects/mpich2/), [Hypertable](http://hypertable.org), and [Spark](http://github.com/mesos/spark/wiki) (a new MapReduce-like framework from the Mesos team that supports low-latency interactive and iterative jobs) can run on Mesos.
+
+#### Mesos Use-cases
+* Run Hadoop, MPI, Spark and other cluster applications on a dynamically shared pool of machines
+* Run **multiple instances of Hadoop** on the same cluster, e.g. separate instances for production and experimental jobs, or even multiple versions of Hadoop
+* Build new cluster applications without having to reinvent low-level facilities for running tasks on different nodes, monitoring them, etc., and have your applications coexist with existing ones
+
+#### Features of Mesos
+* Master fault tolerance using ZooKeeper
+* Isolation between tasks using Linux Containers
+* Memory and CPU aware allocation
+* Efficient fine-grained resource sharing abstraction (<i>resource offers</i>) that allows applications to achieve app-specific scheduling goals (e.g. Hadoop optimizes for data locality)
+<br/>
+Visit [mesosproject.org](http://mesosproject.org) for more details about Mesos.
+
+_**Please note that Mesos is still in beta. Though the current version is in use in production at Twitter, it may have some stability issues in certain environments.**_
+
+#### What you'll find on this page
+1. Quick-start Guides
+1. System Requirements
+1. Downloading Mesos
+1. Building Mesos
+1. Testing the Build
+1. Deploying to a Cluster
+1. Where to Go From Here
+
+# System Requirements
+
+Mesos runs on Linux and Mac OS X, and has previously also been tested on OpenSolaris. You will need the following packages to run it:
+
+* g++ 4.1 or higher.
+* Python 2.6 (for the Mesos web UI). On Mac OS X 10.6 or earlier, get Python from [MacPorts](http://www.macports.org/) via `port install python26`, because OS X's version of Python is not 64-bit.
+* Python 2.6 developer packages (on Red Hat: `sudo yum install python26-devel python-devel`)
+* cppunit (for building ZooKeeper) (on Red Hat: `sudo yum install cppunit-devel`)
+* Java JDK 1.6 or higher. Mac OS X 10.6 users will need to install the JDK from http://connect.apple.com/ and set `JAVA_HEADERS=/System/Library/Frameworks/JavaVM.framework/Versions/A/Headers`.
+
+# Downloading and Building Mesos
+
+Mesos uses the standard GNU build tools. See the section farther below for instructions for checking out and building the source code via source control.
+
+To get running with Mesos version 0.9.0-incubating, our most recent release:
+
+1. [Download Mesos 0.9.0-incubating](http://www.apache.org/dyn/closer.cgi/incubator/mesos/mesos-0.9.0-incubating/)
+1. Run configure (there are a few helper scripts, named `configure.<type-of-os>`, that wrap the `configure` script)
+    1. In OS X: run `./configure.macosx`. If you're running Mountain Lion, you may need to follow the instructions [here](https://issues.apache.org/jira/browse/MESOS-261?focusedCommentId=13447058&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13447058) and [here](https://issues.apache.org/jira/browse/MESOS-285).
+    1. In Linux, you can probably just run `./configure --with-webui --with-included-zookeeper`. These flags are what we recommend; advanced users may want to exclude these flags or use others, see [[Mesos Configure Command Flag Options]].
+1. Run `make`
+
+### NOTES:
+* In the SVN trunk branch since Jan 19 2012 (when Mesos switched fully to the GNU Autotools build system), the build process attempts to guess where your Java include directory is, but if you have set the $JAVA_HOME environment variable, it will use $JAVA_HOME/include, which may not be correct (or exist) on your machine (in which case you will see an error such as: "configure: error: failed to build against JDK (using libtool)"). If this is the case, we suggest you unset the JAVA_HOME environment variable.
+* `configure` may print a warning at the end about "unrecognized options: --with-java-home, ...". This comes from one of the nested `configure` scripts that we call, so it doesn't mean that your options were ignored.
+* (NOT SURE IF THIS IS STILL RELEVANT) On 32-bit platforms, you should set `CXXFLAGS="-march=i486"` when running `configure` to ensure certain atomic instructions can be used.
+
+# Checking Mesos Source Code out of Git or SVN
+
+Currently, you can obtain the current Mesos development HEAD by checking it out from either the Apache SVN or Apache Git repository (the git repo is a mirror of the SVN repo):
+* For SVN, use: `svn co https://svn.apache.org/repos/asf/incubator/mesos/trunk mesos-trunk`
+* For git, use: `git clone git://git.apache.org/mesos.git`
+
+# Running Example Frameworks and Testing the Build
+
+Currently, in order to run the example frameworks (in src/examples), you must first build the test suite, as instructed below.
+
+After you build Mesos by running the `make` command, you can build and run its example frameworks and unit tests (which use the example frameworks) by issuing the `make check` command from the directory where you ran the `make` command.
+
+After you have done this, you can also set up a small Mesos cluster and run a job on it as follows:
+
+1. Go into the directory where you built Mesos.
+1. Type `bin/mesos-master.sh` to launch the master.
+1. Take note of the IP and port that the master is running on, which will look something like <code>192.168.0.1:5050</code>. <i>Note: in this example the master's IP address is 192.168.0.1 and its port is 5050, so the master URL is <code>192.168.0.1:5050</code>. We will use this URL for the rest of the example; replace every instance of it with the URL of your own master when you run the commands below.</i>
+1. View the master's web UI at `http://[hostname of master]:5050`.
+1. Launch a slave by typing <code>bin/mesos-slave.sh --master=192.168.0.1:5050</code>. The slave will show up on the master's web UI if you refresh it.
+1. Run the C++ test framework (a sample that just runs five tasks on the cluster) using <code>src/test-framework 192.168.0.1:5050</code>. It should successfully exit after running five tasks.
+1. You can also try the example Python or Java frameworks, with commands like the following:
+  2. `src/examples/java/test-framework 192.168.0.1:5050`
+  2. `src/examples/python/test-framework 192.168.0.1:5050`
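+
+Condensing the steps above into a single session (run each daemon in its own terminal, and substitute your master's actual IP and port for <code>192.168.0.1:5050</code>):
+
+    # Terminal 1: start the master
+    bin/mesos-master.sh
+    # Terminal 2: start a slave and point it at the master
+    bin/mesos-slave.sh --master=192.168.0.1:5050
+    # Terminal 3: run the C++ test framework against the master
+    src/test-framework 192.168.0.1:5050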
+
+# Deploying to a Cluster
+
+To run Mesos on more than one machine, you have multiple options:
+
+* To launch Mesos on a private cluster that you own, use Mesos's [[deploy scripts]].
+* To launch a cluster on Amazon EC2, you can use the Mesos [[EC2 scripts]].
+
+# Where to Go From Here
+
+* [[Mesos architecture]] -- an overview of Mesos concepts.
+* [[Mesos Developers guide]] -- resources for developers contributing to Mesos. Style guides, tips for working with SVN/git, and more!
+* [[App/Framework development guide]] -- learn how to build applications on top of Mesos.
+* [[Configuring Mesos|Configuration]] -- a guide to the various settings that can be passed to Mesos daemons.
+* Mesos system feature guides:
+    * [[Deploy scripts]] for launching a Mesos cluster on a set of machines.
+    * [[EC2 scripts]] for launching a Mesos cluster on Amazon EC2.
+    * [[Logging and Debugging]] -- viewing Mesos and framework logs.
+    * [[Using ZooKeeper]] for master fault-tolerance.
+    * [[Using Linux Containers]] for resource isolation on slaves.
+* Guide to running existing frameworks:
+    * [[Running Spark on Mesos|https://github.com/mesos/spark/wiki]]
+    * [[Using the Mesos Submit tool]]
+    * [[Using Mesos with Hypertable on EC2|http://code.google.com/p/hypertable/wiki/Mesos]] (external link) - Hypertable is a distributed database platform.
+    * [[Running Hadoop on Mesos]]
+    * [[Running a web application farm on Mesos]]
+    * [[Running Torque or MPI on Mesos]]
+* [[Powered-by|Powered-by Mesos]] -- Projects that are using Mesos!
+* [[Mesos code internals overview|Mesos Code Internals]] -- an overview of the codebase and internal organization.
+* [[Mesos development roadmap|Mesos Roadmap]]
+* [[The official Mesos website|http://mesosproject.org]]
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Logging-and-Debugging.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Logging-and-Debugging.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Logging-and-Debugging.textile (added)
+++ incubator/mesos/trunk/docs/Logging-and-Debugging.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,5 @@
+Mesos uses the "Google Logging library":http://code.google.com/p/google-glog and writes logs to @MESOS_HOME/logs@ by default, where @MESOS_HOME@ is the location where Mesos is installed. The log directory can be [[configured|Configuration]] using the @log_dir@ parameter.
+
+Frameworks that run on Mesos have their output stored to a "work" directory on each machine. By default, this is @MESOS_HOME/work@. Within this directory, a framework's output is placed in files called @stdout@ and @stderr@ in a directory of the form @slave-X/fw-Y/Z@, where X is the slave ID, Y is the framework ID, and multiple subdirectories Z are created for each attempt to run an executor for the framework. These files can also be accessed via the web UI of the slave daemon.
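+
+For example, to follow the standard error of one executor run, you might do something like the following (the @X@, @Y@, and @Z@ IDs are placeholders, and @MESOS_HOME@ is wherever Mesos is installed):
+
+<pre>
+# Tail the stderr of framework Y's executor attempt Z on slave X
+tail -f $MESOS_HOME/work/slave-X/fw-Y/Z/stderr
+</pre>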
+
+Finally, using an [[alpha code branch|https://github.com/mesos/mesos/tree/andyk-event-history-nowebui]] which (at the time of this writing) is still in an alpha state, structured logs of cluster activity can be written to a tab-separated text file or a SQLite database using the [[event history]] system.

Added: incubator/mesos/trunk/docs/Mesos-Architecture.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Mesos-Architecture.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Mesos-Architecture.md (added)
+++ incubator/mesos/trunk/docs/Mesos-Architecture.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,26 @@
+<p align="center">[[/images/architecture3.jpg|height=300px]]</p>
+
+The above figure shows the main components of Mesos. Mesos consists of a <i>master</i> daemon that manages <i>slave</i> daemons running on each cluster node, and <i>Mesos applications</i> (also called <i>frameworks</i>) that run <i>tasks</i> on these slaves.
+
+The master enables fine-grained sharing of resources (cpu, ram, ...) across applications by making them <i>resource offers</i>. Each resource offer contains a list of &lt;slave ID, resource1: amount1, resource2: amount2, ...&gt;. The master decides <i>how many</i> resources to offer to each framework according to a given organizational policy, such as fair sharing or strict priority. To support a diverse set of policies, the master employs a modular architecture that makes it easy to add new allocation modules via a plugin mechanism.
+
+A framework running on top of Mesos consists of two components: a <i>scheduler</i> that registers with the master to be offered resources, and an <i>executor</i> process that is launched on slave nodes to run the framework's tasks (see the [[App/Framework development guide]] for more details about application schedulers and executors). While the master determines <b>how many</b> resources are offered to each framework, the frameworks' schedulers select <b>which</b> of the offered resources to use. When a framework accepts offered resources, it passes to Mesos a description of the tasks it wants to run on them. In turn, Mesos launches the tasks on the corresponding slaves.
+
+## Example of resource offer 
+
+The figure below shows an example of how a framework gets scheduled to run a task.
+
+<p align="center">[[/images/architecture-example.jpg|height=300px]]</p>
+
+Let's walk through the events in the figure.
+
+1. Slave 1 reports to the master that it has 4 CPUs and 4 GB of memory free. The master then invokes the allocation policy module, which tells it that framework 1 should be offered all available resources.
+1. The master sends a resource offer describing what is available on slave 1 to framework 1.  
+1. The framework's scheduler replies to the master with information about two tasks to run on the slave, using &lt;2 CPUs, 1 GB RAM&gt; for the first task, and &lt;1 CPU, 2 GB RAM&gt; for the second task.
+1. Finally, the master sends the tasks to the slave, which allocates appropriate resources to the framework's executor, which in turn launches the two tasks (depicted with dotted-line borders in the figure). Because 1 CPU and 1 GB of RAM are still unallocated, the allocation module may now offer them to framework 2.
+
+In addition, this resource offer process repeats when tasks finish and new resources become free.
+
+While the thin interface provided by Mesos allows it to scale and allows the frameworks to evolve independently, one question remains: how can the constraints of a framework be satisfied without Mesos knowing about these constraints? For example, how can a framework achieve data locality without Mesos knowing which nodes store the data required by the framework? Mesos answers these questions by simply giving frameworks the ability to <b>reject</b> offers. A framework will reject the offers that do not satisfy its constraints and accept the ones that do. In particular, we have found that a simple policy called delay scheduling, in which frameworks wait for a limited time to acquire nodes storing the input data, yields nearly optimal data locality.
+
+You can also read much more about the Mesos architecture in this [[technical paper|http://mesos.berkeley.edu/mesos_tech_report.pdf]].

Added: incubator/mesos/trunk/docs/Mesos-Code-Internals.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Mesos-Code-Internals.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Mesos-Code-Internals.textile (added)
+++ incubator/mesos/trunk/docs/Mesos-Code-Internals.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,32 @@
+h1. Mesos Code Internals
+
+h2. Top-level directories in the Mesos distribution
+
+* ec2 - scripts for launching a Mesos cluster on EC2. See the wiki page on [[EC2-Scripts]]
+* frameworks - Included Mesos Frameworks. See the READMEs in each one. See the [[App/Framework development guide]] for a crash course in how Mesos Frameworks get resources from the Mesos master.
+* include - Contains headers that contain the interfaces that Mesos users need in order to interact with Mesos (e.g. the Mesos Framework API)
+* src - Contains the entire Mesos source tree. See below for more details about the directories inside of @src@.
+* third_party - Contains necessary open source third party software that Mesos leverages for things such as logging, etc.
+
+h2. Mesos source code 
+
+The Mesos source code (found in @MESOS_HOME/src@) is organized into the following hierarchy:
+
+* common - shared source files (such as utilities and data structures)
+* conf - where the mesos configuration file @mesos.conf@ lives
+* config - 
+* configurator - 
+* deploy
+* detector
+* examples
+* exec
+* launcher
+* local
+* master - source files specific to the mesos-master daemon
+* messaging
+* scaling
+* sched - source files for the framework scheduler driver
+* slave - source files specific to the mesos-slave daemon
+* swig - WARNING: this might go away (at least for Java, as of the proto buffer commit; see issue #3)
+* tests
+* webui
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Mesos-Roadmap.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Mesos-Roadmap.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Mesos-Roadmap.md (added)
+++ incubator/mesos/trunk/docs/Mesos-Roadmap.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,54 @@
+The Mesos development roadmap can roughly be separated into 3 main areas:
+
+1. Building a community and promoting adoption
+1. Core Mesos development
+1. Mesos Application development
+
+## 1. Growing a development community and promoting adoption
+1. Migrate into Apache Incubator (see the [[incubator proposal|http://wiki.apache.org/incubator/MesosProposal]])
+    1. Migrate issues from github issues into JIRA
+1. Documentation management and organization
+
+## 2. Core Mesos (e.g. scheduling, resource management, etc.)
+1. More advanced allocation modules that implement the following functionality
+    1. Resource revocation
+    1. Resource inheritance, hierarchical scheduling
+    1. A Mesos Service Level Objective
+    1. Scheduling based on history
+1. Migrate to ProtocolBuffers as the serialization format (under development at Twitter)
+1. User authentication
+1. Mesos application debugging support
+    1. More advanced User Interface and [[Event History]] (i.e. logging) - See [[issue #143|https://github.com/mesos/mesos/issuesearch?state=open&q=event#issue/143]] for more details.
+1. Testing infrastructure, and more tests!
+
+## 3. Mesos applications
+
+This category of future work is probably the most important! Technically speaking, a Mesos application is defined as a Mesos scheduler plus a Mesos executor. Practically speaking, applications can serve many purposes. These can be broken down into a few categories.
+
+### 3.1 Applications providing cluster OS functionality (e.g. storage, synchronization, naming...)
+
+The core of Mesos has been designed using the same philosophy behind traditional [[Microkernel operating systems|http://en.wikipedia.org/wiki/Microkernel]]. This means that the core of Mesos (the kernel, if you will) provides a minimal set of low-level abstractions for cluster resources (see [[Mesos Architecture]] for an introduction to the resource offer abstraction). This, in turn, means that higher level abstractions (analogous to the filesystem, memory sharing, etc. in traditional operating systems, as well as some abstractions that have no analog in traditional single-node operating systems, such as DNS) are left to be implemented as applications running on top of Mesos.
+
+The applications in this category are those which are primarily intended to be used by other applications (as opposed to being used by human users). They can be seen as operating system modules that implement functionality in the form of services.
+
+1. Meta Framework Development (under development at Berkeley, and also at Twitter)
+    1. Allow users to submit a job (specifying their resource constraints) and have the job wait in a queue until the resources are acquired (the Application Scheduler translates those constraints into accepting or rejecting resource offers)
+1. Slave data storage/caching services (a generalization of MapReduce's map output server)
+1. Distributed file systems, like HDFS
+1. Further Hypertable integration
+1. Mesos package management application (i.e. the "apt-get" of the cluster... `apt-get install hadoop-0.20.0`)
+1. A machine metadata database
+
+### 3.2 Applications providing user facing services (e.g. web apps, PL abstractions...)
+
+This category of applications is intended to interface with users. Due to the nature of distributed applications (i.e. versus what can be solved by simply using single-computer applications), these apps tend to be (a) services for thousands to millions of users at a time (e.g. web applications), (b) large parallel computations (like MPI-style jobs), (c) data intensive (e.g. enabling data analytics at large scale), or some combination of the above.
+
+1. Spark
+1. A graph processing framework? 
+1. A streaming database framework?
+1. Web applications (that serve users)
+1. High performance computing, MPI style jobs...
+
+## Research Roadmap
+
+In addition to being a popular, (increasingly) stable, and (soon) production-ready system, Mesos is also a research project that is pushing the boundaries of distributed systems research! Check out <a href="http://mesosproject.org/about.html">the "About Mesos" page</a> for links to Mesos research papers, experiments, and other information about Mesos research.

Added: incubator/mesos/trunk/docs/Mesos-c++-style-guide.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Mesos-c%2B%2B-style-guide.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Mesos-c++-style-guide.md (added)
+++ incubator/mesos/trunk/docs/Mesos-c++-style-guide.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,13 @@
+We follow the [Google C++ Style Guide](http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml) with the following differences:
+
+### Variable Names
+We use lowerCamelCase[1] for variable names (Google uses snake_case, and their class member variables have trailing underscores).
+
+### Constant Names
+We use lowerCamelCase[1] for constant names (Google uses a k followed by mixed case, e.g. kDaysInAWeek).
+
+### Function Names
+We use lowerCamelCase[1] for function names (Google uses mixed case for regular functions; and their accessors and mutators match the name of the variable).
+
+## References
+[1] - http://en.wikipedia.org/wiki/CamelCase#Variations_and_synonyms
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Mesos-configure-command-flag-options.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Mesos-configure-command-flag-options.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Mesos-configure-command-flag-options.md (added)
+++ incubator/mesos/trunk/docs/Mesos-configure-command-flag-options.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,9 @@
+NOTE: The documentation below is only for convenience of reference. <i>As with any documentation, the following may become stale</i>. If you discover it is wrong, please: (1) run the command `./configure --help` and treat the printed output as the true source of the configure flag options, and (2) either edit this wiki page or send mail to mesos-dev@incubator.apache.org to point out the error.
+
+The configure script itself accepts the following arguments to enable various options:
+
+* `--with-python-headers=DIR`: Find Python header files in `DIR` (to turn on Python support). Recommended.
+* `--with-webui`: Enable the Mesos web UI (which requires Python 2.6). Recommended.
+* `--with-java-home=DIR`: Enable Java application/framework support with a given installation of Java. Required for Hadoop and Spark.
+* `--with-java-headers=DIR`: Find Java header files (necessary for newer versions of OS X Snow Leopard).
+* `--with-included-zookeeper` or `--with-zookeeper=DIR`: Enable master fault-tolerance using an existing ZooKeeper installation or the version of ZooKeeper bundled with Mesos. For details, see [[using ZooKeeper]].
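+
+For example, a configure invocation that combines several of the flags above might look like the following (the Java path is just a placeholder; point it at your own JDK installation):
+
+    ./configure --with-webui --with-included-zookeeper --with-java-home=/usr/lib/jvm/java-6-openjdk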

Added: incubator/mesos/trunk/docs/Mesos-developers-guide.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Mesos-developers-guide.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Mesos-developers-guide.md (added)
+++ incubator/mesos/trunk/docs/Mesos-developers-guide.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,70 @@
+**This page is up to date as of 2011, Nov 23**
+
+## How to contribute code
+1. Check out the code from the Apache repository either via Git or SVN and get it to build on your machine (instructions are on the wiki [[Home]] page; we don't currently support building on Windows)
+    2. If you encounter a bug while building, report it (see below for instructions)
+    2. Right now, all active development happens against "trunk" (we aren't yet, as of 2011/11/23, using SVN branches or tags).
+
+1. Join the mesos-dev@incubator.apache.org mailing list by sending an email to mesos-dev-subscribe@incubator.apache.org
+
+1. Find a JIRA that is currently unassigned that you want to work on at http://issues.apache.org/jira/browse/MESOS, or create your own (you'll need a JIRA account for this, see below)!
+    2. This could be a JIRA representing a bug (possibly a bug that you encountered and reported, e.g. when trying to build) or a new feature
+
+1. Assign the JIRA to yourself. To do this, you will need:
+    2. An Apache JIRA user account (sign up for one [here](https://issues.apache.org/jira/secure/Signup!default.jspa))
+    2. You need to be added to the list of Mesos "contributors" by a Mesos committer (send email to mesos-dev@incubator.apache.org) in order to be assigned (or to assign yourself) to a JIRA issue
+
+1. Formulate a plan for resolving the issue, propose your plan via comments in the JIRA
+
+1. Create one or more test cases to exercise the bug or the feature (the Mesos team uses [test-driven development](http://en.wikipedia.org/wiki/Test-driven_development)). Before you start coding, make sure these test cases all fail.
+
+1. Make your changes to the code (using whatever IDE/editor you choose) to actually fix the bug or implement the feature.
+    2. Before beginning, please read the [[Mesos C++ Style Guide]]
+    2. Most of your changes will probably be to files inside of &lt;BASE_MESOS_DIR&gt;/src
+    2. To build, we recommend that you don't build inside of the src directory. We recommend you do the following:
+        3. From inside of the root Mesos directory: `mkdir build && cd build`
+        3. `../configure.<TEMPLATE_SUFFIX_FOR_YOUR_MACHINE_AND_ENV>` (e.g. `../configure.macosx`)
+        3. `make`
+        3. Now all of the files generated by the build process will be contained in the build directory you created, instead of being spread throughout the src directory. This is cleaner, and it makes it easy to get rid of the files generated by `configure` and `make`: you can reset your build process without risking changes you made in the src directory by simply deleting the build directory and creating a new one.
+
+1. Make sure all of your test cases now pass.
+
+1. Think your code is ready for review?
+    2. Make sure to pull in any changes that have been committed to trunk. If you are using Git, do this via something like:
+        3. `git remote update`
+        3. `git rebase origin/trunk`
+        3. Check the output of `git diff origin/trunk` and make sure it lists only your changes. If other changes you did not make are listed, try a merge to bring your branch up to date with origin/trunk.
+    2. Make sure it is building and all test cases are passing (run bin/tests/all-tests). To be extra sure before moving on to requesting a review, we recommend running `make clean && make` and then running the test cases.
+
+1. Create a diff of your code changes, including your test case.
+    2. If you are using Git, and you haven't committed any of your changes to your local clone of the repository, use the command: `git diff > MESOS-XX.patch` where XX is your issue number
+    2. If you are using SVN, use the command: `svn diff > MESOS-XX.patch`
+
+1. Create a <i>review request</i> at Review Board (RB), http://reviews.apache.org, and attach your diff. More detailed instructions:
+    2. Log in or create an account at reviews.apache.org (Apache's infrastructure doesn't have any sort of single-sign-in process, so you need to have an account specific to this instance of RB)
+    2. Create a new review request at https://reviews.apache.org/r/new/
+        3. Under the field labeled "Repository" choose "Mesos"
+        3. Leave the field labeled "Base Directory" blank
+        3. Upload your diff via the field labeled "Diff"
+        3. Be sure to add your JIRA issue id (e.g. MESOS-01) to the field labeled "Bugs" (this will automatically link)
+        3. Under "Description" in addition to details about your changes, include a description of any wiki documentation pages need to be added, or are affected by your changes (e.g. did you change or add any configuration options/flags? Did you add a new binary?)
+
+1. Wait for a code review from another Mesos developer via Review Board, address their feedback and upload updated patches until you receive a "Ship It" from a Mesos committer.
+    2. Review Board comments should be used for code-specific discussions, and JIRA comments for bigger-picture design discussions.
+    2. Always respond directly to each RB comment that you address (each comment can be replied to individually) with either "Done." or a comment explaining how you addressed it.
+
+1. After consensus is reached on your JIRA/patch, your review request will receive a "Ship It!" from a committer, and then a committer will commit your patch to the SVN repository. Congratulations and thanks for participating in our community!
+
+1. Please ensure that the necessary wiki documentation gets created or updated (i.e. make the changes yourself!)
+
+## Guidelines for using JIRA
+* We track all issues via Apache's hosted JIRA issue tracker: https://issues.apache.org/jira/browse/MESOS
+* A JIRA should be created for every task, feature, bug-fix, etc.
+* Always assign the JIRA to yourself before you start working on it. This helps to avoid duplication of work
+
+## Using Review Board
+* A code review request should be created for every JIRA that involves a change to the codebase
+
+## Style Guides
+* [[Mesos C++ Style Guide]]
+* [[Mesos Python Style Guide]]
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Mesos-ready-to-go-AMI.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Mesos-ready-to-go-AMI.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Mesos-ready-to-go-AMI.md (added)
+++ incubator/mesos/trunk/docs/Mesos-ready-to-go-AMI.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,65 @@
+<font size="3em"><table><tr><td>The most recent functional version of the "ready to go" AMI:</td><td><font size="2em"><b>ami-8a38c8e3</b></font></td></tr></table></font>
+
+There are two main ways to get a Mesos cluster running on EC2 quickly and easily. One way is via the Mesos [[EC2-Scripts]]. The core of the EC2 scripts is the Python program at <mesos-download-root-dir>/src/ec2/mesos_ec2.py, which starts up a number of Amazon EC2 instances (these images already contain a copy of Mesos at /root/mesos), then SSHes into those machines, sets up the configuration files on the slaves to talk to the master, and also sets up HDFS, NFS, and more on those nodes!
+
+The other main way to launch a Mesos cluster on EC2 is using the Mesos "Ready-to-go" AMI. We have set up this AMI to make taking Mesos for a spin as easy as launching some instances in EC2. That is, we have pre-packaged an AMI with /etc/init.d/mesos-master and /etc/init.d/mesos-slave scripts that make running a Mesos master or slave on an instance of this AMI super easy!
+
+<i><b>WARNING:</b> While this feature is in Alpha, this AMI has the public ssh keys of some Mesos developers in the .ssh/authorized_keys file, which is a security vulnerability if you use these AMIs and don't want those folks to be able to ssh into your instances! You can always remove these entries after you boot the instance, and even re-bundle the AMI if you plan on reusing this functionality.</i>
+
+## Prerequisite: Have a functional EC2 account
+
+Here are some high level instructions for getting started with Amazon EC2:
+
+1. Set up your Amazon EC2 user account
+1. Download the [[Amazon EC2 API tools|http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9]] or install [[Elasticfox|http://aws.amazon.com/developertools/609?_encoding=UTF8&jiveRedirect=1]]
+1. Set up your EC2 credentials and try out the tools by running: `ec2-describe-instances`
+1. Learn about security groups (which are basically how you set up a firewall for your nodes), because you'll need to open some ports in order for your Mesos slaves and master to talk to each other (and also to view the web UI of the master); an example is sketched below.
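+
+For example, with the EC2 API tools you might open the master port and web UI port in your security group along these lines (the group name and source range are examples; restrict access to suit your environment):
+
+    # Allow the master port (5050) and the web UI port (8080)
+    ec2-authorize default -P tcp -p 5050 -s 0.0.0.0/0
+    ec2-authorize default -P tcp -p 8080 -s 0.0.0.0/0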
+
+## How to use the "ready to go" AMI to run a Mesos Master node
+
+1. Run `ec2-run-instances <AMI-IDNUM>` -- see above for most recent AMI-IDNUM to use
+1. SSH into that machine as root and run: `service mesos-master start`
+1. Your master should now be running. Check the mesos-master log file at /mnt/mesos-master.INFO and take note of the master's libprocess PID (it should look something like <Integer>@<ip-address|hostname>:<port-num>, e.g. `1@ip-10-126-43-201.ec2.internal:5050`) so that you can pass it to the slaves you're going to start next. You should also now be able to view the master's web UI, which by default is viewable at `http://<instance public DNS name>:8080`.
+
+<b>Note:</b> We currently use the same AMI for running a master or running a slave, or both! The /etc/init.d/mesos-slave script will run automatically when the virtual machine boots up, and it will read /usr/local/mesos/conf/mesos.conf. If this file does not contain a line which looks like "url=<lib process PID of mesos master>" then the slave will not run successfully. Thus, the way to use the AMI as a master is simply to <i>not pass any user data to the AMI at bootup</i>, then SSH into the master and start the mesos-master daemon with a single command as described above.
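+
+For example, a slave instance's /usr/local/mesos/conf/mesos.conf might contain a single line like the following (the PID shown is the placeholder value from the master log example above; use the PID from your own master's log):
+
+    url=1@ip-10-126-43-201.ec2.internal:5050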
+
+## How to use the "ready to go" AMI to run a Mesos Slave node
+
+1. Run the slave AMI and pass in [[user data|http://docs.amazonwebservices.com/AWSEC2/2007-03-01/DeveloperGuide/AESDG-chapter-instancedata.html]] containing the url of a running Mesos master: `ec2-run-instances <AMI-IDNUM> -d url=1@<master-ip or hostname>:<master port> #see above for most recent AMI-IDNUM to use`
+
+## History of Mesos AMIs
+The table below contains more details on the version history of this "ready to go" AMI, which is updated regularly, so check back often!
+
+<table>
+  <tr>
+    <th>Date/Time</th><th>AMI ID</th><th>S3 bucket and/or URL</th><th>Description and Notes</th><th>Bugs/Issues</th>
+  </tr>
+  <tr>
+    <td><b>2/20/11 15:00, Sun</b></td><td><b>ami-44ce3d2d</b></td><td>http://andyk-mesos-images.s3.amazonaws.com/mesos-slave-master-v3.1</td><td>Updated /etc/init.d/mesos-slave to export MESOS_PUBLIC_DNS=<magic that wgets the public dns name from ec2> env var before launching the slave daemon, so now the externally accessible URLs are reported to the master (and shown on the master's webui). Also updated the spark installation per Justin's instructions.</td><td></td>
+  </tr>
+  <tr>
+    <td><b>2/4/11 18:12, Fri</b></td><td><b>ami-967b8bff</b></td><td>http://andyk-mesos-images.s3.amazonaws.com/mesos-slave-master-v6</td><td>Fixed <i>~/.deploylib_tags</i> bug in last 2 AMIs. <b>DON'T USE. Slave webui crashes.</b></td><td>We are seeing the following error when trying to connect to the webui for slaves: Error 500: Internal Server Error. Sorry, the requested URL http://ec2-174-129-58-11.compute-1.amazonaws.com:8080/framework/201102090027-0-0000 caused an error: Unhandled exception</td>
+  </tr>
+  <tr>
+    <td><b>2/4/11 14:43, Fri</b></td><td><b>ami-767a8a1f</b></td><td>http://andyk-mesos-images.s3.amazonaws.com/mesos-slave-master-v5</td><td>Fixed ~/.tags bug in last AMI</td><td>The file is actually called .deploylib_tags, so this didn't fix the bug after all.</td>
+  </tr>
+  <tr>
+    <td><b>2/4/11, Fri.</b></td><td><b>ami-b87383d1</b></td><td>http://mesos-slave-master-v4.s3.amazonaws.com/</td><td>Andy rolled a new AMI with mesos Event History functionality installed and enabled by default. Check out the new line in the config file at /usr/local/mesos/conf/mesos.conf which says "event_history_sqlite=1". Also check out the two new files (one txt and one sqlite3) storing task and framework history events: /mnt/event_history_db.sqlite3 and /mnt/event_history_log.txt</td><td>The ~/.tags directory (or file?) was left on the image; it needs to be removed by EC2Instance.bundleNewAMI() before ec2-bundle-volume is called.</td>
+  </tr>
+  <tr>
+    <td><b>1/30/11, Sat.</b></td><td><b>ami-8a38c8e3</b></td><td>http://andyk-mesos-images.s3.amazonaws.com/mesos-slave-master-v3</td><td>Andy and Michael created a new AMI using the shiny new deploylib functionality!</td><td></td>
+  </tr>
+  <tr>
+    <td><b>1/29/11, Sat.</b></td><td><b>ami-6a37c703</b></td><td>http://andyk-mesos-images.s3.amazonaws.com/mesos-slave-master-v2</td><td>This image has nginx added (which was set up by Justin Ma) and the most recent version of Mesos (using the radlab-demo branch)</td><td><i>DON'T USE THIS</i>: andyk forgot to run `make install`</td>
+  </tr>
+  <tr>
+    <td><b>1/6/11, Thu.</b></td><td><b>ami-5a26d733</b></td><td>andyk-mesos-images/mesos-slave-master-v1</td><td>Michael and Beth updated Mesos on that image. I added the /etc/init.d/mesos-master script as well as the /etc/default/mesos file that <b>ENABLE</b>s mesos. It doesn't run mesos-master at OS startup, but you should be able to run a master.</td><td></td>
+  </tr>
+  <tr>
+    <td><b>Dec 2010.</b></td><td><b>ami-58798d31</b></td><td>andyk-mesos-images/mesos-slave-v6</td><td>Bundled, uploaded, and registered image in s3 bucket (see image.manifest.xml inside of that for the meta data about AMI parts)</td><td></td>
+  </tr>
+  <tr>
+    <td><b>Fall 2010</b></td><td><b>ami-60798d09</b></td><td>mesos_images5</td><td>Created new AMI in s3 bucket (see mesos_images5/image.manifest.xml)</td><td></td>
+  </tr>
+</table>
\ No newline at end of file