Posted to commits@mesos.apache.org by be...@apache.org on 2013/03/03 20:37:41 UTC

svn commit: r1452107 [2/2] - /incubator/mesos/trunk/docs/

Added: incubator/mesos/trunk/docs/Old-mesos-build-instructions.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Old-mesos-build-instructions.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Old-mesos-build-instructions.md (added)
+++ incubator/mesos/trunk/docs/Old-mesos-build-instructions.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,16 @@
+We migrated the Mesos build system on Jan 19th 2012 to using Autotools (SVN commit #1233580, which is equivalent to Git-SVN commit #ebaf069611abf23266b009c3516da4b3cccccb8d). If you are using a version of Mesos from before that commit, checked out from Apache SVN (possibly via git-svn), then you probably need to follow these build instructions:
+
+<b><font size="4">1) Run one of the configure template scripts</font></b>
+
+**NOTE:** Do not simply run `./configure` without arguments. If you do, your build will fail due to a known issue (see [MESOS-103](https://issues.apache.org/jira/browse/MESOS-103) for more details).
+
+We recommend using one of the `configure.template` scripts in the root directory, which call the more general `configure` script and pass it appropriate arguments. For example, on OS X, run `./configure.template.macosx`.
+
+These configure template scripts contain guesses for the Java and Python paths on the distribution indicated in their names (e.g. Mac OS X and several Linux distributions). They assume that you have already installed the required packages (i.e. `python-dev` and a JDK). You should double-check the configure template script you use (they are just shell scripts, i.e. text files) to make sure the paths it uses for Python and Java match what you have installed. For example, if you have installed Sun's Java 1.6, make sure your configure template script is not setting JAVA_HOME to an OpenJDK path.
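+
+A quick way to check is to search the template for the paths it hardcodes (a sketch; the exact variable names may differ between templates):
+
+```
+# inspect the template before running it; JAVA_HOME and the Python paths are the usual suspects
+grep -nE 'JAVA_HOME|PYTHON' configure.template.macosx
+```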
+
+Advanced users may wish to run `./configure` directly with their own combination of flag options (see [[Mesos Configure Command Flag Options]]).
+
+<b><font size="4">2) Run `make`</font></b>
+
+#### NOTES:
+* If you get errors about `pushd` not working on Ubuntu, it is because /bin/sh is a link to /bin/dash rather than /bin/bash. To fix it, run: `sudo ln -fs /bin/bash /bin/sh` (this bug was fixed in [MESOS-50](https://issues.apache.org/jira/browse/MESOS-50), so if you are seeing it, consider upgrading to a newer version of Mesos)
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Powered-by-Mesos.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Powered-by-Mesos.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Powered-by-Mesos.md (added)
+++ incubator/mesos/trunk/docs/Powered-by-Mesos.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,12 @@
+Organizations using Mesos:
+
+* [Twitter](http://www.twitter.com)
+* [Conviva](http://www.conviva.com)
+* [UCSF](http://www.ucsf.edu)
+* [UC Berkeley](http://www.berkeley.edu)
+
+Software projects built on Mesos:
+
+* [Spark](http://www.spark-project.org) cluster computing framework
+
+If you're using Mesos, please add yourself to the list above, or email mesos-dev@incubator.apache.org and we'll add you!
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Running-Hadoop-on-Mesos.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Running-Hadoop-on-Mesos.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Running-Hadoop-on-Mesos.md (added)
+++ incubator/mesos/trunk/docs/Running-Hadoop-on-Mesos.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,33 @@
+We have ported version 0.20.205.0 of Hadoop to run on Mesos. Most of the Mesos port is implemented by a pluggable Hadoop scheduler, which communicates with Mesos to receive nodes to launch tasks on. However, a few small additions to Hadoop's internal APIs are also required.
+
+You can build the ported version of Hadoop using `make hadoop`; it gets placed in the `hadoop/hadoop-0.20.205.0` directory. However, if you want to patch your own version of Hadoop to add Mesos support, you can also use the `.patch` files located in `<Mesos directory>/hadoop`. These patches are likely to work on other Hadoop versions derived from 0.20. For example, for Cloudera's Distribution, GitHub user patelh has already created a [Mesos-compatible version of CDH3u3](https://github.com/patelh/cdh3u3-with-mesos).
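+
+Applying one of the bundled patches to your own tree might look like the following (a sketch; the patch file name and `-p` level are assumptions, so check the files in `<Mesos directory>/hadoop` first):
+
+```
+cd /path/to/your/hadoop-0.20.x     # your own Hadoop source tree (illustrative path)
+patch -p0 --dry-run < /path/to/mesos/hadoop/hadoop-mesos.patch
+patch -p0 < /path/to/mesos/hadoop/hadoop-mesos.patch   # apply for real once the dry run is clean
+```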
+
+To run Hadoop on Mesos, follow these steps:
+<ol>
+<li> Run `make hadoop` to build Hadoop 0.20.205.0 with Mesos support, or `TUTORIAL.sh` to patch and build your own Hadoop version.</li>
+<li> Set up <a href="http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/">Hadoop's configuration</a> as you would for a new install of Hadoop, following the <a href="http://hadoop.apache.org/common/docs/r0.20.2/index.html">instructions on the Hadoop website</a> (at the very least, you need to set <code>JAVA_HOME</code> in Hadoop's <code>conf/hadoop-env.sh</code> and set <code>mapred.job.tracker</code> in <code>conf/mapred-site.xml</code>).</li>
+<li> Add the following parameters to Hadoop's <code>conf/mapred-site.xml</code>:
+<pre>
+&lt;property&gt;
+  &lt;name&gt;mapred.jobtracker.taskScheduler&lt;/name&gt;
+  &lt;value&gt;org.apache.hadoop.mapred.MesosScheduler&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;mapred.mesos.master&lt;/name&gt;
+  &lt;value&gt;[URL of Mesos master]&lt;/value&gt;
+&lt;/property&gt;
+</pre>
+</li>
+<li> Launch a JobTracker with <code>bin/hadoop jobtracker</code> (<i>do not</i> use <code>bin/start-mapred.sh</code>). The JobTracker will then launch TaskTrackers on Mesos when jobs are submitted.</li>
+<li> Submit jobs to your JobTracker as usual.</li>
+</ol>
+
+Note that when you run on a cluster, Hadoop (and Mesos) should be located at the same path on all nodes.
+
+If you wish to run multiple JobTrackers, the easiest way is to give each one a different port by using a different Hadoop `conf` directory for each one and passing the `--config` flag to `bin/hadoop` to specify which config directory to use. You can copy Hadoop's existing `conf` directory to a new location and modify it to achieve this.
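+
+For example (a sketch; the directory name and port are illustrative):
+
+```
+cp -R conf conf-jt2                  # a second config directory for the second JobTracker
+# edit conf-jt2/mapred-site.xml so mapred.job.tracker uses a different port
+bin/hadoop --config conf-jt2 jobtracker
+```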
+
+## Hadoop Versions with Mesos Support Available
+
+* 0.20.205.0: Included in Mesos (as described above).
+* CDH3u3: [https://github.com/patelh/cdh3u3-with-mesos](https://github.com/patelh/cdh3u3-with-mesos)
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Running-Mesos-On-Mac-OS-X-Snow-Leopard-(Single-Node-Cluster).md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Running-Mesos-On-Mac-OS-X-Snow-Leopard-%28Single-Node-Cluster%29.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Running-Mesos-On-Mac-OS-X-Snow-Leopard-(Single-Node-Cluster).md (added)
+++ incubator/mesos/trunk/docs/Running-Mesos-On-Mac-OS-X-Snow-Leopard-(Single-Node-Cluster).md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,302 @@
+## Running Mesos On Mac OS X (Single Node Cluster)  
+This is a step-by-step guide to setting up Mesos on a single node and running Hadoop on top of it. In this guide, we assume Mac OS X 10.6 (Snow Leopard).
+
+## Prerequisites:
+* Java
+    For Mac OS X 10.6 (Snow Leopard):  
+    - Start the Terminal app.  
+    - Create/edit your ~/.bash_profile file:  
+    `` ~$  vi ~/.bash_profile ``  
+    and add the following:  
+    ``export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home``
+    - `` ~$  echo $JAVA_HOME ``  
+    You should see this:  
+    ``/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home``
+
+* git  
+    - Download and install the latest version of [git](http://git-scm.com/) for Mac OS X.
+    - As of June 2011, that is [1.7.5.4 - OS X - Leopard - x86_64](http://code.google.com/p/git-osx-installer/downloads/detail?name=git-1.7.5.4-x86_64-leopard.dmg&can=3&q=) 
+
+* Install the latest [Xcode](http://developer.apple.com) (for g++)
+
+## Mesos setup:
+* Downloading Mesos:  
+    `` $  git clone git://git.apache.org/mesos.git ``  
+
+* Building Mesos:  
+    - run `` $ cd mesos``  
+    - run `` $ ./configure.template.macosx``  
+```
+checking build system type... i386-apple-darwin10.7.0
+checking host system type... i386-apple-darwin10.7.0
+checking target system type... i386-apple-darwin10.7.0
+===========================================================
+Setting up build environment for i386 darwin10.7.0
+===========================================================
+running python2.6 to find compiler flags for creating the Mesos Python library...
+running python2.6 to find compiler flags for embedding it...
+checking for g++... g++
+
+	[ ... trimmed ... ]
+```
+    - run `` $  make ``
+```
+make -C third_party/libprocess
+make -C third_party/glog-0.3.1
+/bin/sh ./libtool --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H -I. -I./src  -I./src    -Wall -Wwrite-strings -Woverloaded-virtual -Wno-sign-compare  -DNO_FRAME_POINTER -DNDEBUG -O2 -fno-strict-aliasing -fPIC  -D_XOPEN_SOURCE -MT libglog_la-logging.lo -MD -MP -MF .deps/libglog_la-logging.Tpo -c -o libglog_la-logging.lo `test -f 'src/logging.cc' || echo './'`src/logging.cc
+
+[ ... trimmed ... ]
+```
+
+## Testing Mesos
+* run ` ~/mesos$ bin/tests/all-tests `
+```
+~/mesos$ bin/tests/all-tests 
+[==========] Running 61 tests from 6 test cases.
+[----------] Global test environment set-up.
+[----------] 18 tests from MasterTest
+[ RUN      ] MasterTest.ResourceOfferWithMultipleSlaves
+[       OK ] MasterTest.ResourceOfferWithMultipleSlaves (33 ms)
+[ RUN      ] MasterTest.ResourcesReofferedAfterReject
+[       OK ] MasterTest.ResourcesReofferedAfterReject (3 ms)
+[ RUN      ] MasterTest.ResourcesReofferedAfterBadResponse
+[       OK ] MasterTest.ResourcesReofferedAfterBadResponse (2 ms)
+[ RUN      ] MasterTest.SlaveLost
+[       OK ] MasterTest.SlaveLost (2 ms)
+[ ... trimmed ... ]
+```
+
+**Congratulations! You have Mesos running on Mac OS X!**
+
+## Setup a small Mesos test cluster on your laptop
+1. Start a master: ` $ bin/mesos-master `
+```
+ ~/mesos/bin$ ./mesos-master
+I0604 15:47:56.499007 1885306016 logging.cpp:40] Logging to /Users/billz/mesos/logs
+I0604 15:47:56.522259 1885306016 main.cpp:75] Build: 2011-06-04 14:44:57 by billz
+I0604 15:47:56.522300 1885306016 main.cpp:76] Starting Mesos master
+I0604 15:47:56.522532 1885306016 webui.cpp:64] Starting master web UI on port 8080
+I0604 15:47:56.522539 7163904 master.cpp:389] Master started at mesos://master@10.1.1.1:5050
+I0604 15:47:56.522676 7163904 master.cpp:404] Master ID: 201106041547-0
+I0604 15:47:56.522743 19939328 webui.cpp:32] Web UI thread started
+
+[ ... trimmed ... ]
+```
+2. Take note of the master URL printed in the output `mesos://master@10.1.1.1:5050`
+3. Start a slave: ` $ bin/mesos-slave --master=mesos://master@10.1.1.1:5050`
+4. View the master's web UI at `http://10.1.1.1:8080` or [localhost:8080](http://localhost:8080) (assuming this computer has IP address 10.1.1.1).
+5. Run an example framework; we'll use the C++ test framework: `$ bin/examples/cpp-test-framework mesos://master@10.1.1.1:5050`
+```
+Registered!
+.Starting task 0 on mac.eecs.berkeley.edu
+Task 0 is in state 1
+Task 0 is in state 2
+.Starting task 1 on mac.eecs.berkeley.edu
+Task 1 is in state 1
+Task 1 is in state 2
+.Starting task 2 on mac.eecs.berkeley.edu
+Task 2 is in state 1
+Task 2 is in state 2
+.Starting task 3 on mac.eecs.berkeley.edu
+Task 3 is in state 1
+Task 3 is in state 2
+.Starting task 4 on mac.eecs.berkeley.edu
+Task 4 is in state 1
+Task 4 is in state 2
+```
+
+## OPTIONAL: Running Hadoop on Mesos [old link](https://github.com/mesos/mesos/wiki/Running-Hadoop-on-Mesos)  
+
+We have ported version 0.20.2 of Hadoop to run on Mesos. Most of the Mesos port is implemented by a pluggable Hadoop scheduler, which communicates with Mesos to receive nodes to launch tasks on. However, a few small additions to Hadoop's internal APIs are also required.
+
+The ported version of Hadoop is included in the Mesos project under `frameworks/hadoop-0.20.2`. However, if you want to patch your own version of Hadoop to add Mesos support, you can also use the patch located at `frameworks/hadoop-0.20.2/hadoop-mesos.patch`. This patch should apply on any 0.20.* version of Hadoop, and is also likely to work on Hadoop distributions derived from 0.20, such as Cloudera's or Yahoo!'s.
+
+Most of the Hadoop setup is derived from [Michael G. Noll's guide](http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/).
+
+To run Hadoop on Mesos, follow these steps:
+
+1. Setting up the environment:  
+    - Create/edit your ~/.bashrc file:  
+    `` ~$  vi ~/.bashrc ``  
+    and add the following:  
+```
+    # Set Hadoop-related environment variables. Here the username is billz
+    export HADOOP_HOME=/Users/billz/mesos/frameworks/hadoop-0.20.2/  
+
+    # Add Hadoop bin/ directory to PATH  
+    export PATH=$PATH:$HADOOP_HOME/bin  
+
+    # Set where you installed Mesos. Here it is /Users/billz/mesos (billz is the username).  
+    export MESOS_HOME=/Users/billz/mesos/
+```
+    - Go to the conf directory of the Hadoop that comes with Mesos:  
+    `cd ~/mesos/frameworks/hadoop-0.20.2/conf`  
+    - Edit the **hadoop-env.sh** file.  
+    Add the following:  
+```
+# The java implementation to use.  Required.
+export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Home
+
+# Mesos uses this.
+export MESOS_HOME=/Users/username/mesos/
+
+# Extra Java runtime options.  Empty by default. This disables IPv6.
+export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
+```
+
+2. Hadoop configuration:  
+    - In the **core-site.xml** file, add the following:  
+```
+<!-- In: conf/core-site.xml -->
+<property>
+  <name>hadoop.tmp.dir</name>
+  <value>/app/hadoop/tmp</value>
+  <description>A base for other temporary directories.</description>
+</property>
+
+<property>
+  <name>fs.default.name</name>
+  <value>hdfs://localhost:54310</value>
+  <description>The name of the default file system.  A URI whose
+  scheme and authority determine the FileSystem implementation.  The
+  uri's scheme determines the config property (fs.SCHEME.impl) naming
+  the FileSystem implementation class.  The uri's authority is used to
+  determine the host, port, etc. for a filesystem.</description>
+</property>
+```
+    - In the **hdfs-site.xml** file, add the following:  
+```
+<!-- In: conf/hdfs-site.xml -->
+<property>
+  <name>dfs.replication</name>
+  <value>1</value>
+  <description>Default block replication.
+  The actual number of replications can be specified when the file is created.
+  The default is used if replication is not specified in create time.
+  </description>
+</property>
+```
+    - In the **mapred-site.xml** file, add the following:  
+```
+<!-- In: conf/mapred-site.xml -->
+<property>
+  <name>mapred.job.tracker</name>
+  <value>localhost:9001</value>
+  <description>The host and port that the MapReduce job tracker runs
+  at.  If "local", then jobs are run in-process as a single map
+  and reduce task.
+  </description>
+</property>
+
+<property>
+  <name>mapred.jobtracker.taskScheduler</name>
+  <value>org.apache.hadoop.mapred.MesosScheduler</value>
+</property>
+<property>
+  <name>mapred.mesos.master</name>
+  <value>mesos://master@10.1.1.1:5050</value> <!-- Here we are assuming your host IP address is 10.1.1.1 -->
+</property>
+
+```
+  
+3. Build the Hadoop 0.20.2 that comes with Mesos
+    - Start a new bash shell (or reboot the host):  
+    ` ~$ reboot`  
+    - Log in as "billz" or whichever user you have been using for this guide
+    - Go to hadoop-0.20.2 directory:  
+    ` ~$ cd ~/mesos/frameworks/hadoop-0.20.2`  
+    - Build Hadoop:  
+    ` ~$ ant `  
+    - Build Hadoop's jar files:  
+    ` ~$ ant compile-core jar`  
+    ` ~$ ant examples jar`
+
+4. Set up Hadoop’s Distributed File System **HDFS**:  
+    - Create the directory and set the required ownerships and permissions: 
+```
+$ sudo mkdir -p /app/hadoop/tmp
+$ sudo chown billz:billz /app/hadoop/tmp
+# ...and if you want to tighten up security, chmod from 755 to 750...
+$ sudo chmod 750 /app/hadoop/tmp
+```
+    - Format the Hadoop filesystem:  
+    `~/mesos/frameworks/hadoop-0.20.2$  bin/hadoop namenode -format`
+
+5. Copy local example data to HDFS  
+    - Download some plain text documents from Project Gutenberg:  
+    [The Notebooks of Leonardo Da Vinci](http://www.gutenberg.org/ebooks/5000)  
+    [Ulysses by James Joyce](http://www.gutenberg.org/ebooks/4300)  
+    [The Complete Works of William Shakespeare](http://www.gutenberg.org/ebooks/100)  
+    Save these to the /tmp/gutenberg/ directory (one way to script the download is sketched after this step).  
+    - Copy the files from the local file system to Hadoop’s HDFS:  
+    `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg /user/billz/gutenberg`  
+    - Check the file(s) in Hadoop's HDFS:  
+    `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -ls /user/billz/gutenberg`       
+    - You should see something like the following:
+```
+ ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -ls /user/billz/gutenberg
+Found 6 items
+-rw-r--r--   1 billz supergroup    5582656 2011-07-14 16:38 /user/billz/gutenberg/pg100.txt
+-rw-r--r--   1 billz supergroup    3322643 2011-07-14 16:38 /user/billz/gutenberg/pg135.txt
+-rw-r--r--   1 billz supergroup    1884720 2011-07-14 16:38 /user/billz/gutenberg/pg14833.txt
+-rw-r--r--   1 billz supergroup    2130906 2011-07-14 16:38 /user/billz/gutenberg/pg18997.txt
+-rw-r--r--   1 billz supergroup    3288707 2011-07-14 16:38 /user/billz/gutenberg/pg2600.txt
+-rw-r--r--   1 billz supergroup    1423801 2011-07-14 16:38 /user/billz/gutenberg/pg5000.txt
+
+```
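+
+One way to script the download in step 5 (a sketch; the URLs follow Project Gutenberg's plain-text file pattern, which may change, so the `.txt.utf-8` suffix is an assumption):
+```
+mkdir -p /tmp/gutenberg
+# ebook numbers match the links in step 5
+for id in 5000 4300 100; do
+  curl -L -o /tmp/gutenberg/pg$id.txt http://www.gutenberg.org/ebooks/$id.txt.utf-8
+done
+```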
+
+6. Start all your frameworks!
+    - Start Mesos's Master:      
+    ` ~/mesos$ bin/mesos-master &`  
+    - Start Mesos's Slave:       
+    ` ~/mesos$ bin/mesos-slave --master=mesos://master@localhost:5050 &`  
+    - Start Hadoop's namenode:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop-daemon.sh start namenode`  
+    - Start Hadoop's datanode:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop-daemon.sh start datanode`  
+    - Start Hadoop's jobtracker:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop-daemon.sh start jobtracker`  
+
+7. Run the MapReduce job:  
+   We will now run our first Hadoop MapReduce job. We will use the [WordCount](http://wiki.apache.org/hadoop/WordCount) example job, which reads text files and counts how often words occur. The input is text files, and the output is text files, each line of which contains a word and the count of how often it occurred, separated by a tab.  
+
+    - Run the "wordcount" example MapReduce job:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jar build/hadoop-0.20.3-dev-examples.jar wordcount /user/billz/gutenberg /user/billz/output`  
+    - You will see something like the following:  
+```
+11/07/19 15:34:29 INFO input.FileInputFormat: Total input paths to process : 6
+11/07/19 15:34:29 INFO mapred.JobClient: Running job: job_201107191533_0001
+11/07/19 15:34:30 INFO mapred.JobClient:  map 0% reduce 0%
+11/07/19 15:34:43 INFO mapred.JobClient:  map 16% reduce 0%
+
+[ ... trimmed ... ]
+```
+
+8. Web UI for Hadoop and Mesos:   
+    - [http://localhost:50030](http://localhost:50030) - web UI for MapReduce job tracker(s)  
+    - [http://localhost:50060](http://localhost:50060) - web UI for task tracker(s)  
+    - [http://localhost:50070](http://localhost:50070) - web UI for HDFS name node(s)  
+    - [http://localhost:8080](http://localhost:8080) - web UI for Mesos master  
+
+
+9. Retrieve the job result from HDFS:
+   - list the HDFS directory:
+```
+~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -ls /user/billz
+Found 2 items
+drwxr-xr-x   - billz supergroup          0 2011-07-14 16:38 /user/billz/gutenberg
+drwxr-xr-x   - billz supergroup          0 2011-07-19 15:35 /user/billz/output
+```
+   - View the output file:  
+    `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -cat /user/billz/output/part-r-00000`
+
+### Congratulations! You have Hadoop running on Mesos, and Mesos running on Mac OS X!
+
+
+## Need more help?   
+* [Use our mailing lists](http://incubator.apache.org/projects/mesos.html)
+* mesos-dev@incubator.apache.org
+
+## Want to contribute?
+* Check out the [[Mesos Developers Guide]]
+* [Submit a Bug](https://issues.apache.org/jira/browse/MESOS)
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Running-Mesos-on-Ubuntu-Linux-(Single-Node-Cluster).md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Running-Mesos-on-Ubuntu-Linux-%28Single-Node-Cluster%29.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Running-Mesos-on-Ubuntu-Linux-(Single-Node-Cluster).md (added)
+++ incubator/mesos/trunk/docs/Running-Mesos-on-Ubuntu-Linux-(Single-Node-Cluster).md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,284 @@
+## Running Mesos On Ubuntu Linux (Single Node Cluster)
+This is a step-by-step guide to setting up Mesos on a single node and (optionally) running Hadoop on top of it. Here we assume Ubuntu 10.04 LTS 64-bit (Lucid Lynx). We also use username "hadoop" and password "hadoop" throughout this guide.  
+
+## Prerequisites:
+* Java
+    For Ubuntu 10.04 LTS (Lucid Lynx):  
+    - Go to Applications > Accessories > Terminal.
+    - Create/edit your ~/.bashrc file:  
+    `` ~$  vi ~/.bashrc ``  
+    and add the following:  
+    ``export JAVA_HOME=/usr/lib/jvm/java-6-sun``
+    - `` ~$  echo $JAVA_HOME ``  
+    You should see this:  
+    ``/usr/lib/jvm/java-6-sun``
+    - Add the Canonical Partner Repository to your apt repositories:    
+    `~$ sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"`    
+    Or edit the file directly: `~$  vi  /etc/apt/sources.list`
+
+    - Update the source list   
+    `sudo apt-get update`  
+    - Install Java (we'll use Sun Java 1.6 for this tutorial, but you can use OpenJDK if you want to)
+
+    `~$ sudo apt-get install build-essential sun-java6-jdk sun-java6-plugin`    
+    `~$ sudo update-java-alternatives -s java-6-sun`  
+
+* git  
+    - `~$ sudo apt-get -y install git-core gitosis`  
+    - As of June 2011, the latest Git release is v1.7.5.4
+
+* Python and ssh
+
+    - run `` ~$  sudo apt-get install python-dev ``
+    - run `` ~$  sudo apt-get install openssh-server openssh-client ``
+
+* OPTIONAL: If you want to run Hadoop, see [Hadoop setup](http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/)
+
+## Mesos setup:
+* Follow the general setup instructions at [Home](Home)
+
+**Congratulations! You have Mesos running on Ubuntu Linux!**
+
+## Simulating a Mesos cluster on one machine
+1. Start a Mesos master: ` ~/mesos$ bin/mesos-master `
+<pre>
+ ~/mesos/bin$ ./mesos-master
+I0604 15:47:56.499007 1885306016 logging.cpp:40] Logging to /Users/billz/mesos/logs
+I0604 15:47:56.522259 1885306016 main.cpp:75] Build: 2011-06-04 14:44:57 by billz
+I0604 15:47:56.522300 1885306016 main.cpp:76] Starting Mesos master
+I0604 15:47:56.522532 1885306016 webui.cpp:64] Starting master web UI on port 8080
+I0604 15:47:56.522539 7163904 master.cpp:389] Master started at mesos://master@10.1.1.1:5050
+I0604 15:47:56.522676 7163904 master.cpp:404] Master ID: 201106041547-0
+I0604 15:47:56.522743 19939328 webui.cpp:32] Web UI thread started
+... trimmed ...
+</pre>
+
+2. Take note of the master URL `mesos://master@10.1.1.1:5050`
+
+3. Start a Mesos slave: ` ~/mesos$ bin/mesos-slave --master=mesos://master@10.1.1.1:5050`
+
+4. View the master's web UI at `http://10.1.1.1:8080` (assuming this computer has IP address 10.1.1.1).
+
+5. Run the test framework: `~/mesos$ bin/examples/cpp-test-framework mesos://master@10.1.1.1:5050`
+
+<pre>
+Registered!
+.Starting task 0 on ubuntu.eecs.berkeley.edu
+Task 0 is in state 1
+Task 0 is in state 2
+.Starting task 1 on ubuntu.eecs.berkeley.edu
+Task 1 is in state 1
+Task 1 is in state 2
+.Starting task 2 on ubuntu.eecs.berkeley.edu
+Task 2 is in state 1
+Task 2 is in state 2
+.Starting task 3 on ubuntu.eecs.berkeley.edu
+Task 3 is in state 1
+Task 3 is in state 2
+.Starting task 4 on ubuntu.eecs.berkeley.edu
+Task 4 is in state 1
+Task 4 is in state 2
+</pre>
+
+## Running Hadoop on Mesos [old link](https://github.com/mesos/mesos/wiki/Running-Hadoop-on-Mesos)  
+
+We have ported version 0.20.2 of Hadoop to run on Mesos. Most of the Mesos port is implemented by a pluggable Hadoop scheduler, which communicates with Mesos to receive nodes to launch tasks on. However, a few small additions to Hadoop's internal APIs are also required.
+
+The ported version of Hadoop is included in the Mesos project under `frameworks/hadoop-0.20.2`. However, if you want to patch your own version of Hadoop to add Mesos support, you can also use the patch located at `frameworks/hadoop-0.20.2/hadoop-mesos.patch`. This patch should apply on any 0.20.* version of Hadoop, and is also likely to work on Hadoop distributions derived from 0.20, such as Cloudera's or Yahoo!'s.
+
+Most of the Hadoop setup is derived from [Michael G. Noll's guide](http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/).
+
+To run Hadoop on Mesos, follow these steps:
+
+1. Setting up the environment:  
+   * Create/edit your ~/.bashrc file:  
+   `` ~$  vi ~/.bashrc ``  
+   and add the following:  
+
+```
+    # Set Hadoop-related environment variables  
+    export HADOOP_HOME=/usr/local/hadoop  
+
+    # Add Hadoop bin/ directory to PATH  
+    export PATH=$PATH:$HADOOP_HOME/bin  
+
+    # Set where you installed Mesos. Here it is /home/hadoop/mesos (hadoop is the username).  
+    export MESOS_HOME=/home/hadoop/mesos
+```
+   * Go to the conf directory of the Hadoop that comes with Mesos:  
+   `cd ~/mesos/frameworks/hadoop-0.20.2/conf`  
+   * Edit the **hadoop-env.sh** file.  
+   Add the following:  
+
+```
+# The java implementation to use.  Required.
+export JAVA_HOME=/usr/lib/jvm/java-6-sun
+
+# Mesos uses this.
+export MESOS_HOME=/home/hadoop/mesos
+
+# Extra Java runtime options.  Empty by default. This disables IPv6.
+export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
+```
+
+2. Hadoop configuration:  
+   * In the **core-site.xml** file, add the following:  
+
+```
+<!-- In: conf/core-site.xml -->
+<property>
+  <name>hadoop.tmp.dir</name>
+  <value>/app/hadoop/tmp</value>
+  <description>A base for other temporary directories.</description>
+</property>
+
+<property>
+  <name>fs.default.name</name>
+  <value>hdfs://localhost:54310</value>
+  <description>The name of the default file system.  A URI whose
+  scheme and authority determine the FileSystem implementation.  The
+  uri's scheme determines the config property (fs.SCHEME.impl) naming
+  the FileSystem implementation class.  The uri's authority is used to
+  determine the host, port, etc. for a filesystem.</description>
+</property>
+```
+   * In the **hdfs-site.xml** file, add the following:  
+
+```
+<!-- In: conf/hdfs-site.xml -->
+<property>
+  <name>dfs.replication</name>
+  <value>1</value>
+  <description>Default block replication.
+  The actual number of replications can be specified when the file is created.
+  The default is used if replication is not specified in create time.
+  </description>
+</property>
+```
+   * In the **mapred-site.xml** file, add the following:  
+
+```
+<!-- In: conf/mapred-site.xml -->
+<property>
+  <name>mapred.job.tracker</name>
+  <value>localhost:9001</value>
+  <description>The host and port that the MapReduce job tracker runs
+  at.  If "local", then jobs are run in-process as a single map
+  and reduce task.
+  </description>
+</property>
+
+<property>
+  <name>mapred.jobtracker.taskScheduler</name>
+  <value>org.apache.hadoop.mapred.MesosScheduler</value>
+</property>
+<property>
+  <name>mapred.mesos.master</name>
+  <value>mesos://master@10.1.1.1:5050</value> <!-- Here we are assuming your host IP address is 10.1.1.1 -->
+</property>
+
+```
+  
+3. Build the Hadoop 0.20.2 that comes with Mesos
+    - Start a new bash shell (or reboot the host):  
+    ` ~$ sudo shutdown -r now`  
+    - Log in as "hadoop" or whichever user you have been using for this guide
+    - Go to hadoop-0.20.2 directory:  
+    ` ~$ cd ~/mesos/frameworks/hadoop-0.20.2`  
+    - Build Hadoop:  
+    ` ~$ ant `  
+    - Build Hadoop's jar files:   
+    ` ~$ ant compile-core jar`   
+    ` ~$ ant examples jar`  
+
+4. Set up Hadoop’s Distributed File System **HDFS**:  
+    - Create the directory and set the required ownerships and permissions: 
+
+```
+$ sudo mkdir -p /app/hadoop/tmp
+$ sudo chown hadoop:hadoop /app/hadoop/tmp
+# ...and if you want to tighten up security, chmod from 755 to 750...
+$ sudo chmod 750 /app/hadoop/tmp
+```
+    - Format the Hadoop filesystem:  
+    `~/mesos/frameworks/hadoop-0.20.2$  bin/hadoop namenode -format`
+    
+
+5. Copy local example data to HDFS  
+    - Download some plain text documents from Project Gutenberg:  
+    [The Notebooks of Leonardo Da Vinci](http://www.gutenberg.org/ebooks/5000)  
+    [Ulysses by James Joyce](http://www.gutenberg.org/ebooks/4300)  
+    [The Complete Works of William Shakespeare](http://www.gutenberg.org/ebooks/100)  
+    Save these to the /tmp/gutenberg/ directory.  
+    - Copy files from our local file system to Hadoop’s HDFS  
+    `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -copyFromLocal /tmp/gutenberg /user/hadoop/gutenberg`  
+    - Check the file(s) in Hadoop's HDFS  
+    `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -ls /user/hadoop/gutenberg`       
+    - You should see something like the following:
+
+```
+ ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -ls /user/hadoop/gutenberg
+Found 6 items
+-rw-r--r--   1 hadoop supergroup    5582656 2011-07-14 16:38 /user/hadoop/gutenberg/pg100.txt
+-rw-r--r--   1 hadoop supergroup    3322643 2011-07-14 16:38 /user/hadoop/gutenberg/pg135.txt
+-rw-r--r--   1 hadoop supergroup    1423801 2011-07-14 16:38 /user/hadoop/gutenberg/pg5000.txt
+[ ... trimmed ... ]
+```
+
+6. Start all your frameworks!
+    - Start Mesos's Master:      
+    ` ~/mesos$ bin/mesos-master &`  
+    - Start Mesos's Slave:       
+    ` ~/mesos$ bin/mesos-slave --master=mesos://master@localhost:5050 &`  
+    - Start Hadoop's namenode:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop-daemon.sh start namenode`  
+    - Start Hadoop's datanode:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop-daemon.sh start datanode`  
+    - Start Hadoop's jobtracker:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop-daemon.sh start jobtracker`  
+
+    - Note: There may be an intermittent issue with the dfs directory. Try deleting /app/hadoop/tmp and /tmp/hadoop*, then re-run `hadoop namenode -format` (a sketch follows below).
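+
+A sketch of that reset (paths follow the configuration above; this wipes HDFS state, so only do it on a scratch setup):
+
+```
+sudo rm -rf /app/hadoop/tmp/* /tmp/hadoop*
+cd ~/mesos/frameworks/hadoop-0.20.2
+bin/hadoop namenode -format      # re-format HDFS after clearing the old state
+```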
+
+7. Run the MapReduce job:  
+   We will now run our first Hadoop MapReduce job. We will use the [WordCount](http://wiki.apache.org/hadoop/WordCount) example job, which reads text files and counts how often words occur. The input is text files, and the output is text files, each line of which contains a word and the count of how often it occurred, separated by a tab.  
+
+    - Run the "wordcount" example MapReduce job:  
+    ` ~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jar build/hadoop-0.20.3-dev-examples.jar wordcount /user/hadoop/gutenberg /user/hadoop/output`  
+    - You will see something like the following:  
+
+```
+11/07/19 15:34:29 INFO input.FileInputFormat: Total input paths to process : 6
+11/07/19 15:34:29 INFO mapred.JobClient: Running job: job_201107191533_0001
+11/07/19 15:34:30 INFO mapred.JobClient:  map 0% reduce 0%
+11/07/19 15:34:43 INFO mapred.JobClient:  map 16% reduce 0%
+
+[ ... trimmed ... ]
+```
+
+8. Web UI for Hadoop and Mesos:   
+    - [http://localhost:50030](http://localhost:50030) - web UI for MapReduce job tracker(s)  
+    - [http://localhost:50060](http://localhost:50060) - web UI for task tracker(s)  
+    - [http://localhost:50070](http://localhost:50070) - web UI for HDFS name node(s)  
+    - [http://localhost:8080](http://localhost:8080) - web UI for Mesos master  
+
+9. Retrieve the job result from HDFS:
+   - list the HDFS directory:
+
+```
+~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -ls /user/hadoop
+Found 2 items
+drwxr-xr-x   - hadoop supergroup          0 2011-07-14 16:38 /user/hadoop/gutenberg
+drwxr-xr-x   - hadoop supergroup          0 2011-07-19 15:35 /user/hadoop/output
+```
+   - View the output file:  
+    `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop dfs -cat /user/hadoop/output/part-r-00000`
+
+### Congratulations! You have Hadoop running on Mesos, and Mesos running on Ubuntu Linux!
+
+
+## Need more help?   
+* [Use our mailing lists](http://incubator.apache.org/projects/mesos.html)
+* mesos-dev@incubator.apache.org
+
+## Want to contribute?
+* [Contributing to Open Source Projects HOWTO](http://www.kegel.com/academy/opensource.html)
+* [Submit a Bug](https://issues.apache.org/jira/browse/MESOS)
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Running-a-Second-Instance-of-Hadoop-(Snow-Leopard).md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Running-a-Second-Instance-of-Hadoop-%28Snow-Leopard%29.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Running-a-Second-Instance-of-Hadoop-(Snow-Leopard).md (added)
+++ incubator/mesos/trunk/docs/Running-a-Second-Instance-of-Hadoop-(Snow-Leopard).md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,26 @@
+1. First, follow the instructions [here](https://github.com/mesos/mesos/wiki/Running-Mesos-On-Mac-OS-X-Snow-Leopard-(Single-Node-Cluster\)) to get a single instance of Hadoop running.
+
+2. Next, we'll get a second instance of the same version of Hadoop that ships with Mesos running:
+    - Make another copy of Hadoop:      
+      `~/mesos$ cp -R frameworks/hadoop-0.20.2 ~/hadoop`
+    - Modify necessary ports:     
+        * In conf/mapred-site.xml.template, change the mapred.job.tracker port from 9001 to 9002.
+        * In src/mapred/mapred-default.xml, change the mapred.task.tracker.http.address port and the mapred.job.tracker.http.address port both to 0.
+    - Build Hadoop:      
+      `~/hadoop$ ant`      
+      `~/hadoop$ ant compile-core jar`      
+      `~/hadoop$ ant examples jar`   
+    - Start up Mesos:      
+      `~/mesos$ bin/mesos-master`      
+      `~/mesos$ bin/mesos-slave --master=mesos://master@localhost:5050`      
+    - Start up HDFS (we'll have one instance of HDFS that both instances of Hadoop access):      
+      `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop namenode`      
+      `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop datanode`
+    - Start the jobtrackers:      
+      `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jobtracker`      
+      `~/hadoop$ bin/hadoop jobtracker`
+    - Run some tests:      
+      `~/mesos/frameworks/hadoop-0.20.2$ bin/hadoop jar build/hadoop-0.20.3-dev-examples.jar wordcount ~/gutenberg/ ~/output`      
+      `~/hadoop$ bin/hadoop jar build/hadoop-0.20.3-dev-examples.jar wordcount ~/gutenberg/ ~/output1`
+    - At this point, you should be able to run both instances of wordcount at the same time.
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Running-a-web-application-farm-on-mesos.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Running-a-web-application-farm-on-mesos.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Running-a-web-application-farm-on-mesos.textile (added)
+++ incubator/mesos/trunk/docs/Running-a-web-application-farm-on-mesos.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,21 @@
+h1. Running a web-farm on Mesos in your private cluster
+
+Mesos comes with a framework written in Python that runs a web load balancer ("HAProxy":http://haproxy.1wt.eu/) and uses Apache for the backend servers. The framework is in MESOS_HOME/frameworks/haproxy+apache. There is a README in that directory that might be slightly helpful, but we recommend looking at the framework scheduler and executor code to familiarize yourself with the concepts behind the framework.
+
+HAProxy has an annoying limitation: to change the number of backend servers it is load balancing between, it has to kill the existing HAProxy process and start a new one, which inevitably leads to dropped connections. Some people have looked into ways to get around the dropped connections (see this Animoto blog post about adding/removing servers with HAProxy: http://labs.animoto.com/2010/03/10/uptime-with-haproxy/).
+
+We recommend checking out other load balancers as well. Basically, whichever load balancer you use, the framework scheduler should query the load balancer's statistics to figure out how many web requests it is serving, and then ramp the number of web server tasks up and down based on those statistics. We've also written a framework that uses "Linux Virtual Server (LVS)":http://www.linux-vs.org/ as the load balancer. LVS can run in three different modes, and any of them should work with Mesos.
+
+h1. Running a web farm on Mesos on EC2
+
+h2. Why use Mesos on EC2?
+
+Many might wonder what the advantages of running a Mesos web farm on EC2 are over simply using the Amazon Elastic Load Balancer, which automatically ramps the number of backend web server AMIs up and down for you. We have found that by running Mesos on top of EC2 we are able to share a single EC2 instance between our web frameworks and our other frameworks, such as Hadoop or Spark, which can, for instance, run data analytics on the click logs generated by the web frameworks.
+
+We have also found that we can let multiple users share an EC2 allocation of instances much more efficiently (and thus reduce EC2 costs) by running Mesos on that allocation.
+
+h2. Where to look to get started
+
+We have also successfully used Amazon's Elastic Load Balancer instead of HAProxy. This work is still in a non-master development branch called "andyk-elb+apache-notorque":https://github.com/mesos/mesos/tree/andyk-elb+apache-notorque. You will need to set up a Mesos EC2 cluster; the framework uses Python boto to connect to Amazon's CloudWatch and EC2, so you will need to create a credentials file for boto on the node from which you run the framework.
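+
+boto typically reads credentials from @~/.boto@, with contents like the following (the keys are placeholders):
+
+<pre>
+[Credentials]
+aws_access_key_id = YOUR_ACCESS_KEY
+aws_secret_access_key = YOUR_SECRET_KEY
+</pre>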
+
+Watch out when using ELB, however, since it displays some interesting and non-intuitive behavior when ramping up and down (though it doesn't drop connections like HAProxy does out of the box). See this blog post for details about how ELB ramps up and down: http://shlomoswidler.com/2009/07/elastic-in-elastic-load-balancing-elb.html. Read it especially carefully if you plan on using ELB.

Added: incubator/mesos/trunk/docs/Running-torque-or-mpi-on-mesos.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Running-torque-or-mpi-on-mesos.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Running-torque-or-mpi-on-mesos.md (added)
+++ incubator/mesos/trunk/docs/Running-torque-or-mpi-on-mesos.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,3 @@
+Torque is a cluster scheduler that we have ported in a very simple way (via some Python wrappers around Torque) to run on top of Mesos. This port can be found in `MESOS_HOME/frameworks/torque`. There is a README.txt in that directory with some details, but for now, while we're in alpha, looking directly at the Python source for the framework scheduler and executor will be helpful.
+
+We have also run MPICH2 directly on top of Mesos. See `MESOS_HOME/frameworks/mpi` for this port. Basically, it sets up the MPICH2 MPD ring for you when you use nmpiexec (so named because Mesos used to be called **N**exus).
\ No newline at end of file

Added: incubator/mesos/trunk/docs/Using-Linux-Containers.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Using-Linux-Containers.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Using-Linux-Containers.textile (added)
+++ incubator/mesos/trunk/docs/Using-Linux-Containers.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,19 @@
+"Linux Containers":http://lxc.sourceforge.net/ are a new feature of the Linux kernel (available since version 2.6.29) that enable resource and access isolation between process groups. One can launch a process in a container and control the CPU, memory, network, etc usage of the process and all its descendants, as well as limit the container's access to the file system or even give it a separate file system similar to @chroot@. Containers thus act as lightweight virtual machines (without the overhead of running a separate kernel for each container), similar to Solaris Zones.
+
+Mesos can use Linux Containers to isolate frameworks running on Linux nodes, ensuring that applications cannot use more CPU time, memory, etc than requested. You can use this feature through the following steps:
+* Install the Linux container userspace tools (e.g., on Ubuntu, @apt-get install lxc@) and ensure that you are running a version of the kernel with containers enabled using @lxc-checkconfig@. In particular, for memory isolation, this should report that the memory subsystem is enabled.
+* Mount the Linux container virtual filesystem, for example by creating the directory @/cgroup@ and using @mount -t cgroup cgroup /cgroup@. ("This guide":http://lxc.teegra.net/ is a good tutorial on Linux containers).
+* Make sure that you are running @mesos-slave@ as root so that it can use the container tools.
+* Pass the argument @--isolation=lxc@ to @mesos-slave@ (or set @isolation@ to @lxc@ in the [[config file|Configuration]]).
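+
+A minimal sketch of those steps on Ubuntu (assuming a container-enabled kernel; run these as root, and the master URL is illustrative):
+
+<pre>
+apt-get install lxc             # userspace container tools
+lxc-checkconfig                 # verify the kernel has container support enabled
+mkdir /cgroup
+mount -t cgroup cgroup /cgroup  # mount the container virtual filesystem
+./bin/mesos-slave --isolation=lxc --master=mesos://master@10.1.1.1:5050
+</pre>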
+
+The Linux container isolation module will perform some sanity checks upon launching to make sure that it can use containers, and will then attempt to use them for frameworks. The containers will be named @mesos.slave-X.framework-Y@, where X and Y are the slave and framework IDs.
+
+The current implementation enforces CPU and memory share limits on containers. CPUs are shared in a weighted manner between frameworks (so that if some frameworks aren't using their shares, their CPUs can do useful work for other frameworks). Memory is controlled by placing a limit on the RSS of each framework. Note that when a framework's memory share goes down (e.g. because tasks finish), the isolation module also tries to scale down its RSS by swapping out memory. If this is unsuccessful, the framework's executor is killed.
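+
+For example, with the cgroup filesystem mounted at @/cgroup@ as above, you can inspect the limits placed on a container (a sketch; the control file names are the usual cgroup v1 names, and the slave/framework IDs are illustrative):
+
+<pre>
+cat /cgroup/mesos.slave-0.framework-0/cpu.shares             # weighted CPU share
+cat /cgroup/mesos.slave-0.framework-0/memory.limit_in_bytes  # RSS limit
+</pre>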
+
+h1. Unit Tests
+
+There are unit tests for the Linux Container isolation module. They are executed by default on any Linux-based operating system. You must be running @make check@ as root (since they actually invoke container commands!) or they'll fail. It is probably most convenient to run these tests in a VM.
+
+h1. Limitations and Roadmap
+
+The current Linux container isolation support in Mesos is limited to only CPU and memory, and is rather aggressive about scaling down frameworks' memory size when they finish tasks. In the future, we plan to extend the support to cover other resources that can be isolated by containers (e.g. network and disk I/O bandwidth), and to make memory management more lenient (so that a framework can temporarily stay over its share if there is enough free memory in the system, and has more time to "scale down").

Added: incubator/mesos/trunk/docs/Using-ZooKeeper.textile
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Using-ZooKeeper.textile?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Using-ZooKeeper.textile (added)
+++ incubator/mesos/trunk/docs/Using-ZooKeeper.textile Sun Mar  3 19:37:40 2013
@@ -0,0 +1,5 @@
+Mesos can run in fault-tolerant mode, in which multiple Mesos masters run simultaneously: one of them is the active master, and the others act as standbys ready to take over if the active master fails. Mesos uses Apache ZooKeeper to elect a new active master when the current one fails.
+
+Fault-tolerant mode requires Mesos to be built with ZooKeeper. This can be done with the configure option @--with-included-zookeeper@, which will ensure that ZooKeeper (which resides in the @third_party@ directory) gets compiled. It is also possible to run Mesos with an external ZooKeeper by using the configure option @--with-zookeeper=DIR@, setting @DIR@ to the directory of the external ZooKeeper.
+
+To run Mesos in fault-tolerant mode, ZooKeeper has to be up and running. The script @third_party/zookeeper-*/bin/zkServer.sh@ can be used to launch ZooKeeper (see the ZooKeeper documentation for more information). Once ZooKeeper is running, the master daemon, slave daemon(s), and the framework schedulers have to be passed a URL to the running ZooKeeper instance. The URL is of the form @zoo://host1:port1,host2:port2/znode@, where the @host:port@ pairs are ZooKeeper servers and @znode@ is a path to a znode (ZooKeeper's equivalent of a directory) for use by Mesos. It is also possible to use the URL @zoofile://filename/znode@, in which case @filename@ should contain one @host:port@ pair per line. This URL replaces the Mesos master URL (i.e. @mesos://@) which is passed when Mesos is not running in fault-tolerant mode. Multiple Mesos masters can be executed this way. Mesos will ensure, through ZooKeeper, that only one of them is the active master at any given time.
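+
+For example (a sketch; the hostnames and the @/mesos@ znode are illustrative, and the @--master@ flag follows the form used for the slave elsewhere in these docs):
+
+<pre>
+# contents of a zoofile: one host:port pair per line
+zk1.example.com:2181
+zk2.example.com:2181
+
+# pass a zoo:// (or zoofile://) URL wherever a mesos:// master URL would go, e.g.:
+bin/mesos-slave --master=zoo://zk1.example.com:2181,zk2.example.com:2181/mesos
+</pre>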

Added: incubator/mesos/trunk/docs/Using-the-mesos-submit-tool.md
URL: http://svn.apache.org/viewvc/incubator/mesos/trunk/docs/Using-the-mesos-submit-tool.md?rev=1452107&view=auto
==============================================================================
--- incubator/mesos/trunk/docs/Using-the-mesos-submit-tool.md (added)
+++ incubator/mesos/trunk/docs/Using-the-mesos-submit-tool.md Sun Mar  3 19:37:40 2013
@@ -0,0 +1,4 @@
+Sometimes you just want to run a command or launch a binary on all (or a subset) of the nodes in your cluster. The `mesos-submit` "framework" lets you do just that!
+
+Mesos-submit is a little framework that lets you run a binary on the Mesos cluster without having to keep a scheduler running on the machine you submitted it from. You call `mesos-submit <binary>`, and the script launches a framework with a single task. That task then takes over as the scheduler for the framework (using the scheduler failover feature), so the mesos-submit process (which was the initial scheduler) can safely exit. The task then goes on to run the command. This is useful, for example, for people who want to submit their schedulers to the cluster.
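+
+A usage sketch (the binary name is illustrative; check the script itself for whether a master URL argument is also required):
+
+```
+# launch my-binary as a single-task framework; mesos-submit exits once the task takes over
+./mesos-submit ./my-binary
+```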
+