Posted to commits@helix.apache.org by ka...@apache.org on 2014/01/02 01:14:08 UTC

[08/31] git commit: Redesign documentation for 0.6.2, 0.7.0, and trunk

Redesign documentation for 0.6.2, 0.7.0, and trunk


Project: http://git-wip-us.apache.org/repos/asf/incubator-helix/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-helix/commit/4a4510d1
Tree: http://git-wip-us.apache.org/repos/asf/incubator-helix/tree/4a4510d1
Diff: http://git-wip-us.apache.org/repos/asf/incubator-helix/diff/4a4510d1

Branch: refs/heads/helix-website
Commit: 4a4510d1203246a64a5989d2c31247775bb3ebca
Parents: 92edaab
Author: Kanak Biscuitwala <ka...@hotmail.com>
Authored: Wed Jan 1 14:45:38 2014 -0800
Committer: Kanak Biscuitwala <ka...@hotmail.com>
Committed: Wed Jan 1 14:45:38 2014 -0800

----------------------------------------------------------------------
 pom.xml                                         |   2 +
 .../src/site/markdown/Quickstart.md             |   2 +
 .../src/site/markdown/Tutorial.md               |   2 +-
 .../src/site/markdown/tutorial_spectator.md     |   2 +-
 .../site/resources/images/HELIX-components.png  | Bin 82112 -> 0 bytes
 .../resources/images/bootstrap_statemodel.gif   | Bin 24919 -> 0 bytes
 .../resources/images/helix-architecture.png     | Bin 282390 -> 0 bytes
 .../src/site/resources/images/helix-logo.jpg    | Bin 13659 -> 0 bytes
 .../resources/images/helix-znode-layout.png     | Bin 53074 -> 0 bytes
 .../src/site/resources/images/statemachine.png  | Bin 41641 -> 0 bytes
 .../src/site/resources/images/system.png        | Bin 79791 -> 0 bytes
 .../0.6.2-incubating/src/site/apt/releasing.apt | 107 -------
 .../src/site/markdown/Architecture.md           | 252 ----------------
 .../src/site/markdown/Building.md               |  12 +-
 .../src/site/markdown/Concepts.md               | 275 ------------------
 .../src/site/markdown/Quickstart.md             | 245 +++++++++-------
 .../src/site/markdown/Tutorial.md               | 158 +++++-----
 .../0.6.2-incubating/src/site/markdown/index.md |  21 +-
 .../src/site/markdown/recipes/lock_manager.md   | 121 ++++----
 .../markdown/recipes/rabbitmq_consumer_group.md | 202 ++++++-------
 .../recipes/rsync_replicated_file_store.md      | 119 ++++----
 .../site/markdown/recipes/service_discovery.md  |  93 +++---
 .../site/markdown/recipes/task_dag_execution.md |  53 ++--
 .../src/site/markdown/tutorial_admin.md         | 285 ++++++++++---------
 .../src/site/markdown/tutorial_controller.md    |  83 +++++-
 .../src/site/markdown/tutorial_health.md        |   8 +-
 .../src/site/markdown/tutorial_messaging.md     |  69 +++--
 .../src/site/markdown/tutorial_participant.md   | 105 ++++---
 .../src/site/markdown/tutorial_propstore.md     |   6 +-
 .../src/site/markdown/tutorial_rebalance.md     |  22 +-
 .../src/site/markdown/tutorial_spectator.md     |  37 ++-
 .../src/site/markdown/tutorial_state.md         |  34 +--
 .../src/site/markdown/tutorial_throttling.md    |   7 +-
 .../markdown/tutorial_user_def_rebalancer.md    |   6 +-
 .../src/site/markdown/tutorial_yaml.md          |   4 +-
 .../site/resources/images/HELIX-components.png  | Bin 82112 -> 0 bytes
 .../resources/images/bootstrap_statemodel.gif   | Bin 24919 -> 0 bytes
 .../resources/images/helix-architecture.png     | Bin 282390 -> 0 bytes
 .../src/site/resources/images/helix-logo.jpg    | Bin 13659 -> 0 bytes
 .../resources/images/helix-znode-layout.png     | Bin 53074 -> 0 bytes
 .../src/site/resources/images/statemachine.png  | Bin 41641 -> 0 bytes
 .../src/site/resources/images/system.png        | Bin 79791 -> 0 bytes
 .../0.6.2-incubating/src/site/site.xml          |  57 ++--
 .../src/site/xdoc/download.xml.vm               |   2 +-
 .../0.7.0-incubating/src/site/apt/releasing.apt | 107 -------
 .../src/site/markdown/Architecture.md           | 252 ----------------
 .../src/site/markdown/Building.md               |  12 +-
 .../src/site/markdown/Concepts.md               | 275 ------------------
 .../src/site/markdown/Quickstart.md             | 248 +++++++++-------
 .../src/site/markdown/Tutorial.md               |  24 +-
 .../src/site/markdown/UseCases.md               | 113 --------
 .../0.7.0-incubating/src/site/markdown/index.md |  24 +-
 .../src/site/markdown/recipes/lock_manager.md   | 121 ++++----
 .../markdown/recipes/rabbitmq_consumer_group.md | 202 ++++++-------
 .../recipes/rsync_replicated_file_store.md      | 119 ++++----
 .../site/markdown/recipes/service_discovery.md  |  93 +++---
 .../site/markdown/recipes/task_dag_execution.md |  53 ++--
 .../markdown/recipes/user_def_rebalancer.md     |  29 +-
 .../src/site/markdown/tutorial_accessors.md     |   4 +-
 .../src/site/markdown/tutorial_admin.md         | 285 ++++++++++---------
 .../src/site/markdown/tutorial_controller.md    |  15 +-
 .../src/site/markdown/tutorial_health.md        |   8 +-
 .../src/site/markdown/tutorial_messaging.md     |  69 +++--
 .../src/site/markdown/tutorial_participant.md   |  65 +++--
 .../src/site/markdown/tutorial_propstore.md     |   6 +-
 .../src/site/markdown/tutorial_rebalance.md     |  22 +-
 .../src/site/markdown/tutorial_spectator.md     |  37 ++-
 .../src/site/markdown/tutorial_state.md         |  32 ++-
 .../src/site/markdown/tutorial_throttling.md    |   7 +-
 .../markdown/tutorial_user_def_rebalancer.md    |   4 +-
 .../src/site/markdown/tutorial_yaml.md          |   4 +-
 .../site/resources/images/HELIX-components.png  | Bin 82112 -> 0 bytes
 .../resources/images/bootstrap_statemodel.gif   | Bin 24919 -> 0 bytes
 .../resources/images/helix-architecture.png     | Bin 282390 -> 0 bytes
 .../src/site/resources/images/helix-logo.jpg    | Bin 13659 -> 0 bytes
 .../resources/images/helix-znode-layout.png     | Bin 53074 -> 0 bytes
 .../src/site/resources/images/statemachine.png  | Bin 41641 -> 0 bytes
 .../src/site/resources/images/system.png        | Bin 79791 -> 0 bytes
 .../0.7.0-incubating/src/site/site.xml          |  53 ++--
 .../src/site/xdoc/download.xml.vm               |   2 +-
 site-releases/trunk/src/site/apt/releasing.apt  | 107 -------
 .../trunk/src/site/markdown/Architecture.md     | 252 ----------------
 .../trunk/src/site/markdown/Building.md         |   2 +
 .../trunk/src/site/markdown/Concepts.md         | 275 ------------------
 .../trunk/src/site/markdown/Quickstart.md       | 242 +++++++++-------
 .../trunk/src/site/markdown/Tutorial.md         |  54 ++--
 .../trunk/src/site/markdown/UseCases.md         | 113 --------
 site-releases/trunk/src/site/markdown/index.md  |  17 +-
 .../src/site/markdown/recipes/lock_manager.md   | 120 ++++----
 .../markdown/recipes/rabbitmq_consumer_group.md | 201 ++++++-------
 .../recipes/rsync_replicated_file_store.md      | 118 ++++----
 .../site/markdown/recipes/service_discovery.md  |  92 +++---
 .../site/markdown/recipes/task_dag_execution.md |  52 ++--
 .../markdown/recipes/user_def_rebalancer.md     |  55 ++--
 .../src/site/markdown/tutorial_accessors.md     |  26 +-
 .../trunk/src/site/markdown/tutorial_admin.md   | 283 +++++++++---------
 .../src/site/markdown/tutorial_controller.md    |  15 +-
 .../trunk/src/site/markdown/tutorial_health.md  |   8 +-
 .../src/site/markdown/tutorial_messaging.md     |  69 +++--
 .../src/site/markdown/tutorial_participant.md   |  65 +++--
 .../src/site/markdown/tutorial_propstore.md     |   6 +-
 .../src/site/markdown/tutorial_rebalance.md     |  24 +-
 .../src/site/markdown/tutorial_spectator.md     |  37 ++-
 .../trunk/src/site/markdown/tutorial_state.md   |  32 ++-
 .../src/site/markdown/tutorial_throttling.md    |   7 +-
 .../markdown/tutorial_user_def_rebalancer.md    |  44 +--
 .../trunk/src/site/markdown/tutorial_yaml.md    |   4 +-
 .../site/resources/images/HELIX-components.png  | Bin 82112 -> 0 bytes
 .../resources/images/bootstrap_statemodel.gif   | Bin 24919 -> 0 bytes
 .../resources/images/helix-architecture.png     | Bin 282390 -> 0 bytes
 .../src/site/resources/images/helix-logo.jpg    | Bin 13659 -> 0 bytes
 .../resources/images/helix-znode-layout.png     | Bin 53074 -> 0 bytes
 .../src/site/resources/images/statemachine.png  | Bin 41641 -> 0 bytes
 .../trunk/src/site/resources/images/system.png  | Bin 79791 -> 0 bytes
 site-releases/trunk/src/site/site.xml           |  49 +++-
 .../trunk/src/site/xdoc/download.xml.vm         |  24 +-
 src/site/markdown/index.md                      |   2 +-
 117 files changed, 2517 insertions(+), 4554 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index bce0c4c..7535ea4 100644
--- a/pom.xml
+++ b/pom.xml
@@ -505,6 +505,8 @@ under the License.
             <ignorePathsToDelete>
               <ignorePathToDelete>javadocs</ignorePathToDelete>
               <ignorePathToDelete>javadocs**</ignorePathToDelete>
+              <ignorePathToDelete>apidocs</ignorePathToDelete>
+              <ignorePathToDelete>apidocs**</ignorePathToDelete>
             </ignorePathsToDelete>
           </configuration>
           <dependencies>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/markdown/Quickstart.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/markdown/Quickstart.md b/site-releases/0.6.1-incubating/src/site/markdown/Quickstart.md
index 73d7422..6cdd864 100644
--- a/site-releases/0.6.1-incubating/src/site/markdown/Quickstart.md
+++ b/site-releases/0.6.1-incubating/src/site/markdown/Quickstart.md
@@ -27,12 +27,14 @@ First, let\'s get Helix. Either build it, or download it.
 
 ### Build
 
+```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
 cd incubator-helix
 git checkout tags/helix-0.6.1-incubating
 mvn install package -DskipTests
 cd helix-core/target/helix-core-pkg/bin # This folder contains all the scripts used in the following sections
 chmod +x *
+```
 
 ### Download
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/markdown/Tutorial.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/markdown/Tutorial.md b/site-releases/0.6.1-incubating/src/site/markdown/Tutorial.md
index 50dcee9..bdeb58e 100644
--- a/site-releases/0.6.1-incubating/src/site/markdown/Tutorial.md
+++ b/site-releases/0.6.1-incubating/src/site/markdown/Tutorial.md
@@ -26,7 +26,7 @@ Convention: we first cover the _basic_ approach, which is the easiest to impleme
 
 ### Prerequisites
 
-1. Read [Concepts/Terminology](./Concepts.html) and [Architecture](./Architecture.html)
+1. Read [Concepts/Terminology](../../Concepts.html) and [Architecture](../../Architecture.html)
 2. Read the [Quickstart guide](./Quickstart.html) to learn how Helix models and manages a cluster
 3. Install Helix source.  See: [Quickstart](./Quickstart.html) for the steps.
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/markdown/tutorial_spectator.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/markdown/tutorial_spectator.md b/site-releases/0.6.1-incubating/src/site/markdown/tutorial_spectator.md
index ed1bd17..881bddb 100644
--- a/site-releases/0.6.1-incubating/src/site/markdown/tutorial_spectator.md
+++ b/site-releases/0.6.1-incubating/src/site/markdown/tutorial_spectator.md
@@ -45,7 +45,7 @@ Helix provides a default implementation RoutingTableProvider that caches the clu
 ```
 manager = HelixManagerFactory.getZKHelixManager(clusterName,
                                                 instanceName,
-                                                InstanceType.PARTICIPANT,
+                                                InstanceType.SPECTATOR,
                                                 zkConnectString);
 manager.connect();
 RoutingTableProvider routingTableProvider = new RoutingTableProvider();
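 // Hedged continuation (illustrative, not part of this commit): wire the
 // provider to external view changes, then query which instance currently
 // masters a partition; the three-argument getInstances is the 0.6.x API.
 manager.addExternalViewChangeListener(routingTableProvider);
 List<InstanceConfig> masters =
     routingTableProvider.getInstances("myDB", "myDB_0", "MASTER");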

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/resources/images/HELIX-components.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/resources/images/HELIX-components.png b/site-releases/0.6.1-incubating/src/site/resources/images/HELIX-components.png
deleted file mode 100644
index c0c35ae..0000000
Binary files a/site-releases/0.6.1-incubating/src/site/resources/images/HELIX-components.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/resources/images/bootstrap_statemodel.gif
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/resources/images/bootstrap_statemodel.gif b/site-releases/0.6.1-incubating/src/site/resources/images/bootstrap_statemodel.gif
deleted file mode 100644
index b8f8a42..0000000
Binary files a/site-releases/0.6.1-incubating/src/site/resources/images/bootstrap_statemodel.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/resources/images/helix-architecture.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/resources/images/helix-architecture.png b/site-releases/0.6.1-incubating/src/site/resources/images/helix-architecture.png
deleted file mode 100644
index 6f69a2d..0000000
Binary files a/site-releases/0.6.1-incubating/src/site/resources/images/helix-architecture.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/resources/images/helix-logo.jpg
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/resources/images/helix-logo.jpg b/site-releases/0.6.1-incubating/src/site/resources/images/helix-logo.jpg
deleted file mode 100644
index d6428f6..0000000
Binary files a/site-releases/0.6.1-incubating/src/site/resources/images/helix-logo.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/resources/images/helix-znode-layout.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/resources/images/helix-znode-layout.png b/site-releases/0.6.1-incubating/src/site/resources/images/helix-znode-layout.png
deleted file mode 100644
index 5bafc45..0000000
Binary files a/site-releases/0.6.1-incubating/src/site/resources/images/helix-znode-layout.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/resources/images/statemachine.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/resources/images/statemachine.png b/site-releases/0.6.1-incubating/src/site/resources/images/statemachine.png
deleted file mode 100644
index 43d27ec..0000000
Binary files a/site-releases/0.6.1-incubating/src/site/resources/images/statemachine.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.1-incubating/src/site/resources/images/system.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/resources/images/system.png b/site-releases/0.6.1-incubating/src/site/resources/images/system.png
deleted file mode 100644
index f8a05c8..0000000
Binary files a/site-releases/0.6.1-incubating/src/site/resources/images/system.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/apt/releasing.apt
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/apt/releasing.apt b/site-releases/0.6.2-incubating/src/site/apt/releasing.apt
deleted file mode 100644
index 11d0cd9..0000000
--- a/site-releases/0.6.2-incubating/src/site/apt/releasing.apt
+++ /dev/null
@@ -1,107 +0,0 @@
- -----
- Helix release process
- -----
- -----
- 2012-12-15
- -----
-
-~~ Licensed to the Apache Software Foundation (ASF) under one
-~~ or more contributor license agreements.  See the NOTICE file
-~~ distributed with this work for additional information
-~~ regarding copyright ownership.  The ASF licenses this file
-~~ to you under the Apache License, Version 2.0 (the
-~~ "License"); you may not use this file except in compliance
-~~ with the License.  You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing,
-~~ software distributed under the License is distributed on an
-~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-~~ KIND, either express or implied.  See the License for the
-~~ specific language governing permissions and limitations
-~~ under the License.
-
-~~ NOTE: For help with the syntax of this file, see:
-~~ http://maven.apache.org/guides/mini/guide-apt-format.html
-
-Helix release process
-
- [[1]] Post to the dev list a few days before you plan to do a Helix release
-
- [[2]] Your Maven settings must contain the entry needed to deploy.
-
- ~/.m2/settings.xml
-
-+-------------
-   <server>
-     <id>apache.releases.https</id>
-     <username></username>
-     <password></password>
-   </server>
-+-------------
-
- [[3]] Apache DAV passwords
-
-+-------------
- Add the following info into your ~/.netrc
- machine git-wip-us.apache.org login <apache username> <password>
-
-+-------------
- [[4]] Release Helix
-    You should have a GPG agent running in the session where you will run the Maven release commands (preferred), and confirm it works by running "gpg -ab" (type some text and press Ctrl-D).
-    If you do not have a GPG agent running, make sure that you have the "apache-release" profile set in your settings.xml as shown below.
-
-   Run the release
-
-+-------------
-mvn release:prepare release:perform -B
-+-------------
-
-  GPG configuration in maven settings xml:
-
-+-------------
-<profile>
-  <id>apache-release</id>
-  <properties>
-    <gpg.passphrase>[GPG_PASSWORD]</gpg.passphrase>
-  </properties>
-</profile>
-+-------------
-
- [[4]] go to https://repository.apache.org and close your staged repository. Note the repository url (format https://repository.apache.org/content/repositories/orgapachehelix-019/org/apache/helix/helix/0.6-incubating/)
-
-+-------------
-svn co https://dist.apache.org/repos/dist/dev/incubator/helix helix-dev-release
-cd helix-dev-release
-sh ./release-script-svn.sh version stagingRepoUrl
-then svn add <new directory created with new version as name>
-then svn ci 
-+-------------
-
- [[5]] Validating the release
-
-+-------------
-  * Download sources, extract, build and run tests - mvn clean package
-  * Verify license headers - mvn -Prat -DskipTests
-  * Download binaries and .asc files
-  * Download release manager's public key - From the KEYS file, get the release manager's public key finger print and run  gpg --keyserver pgpkeys.mit.edu --recv-key <key>
-  * Validate authenticity of key - run  gpg --fingerprint <key>
-  * Check signatures of all the binaries using gpg <binary>
-+-------------
-
- [[6]] Call for a vote in the dev list and wait for 72 hrs. for the vote results. 3 binding votes are necessary for the release to be finalized. example
-  After the vote has passed, move the files from dist dev to dist release: svn mv https://dist.apache.org/repos/dist/dev/incubator/helix/version to https://dist.apache.org/repos/dist/release/incubator/helix/
-
- [[7]] Prepare release note. Add a page in src/site/apt/releasenotes/ and change value of \<currentRelease> in parent pom.
-
-
- [[8]] Send out an announcement of the release to:
-
-  * users@helix.incubator.apache.org
-
-  * dev@helix.incubator.apache.org
-
- [[9]] Celebrate !
-
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/Architecture.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Architecture.md b/site-releases/0.6.2-incubating/src/site/markdown/Architecture.md
deleted file mode 100644
index 933e917..0000000
--- a/site-releases/0.6.2-incubating/src/site/markdown/Architecture.md
+++ /dev/null
@@ -1,252 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Architecture</title>
-</head>
-
-Architecture
-----------------------------
-Helix aims to provide the following abilities to a distributed system:
-
-* Automatic management of a cluster hosting partitioned, replicated resources.
-* Soft and hard failure detection and handling.
-* Automatic load balancing via smart placement of resources on servers(nodes) based on server capacity and resource profile (size of partition, access patterns, etc).
-* Centralized config management and self discovery. Eliminates the need to modify config on each node.
-* Fault tolerance and optimized rebalancing during cluster expansion.
-* Manages entire operational lifecycle of a node. Addition, start, stop, enable/disable without downtime.
-* Monitor cluster health and provide alerts on SLA violation.
-* Service discovery mechanism to route requests.
-
-To build such a system, we need a mechanism to co-ordinate between different nodes and other components in the system. This mechanism can be achieved with software that reacts to any change in the cluster and comes up with a set of tasks needed to bring the cluster to a stable state. The set of tasks will be assigned to one or more nodes in the cluster. Helix serves this purpose of managing the various components in the cluster.
-
-![Helix Design](images/system.png)
-
-Distributed System Components
-
-In general any distributed system cluster will have the following components and properties:
-
-* A set of nodes also referred to as instances.
-* A set of resources which can be databases, lucene indexes or tasks.
-* Each resource is also partitioned into one or more Partitions. 
-* Each partition may have one or more copies called replicas.
-* Each replica can have a state associated with it. For example Master, Slave, Leader, Standby, Online, Offline etc
-
-Roles
------
-
-![Helix Design](images/HELIX-components.png)
-
-Not all nodes in a distributed system will perform similar functionalities. For example, a few nodes might be serving requests and a few nodes might be sending requests, and some nodes might be controlling the nodes in the cluster. Thus, Helix categorizes nodes by their specific roles in the system.
-
-We have divided Helix nodes into 3 logical components based on their responsibilities:
-
-1. Participant: The nodes that actually host the distributed resources.
-2. Spectator: The nodes that simply observe the Participant state and route the request accordingly. Routers, for example, need to know the instance on which a partition is hosted and its state in order to route the request to the appropriate end point.
-3. Controller: The controller observes and controls the Participant nodes. It is responsible for coordinating all transitions in the cluster and ensuring that state constraints are satisfied and cluster stability is maintained. 
-
-These are simply logical components and can be deployed as per the system requirements. For example:
-
-1. The controller can be deployed as a separate service
-2. The controller can be deployed along with a Participant but only one Controller will be active at any given time.
-
-Both have pros and cons, which will be discussed later and one can chose the mode of deployment as per system needs.
-
-
-## Cluster state metadata store
-
-We need a distributed store to maintain the state of the cluster and a notification system to notify if there is any change in the cluster state. Helix uses Zookeeper to achieve this functionality.
-
-Zookeeper provides:
-
-* A way to represent PERSISTENT state which basically remains until its deleted.
-* A way to represent TRANSIENT/EPHEMERAL state which vanishes when the process that created the state dies.
-* Notification mechanism when there is a change in PERSISTENT and EPHEMERAL state
-
-The namespace provided by ZooKeeper is much like that of a standard file system. A name is a sequence of path elements separated by a slash (/). Every node[ZNode] in ZooKeeper\'s namespace is identified by a path.
-
-More info on Zookeeper can be found at http://zookeeper.apache.org
-
-## State machine and constraints
-
-Even though the concepts of Resources, Partitions, and Replicas are common to most distributed systems, one thing that differentiates one distributed system from another is the way each partition is assigned a state and the constraints on each state.
-
-For example:
-
-1. If a system is serving read-only data then all partition\'s replicas are equal and they can either be ONLINE or OFFLINE.
-2. If a system takes _both_ reads and writes but ensure that writes go through only one partition, the states will be MASTER, SLAVE, and OFFLINE. Writes go through the MASTER and replicate to the SLAVEs. Optionally, reads can go through SLAVES.
-
-Apart from defining state for each partition, the transition path to each state can be application specific. For example, in order to become MASTER it might be a requirement to first become a SLAVE. This ensures that if the SLAVE does not have the data as part of OFFLINE-SLAVE transition it can bootstrap data from other nodes in the system.
-
-Helix provides a way to configure an application specific state machine along with constraints on each state. Along with constraints on STATE, Helix also provides a way to specify constraints on transitions.  (More on this later.)
-
-```
-          OFFLINE  | SLAVE  |  MASTER  
-         _____________________________
-        |          |        |         |
-OFFLINE |   N/A    | SLAVE  | SLAVE   |
-        |__________|________|_________|
-        |          |        |         |
-SLAVE   |  OFFLINE |   N/A  | MASTER  |
-        |__________|________|_________|
-        |          |        |         |
-MASTER  | SLAVE    | SLAVE  |   N/A   |
-        |__________|________|_________|
-
-```
-
-![Helix Design](images/statemachine.png)
-
-## Concepts
-
-The following terminologies are used in Helix to model a state machine.
-
-* IdealState: The state in which we need the cluster to be in if all nodes are up and running. In other words, all state constraints are satisfied.
-* CurrentState: Represents the actual current state of each node in the cluster 
-* ExternalView: Represents the combined view of CurrentState of all nodes.  
-
-The goal of Helix is always to make the CurrentState of the system same as the IdealState. Some scenarios where this may not be true are:
-
-* When all nodes are down
-* When one or more nodes fail
-* New nodes are added and the partitions need to be reassigned
-
-### IdealState
-
-Helix lets the application define the IdealState on a resource basis which basically consists of:
-
-* List of partitions. Example: 64
-* Number of replicas for each partition. Example: 3
-* Node and State for each replica.
-
-Example:
-
-* Partition-1, replica-1, Master, Node-1
-* Partition-1, replica-2, Slave, Node-2
-* Partition-1, replica-3, Slave, Node-3
-* .....
-* .....
-* Partition-p, replica-3, Slave, Node-n
-
-Helix comes with various algorithms to automatically assign the partitions to nodes. The default algorithm minimizes the number of shuffles that happen when new nodes are added to the system.
-
-### CurrentState
-
-Every instance in the cluster hosts one or more partitions of a resource. Each of the partitions has a state associated with it.
-
-Example Node-1
-
-* Partition-1, Master
-* Partition-2, Slave
-* ....
-* ....
-* Partition-p, Slave
-
-### ExternalView
-
-External clients needs to know the state of each partition in the cluster and the Node hosting that partition. Helix provides one view of the system to Spectators as _ExternalView_. ExternalView is simply an aggregate of all node CurrentStates.
-
-* Partition-1, replica-1, Master, Node-1
-* Partition-1, replica-2, Slave, Node-2
-* Partition-1, replica-3, Slave, Node-3
-* .....
-* .....
-* Partition-p, replica-3, Slave, Node-n
-
-## Process Workflow
-
-Mode of operation in a cluster
-
-A node process can be one of the following:
-
-* Participant: The process registers itself in the cluster and acts on the messages received in its queue and updates the current state.  Example: a storage node in a distributed database
-* Spectator: The process is simply interested in the changes in the Externalview.
-* Controller: This process actively controls the cluster by reacting to changes in cluster state and sending messages to Participants.
-
-
-### Participant Node Process
-
-* When Node starts up, it registers itself under _LiveInstances_
-* After registering, it waits for new _Messages_ in the message queue
-* When it receives a message, it will perform the required task as indicated in the message
-* After the task is completed, depending on the task outcome it updates the CurrentState
-
-### Controller Process
-
-* Watches IdealState
-* Notified when a node goes down/comes up or node is added/removed. Watches LiveInstances and CurrentState of each node in the cluster
-* Triggers appropriate state transitions by sending message to Participants
-
-### Spectator Process
-
-* When the process starts, it asks the Helix agent to be notified of changes in ExternalView
-* Whenever it receives a notification, it reads the Externalview and performs required duties.
-
-#### Interaction between controller, participant and spectator
-
-The following picture shows how controllers, participants and spectators interact with each other.
-
-![Helix Architecture](images/helix-architecture.png)
-
-## Core algorithm
-
-* Controller gets the IdealState and the CurrentState of active storage nodes from Zookeeper
-* Compute the delta between IdealState and CurrentState for each partition across all participant nodes
-* For each partition compute tasks based on the State Machine Table. It\'s possible to configure priority on the state Transition. For example, in case of Master-Slave:
-    * Attempt mastership transfer if possible without violating constraint.
-    * Partition Addition
-    * Drop Partition 
-* Add the tasks in parallel if possible to the respective queue for each storage node (if the tasks added are mutually independent)
-* If a task is dependent on another task being completed, do not add that task
-* After any task is completed by a Participant, Controllers gets notified of the change and the State Transition algorithm is re-run until the CurrentState is same as IdealState.
-
-## Helix ZNode layout
-
-Helix organizes znodes under clusterName in multiple levels. 
-
-The top level (under the cluster name) ZNodes are all Helix-defined and in upper case:
-
-* PROPERTYSTORE: application property store
-* STATEMODELDEFES: state model definitions
-* INSTANCES: instance runtime information including current state and messages
-* CONFIGS: configurations
-* IDEALSTATES: ideal states
-* EXTERNALVIEW: external views
-* LIVEINSTANCES: live instances
-* CONTROLLER: cluster controller runtime information
-
-Under INSTANCES, there are runtime ZNodes for each instance. An instance organizes ZNodes as follows:
-
-* CURRENTSTATES
-    * sessionId
-    * resourceName
-* ERRORS
-* STATUSUPDATES
-* MESSAGES
-* HEALTHREPORT
-
-Under CONFIGS, there are different scopes of configurations:
-
-* RESOURCE: contains resource scope configurations
-* CLUSTER: contains cluster scope configurations
-* PARTICIPANT: contains participant scope configurations
-
-The following image shows an example of Helix znodes layout for a cluster named "test-cluster":
-
-![Helix znode layout](images/helix-znode-layout.png)
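
For reference, the MasterSlave transition table above maps onto Helix\'s state model API. A hedged sketch (assuming the 0.6.x `StateModelDefinition.Builder`; cluster and model names are illustrative and not part of this commit):

```java
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.StateModelDefinition;

public class MasterSlaveModel {
  public static void main(String[] args) {
    StateModelDefinition.Builder builder = new StateModelDefinition.Builder("MasterSlave");
    builder.initialState("OFFLINE");
    builder.addState("MASTER", 1);  // highest-priority state
    builder.addState("SLAVE", 2);
    builder.addState("OFFLINE", 3);
    // Legal transitions, mirroring the table: OFFLINE <-> SLAVE <-> MASTER
    builder.addTransition("OFFLINE", "SLAVE");
    builder.addTransition("SLAVE", "MASTER");
    builder.addTransition("MASTER", "SLAVE");
    builder.addTransition("SLAVE", "OFFLINE");
    builder.upperBound("MASTER", 1);         // at most one master per partition
    builder.dynamicUpperBound("SLAVE", "R"); // bounded by the replica count
    StateModelDefinition masterSlave = builder.build();

    HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
    admin.addStateModelDef("MYCLUSTER", "MasterSlave", masterSlave);
  }
}
```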

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/Building.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Building.md b/site-releases/0.6.2-incubating/src/site/markdown/Building.md
index bf9462b..fd16376 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/Building.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/Building.md
@@ -20,7 +20,9 @@ under the License.
 Build Instructions
 ------------------
 
-Requirements: Jdk 1.6+, Maven 2.0.8+
+### From Source
+
+Requirements: JDK 1.6+, Maven 2.0.8+
 
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
@@ -29,7 +31,7 @@ git checkout tags/helix-0.6.2-incubating
 mvn install package -DskipTests
 ```
 
-Maven dependency
+### Maven Dependency
 
 ```
 <dependency>
@@ -38,9 +40,3 @@ Maven dependency
   <version>0.6.2-incubating</version>
 </dependency>
 ```
-
-Download
---------
-
-[0.6.2-incubating](./download.html)
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/Concepts.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Concepts.md b/site-releases/0.6.2-incubating/src/site/markdown/Concepts.md
deleted file mode 100644
index fa5d0ba..0000000
--- a/site-releases/0.6.2-incubating/src/site/markdown/Concepts.md
+++ /dev/null
@@ -1,275 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Concepts</title>
-</head>
-
-Concepts
-----------------------------
-
-Helix is based on the idea that a given task has the following attributes associated with it:
-
-* _Location of the task_. For example it runs on Node N1
-* _State_. For example, it is running, stopped etc.
-
-In Helix terminology, a task is referred to as a _resource_.
-
-### IdealState
-
-IdealState simply allows one to map tasks to location and state. A standard way of expressing this in Helix:
-
-```
-  "TASK_NAME" : {
-    "LOCATION" : "STATE"
-  }
-
-```
-Consider a simple case where you want to launch a task \'myTask\' on node \'N1\'. The IdealState for this can be expressed as follows:
-
-```
-{
-  "id" : "MyTask",
-  "mapFields" : {
-    "myTask" : {
-      "N1" : "ONLINE",
-    }
-  }
-}
-```
-### Partition
-
-If this task get too big to fit on one box, you might want to divide it into subtasks. Each subtask is referred to as a _partition_ in Helix. Let\'s say you want to divide the task into 3 subtasks/partitions, the IdealState can be changed as shown below. 
-
-\'myTask_0\', \'myTask_1\', \'myTask_2\' are logical names representing the partitions of myTask. Each tasks runs on N1, N2 and N3 respectively.
-
-```
-{
-  "id" : "myTask",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "3",
-  }
- "mapFields" : {
-    "myTask_0" : {
-      "N1" : "ONLINE",
-    },
-    "myTask_1" : {
-      "N2" : "ONLINE",
-    },
-    "myTask_2" : {
-      "N3" : "ONLINE",
-    }
-  }
-}
-```
-
-### Replica
-
-Partitioning allows one to split the data/task into multiple subparts. But let\'s say the request rate for each partition increases. The common solution is to have multiple copies for each partition. Helix refers to the copy of a partition as a _replica_.  Adding a replica also increases the availability of the system during failures. One can see this methodology employed often in search systems. The index is divided into shards, and each shard has multiple copies.
-
-Let\'s say you want to add one additional replica for each task. The IdealState can simply be changed as shown below. 
-
-For increasing the availability of the system, it\'s better to place the replica of a given partition on different nodes.
-
-```
-{
-  "id" : "myIndex",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-  },
- "mapFields" : {
-    "myIndex_0" : {
-      "N1" : "ONLINE",
-      "N2" : "ONLINE"
-    },
-    "myIndex_1" : {
-      "N2" : "ONLINE",
-      "N3" : "ONLINE"
-    },
-    "myIndex_2" : {
-      "N3" : "ONLINE",
-      "N1" : "ONLINE"
-    }
-  }
-}
-```
-
-### State 
-
-Now let\'s take a slightly more complicated scenario where a task represents a database.  Unlike an index which is in general read-only, a database supports both reads and writes. Keeping the data consistent among the replicas is crucial in distributed data stores. One commonly applied technique is to assign one replica as the MASTER and remaining replicas as SLAVEs. All writes go to the MASTER and are then replicated to the SLAVE replicas.
-
-Helix allows one to assign different states to each replica. Let\'s say you have two MySQL instances N1 and N2, where one will serve as MASTER and another as SLAVE. The IdealState can be changed to:
-
-```
-{
-  "id" : "myDB",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "1",
-    "REPLICAS" : "2",
-  },
-  "mapFields" : {
-    "myDB" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE",
-    }
-  }
-}
-
-```
-
-
-### State Machine and Transitions
-
-IdealState allows one to exactly specify the desired state of the cluster. Given an IdealState, Helix takes up the responsibility of ensuring that the cluster reaches the IdealState.  The Helix _controller_ reads the IdealState and then commands each Participant to take appropriate actions to move from one state to another until it matches the IdealState.  These actions are referred to as _transitions_ in Helix.
-
-The next logical question is:  how does the _controller_ compute the transitions required to get to IdealState?  This is where the finite state machine concept comes in. Helix allows applications to plug in a finite state machine.  A state machine consists of the following:
-
-* State: Describes the role of a replica
-* Transition: An action that allows a replica to move from one state to another, thus changing its role.
-
-Here is an example of MasterSlave state machine:
-
-```
-          OFFLINE  | SLAVE  |  MASTER  
-         _____________________________
-        |          |        |         |
-OFFLINE |   N/A    | SLAVE  | SLAVE   |
-        |__________|________|_________|
-        |          |        |         |
-SLAVE   |  OFFLINE |   N/A  | MASTER  |
-        |__________|________|_________|
-        |          |        |         |
-MASTER  | SLAVE    | SLAVE  |   N/A   |
-        |__________|________|_________|
-
-```
-
-Helix allows each resource to be associated with one state machine. This means you can have one resource as an index and another as a database in the same cluster. One can associate each resource with a state machine as follows:
-
-```
-{
-  "id" : "myDB",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "1",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  },
-  "mapFields" : {
-    "myDB" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE",
-    }
-  }
-}
-
-```
-
-### Current State
-
-CurrentState of a resource simply represents its actual state at a Participant. In the below example:
-
-* INSTANCE_NAME: Unique name representing the process
-* SESSION_ID: ID that is automatically assigned every time a process joins the cluster
-
-```
-{
-  "id":"MyResource"
-  ,"simpleFields":{
-    ,"SESSION_ID":"13d0e34675e0002"
-    ,"INSTANCE_NAME":"node1"
-    ,"STATE_MODEL_DEF":"MasterSlave"
-  }
-  ,"mapFields":{
-    "MyResource_0":{
-      "CURRENT_STATE":"SLAVE"
-    }
-    ,"MyResource_1":{
-      "CURRENT_STATE":"MASTER"
-    }
-    ,"MyResource_2":{
-      "CURRENT_STATE":"MASTER"
-    }
-  }
-}
-```
-Each node in the cluster has its own CurrentState.
-
-### External View
-
-In order to communicate with the Participants, external clients need to know the current state of each of the Participants. The external clients are referred to as Spectators. In order to make the life of Spectator simple, Helix provides an ExternalView that is an aggregated view of the current state across all nodes. The ExternalView has a similar format as IdealState.
-
-```
-{
-  "id":"MyResource",
-  "mapFields":{
-    "MyResource_0":{
-      "N1":"SLAVE",
-      "N2":"MASTER",
-      "N3":"OFFLINE"
-    },
-    "MyResource_1":{
-      "N1":"MASTER",
-      "N2":"SLAVE",
-      "N3":"ERROR"
-    },
-    "MyResource_2":{
-      "N1":"MASTER",
-      "N2":"SLAVE",
-      "N3":"SLAVE"
-    }
-  }
-}
-```
-
-### Rebalancer
-
-The core component of Helix is the Controller which runs the Rebalancer algorithm on every cluster event. Cluster events can be one of the following:
-
-* Nodes start/stop and soft/hard failures
-* New nodes are added/removed
-* Ideal state changes
-
-There are few more examples such as configuration changes, etc.  The key takeaway: there are many ways to trigger the rebalancer.
-
-When a rebalancer is run it simply does the following:
-
-* Compares the IdealState and current state
-* Computes the transitions required to reach the IdealState
-* Issues the transitions to each Participant
-
-The above steps happen for every change in the system. Once the current state matches the IdealState, the system is considered stable which implies \'IdealState = CurrentState = ExternalView\'
-
-### Dynamic IdealState
-
-One of the things that makes Helix powerful is that IdealState can be changed dynamically. This means one can listen to cluster events like node failures and dynamically change the ideal state. Helix will then take care of triggering the respective transitions in the system.
-
-Helix comes with a few algorithms to automatically compute the IdealState based on the constraints. For example, if you have a resource of 3 partitions and 2 replicas, Helix can automatically compute the IdealState based on the nodes that are currently active. See the [tutorial](./tutorial_rebalance.html) to find out more about various execution modes of Helix like FULL_AUTO, SEMI_AUTO and CUSTOMIZED. 
-
-
-
-
-
-
-
-
-
-
-
-
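
For reference, the IdealState JSON documents above map directly onto `ZNRecord` fields, so a CUSTOMIZED ideal state can also be built and pushed programmatically. A hedged sketch (assuming the 0.6.x `HelixAdmin` API; cluster, node, and resource names are illustrative, not part of this commit):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.helix.HelixAdmin;
import org.apache.helix.ZNRecord;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.IdealState;

public class CustomIdealState {
  public static void main(String[] args) {
    ZNRecord record = new ZNRecord("myDB");
    record.setSimpleField("NUM_PARTITIONS", "1");
    record.setSimpleField("REPLICAS", "2");
    record.setSimpleField("STATE_MODEL_DEF_REF", "MasterSlave");

    // One partition, mapped to explicit per-node states
    Map<String, String> replicaStates = new HashMap<String, String>();
    replicaStates.put("N1", "MASTER");
    replicaStates.put("N2", "SLAVE");
    record.setMapField("myDB", replicaStates);

    // Push the new ideal state; the controller computes the transitions
    HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
    admin.setResourceIdealState("MYCLUSTER", "myDB", new IdealState(record));
  }
}
```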

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/Quickstart.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Quickstart.md b/site-releases/0.6.2-incubating/src/site/markdown/Quickstart.md
index 533a48c..a85edfb 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/Quickstart.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/Quickstart.md
@@ -21,23 +21,28 @@ under the License.
   <title>Quickstart</title>
 </head>
 
+Quickstart
+---------
+
 Get Helix
 ---------
 
-First, let\'s get Helix, either build it, or download.
+First, let\'s get Helix. Either build it, or download it.
 
 ### Build
 
-    git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-    cd incubator-helix
-    git checkout tags/helix-0.6.2-incubating
-    ./build
-    cd helix-core/target/helix-core-pkg/bin //This folder contains all the scripts used in following sections
-    chmod +x *
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+git checkout tags/helix-0.6.2-incubating
+mvn install package -DskipTests
+cd helix-core/target/helix-core-pkg/bin # This folder contains all the scripts used in the following sections
+chmod +x *
+```
 
 ### Download
 
-Download the 0.6.2-incubating release package [here](./download.html) 
+Download the 0.6.2-incubating release package [here](./download.html)
 
 Overview
 --------
@@ -50,12 +55,12 @@ Let\'s Do It
 
 Helix provides command line interfaces to set up the cluster and view the cluster state. The best way to understand how Helix views a cluster is to build a cluster.
 
-#### First, get to the tools directory
+### Get to the Tools Directory
 
-If you built the code
+If you built the code:
 
 ```
-cd incubator-helix/helix-core/target/helix-core-pkg/bin
+cd helix/incubator-helix/helix-core/target/helix-core-pkg/bin
 ```
 
 If you downloaded the release package, extract it.
@@ -74,66 +79,72 @@ You can observe the components working together in this demo, which does the fol
 * Kill the third node (Helix takes care of failover)
 * Show the cluster state.  Note that the two surviving nodes take over mastership of the partitions from the failed node
 
-##### Run the demo
+### Run the Demo
 
 ```
-cd incubator-helix/helix-core/target/helix-core-pkg/bin
+cd helix/incubator-helix/helix-core/target/helix-core-pkg/bin
 ./quickstart.sh
 ```
 
-##### 2 nodes are set up and the partitions rebalanced
+#### The Initial Setup
+
+Two nodes are set up and the partitions are rebalanced.
 
 The cluster state is as follows:
 
 ```
 CLUSTER STATE: After starting 2 nodes
-	                     localhost_12000	localhost_12001	
-	       MyResource_0	M			S		
-	       MyResource_1	S			M		
-	       MyResource_2	M			S		
-	       MyResource_3	M			S		
-	       MyResource_4	S			M  
-	       MyResource_5	S			M  
+                localhost_12000    localhost_12001
+MyResource_0           M                  S
+MyResource_1           S                  M
+MyResource_2           M                  S
+MyResource_3           M                  S
+MyResource_4           S                  M
+MyResource_5           S                  M
 ```
 
 Note there is one master and one slave per partition.
 
-##### A third node is added and the cluster rebalanced
+#### Add a Node
+
+A third node is added and the cluster is rebalanced.
 
 The cluster state changes to:
 
 ```
 CLUSTER STATE: After adding a third node
-                 	       localhost_12000	    localhost_12001	localhost_12002	
-	       MyResource_0	    S			  M		      S		
-	       MyResource_1	    S			  S		      M	 
-	       MyResource_2	    M			  S	              S  
-	       MyResource_3	    S			  S                   M  
-	       MyResource_4	    M			  S	              S  
-	       MyResource_5	    S			  M                   S  
+               localhost_12000    localhost_12001    localhost_12002
+MyResource_0          S                  M                  S
+MyResource_1          S                  S                  M
+MyResource_2          M                  S                  S
+MyResource_3          S                  S                  M
+MyResource_4          M                  S                  S
+MyResource_5          S                  M                  S
 ```
 
 Note there is one master and _two_ slaves per partition.  This is expected because there are three nodes.
 
-##### Finally, a node is killed to simulate a failure
+#### Kill a Node
+
+Finally, a node is killed to simulate a failure.
 
 Helix makes sure each partition has a master.  The cluster state changes to:
 
 ```
 CLUSTER STATE: After the 3rd node stops/crashes
-                	       localhost_12000	  localhost_12001	localhost_12002	
-	       MyResource_0	    S			M		      -		
-	       MyResource_1	    S			M		      -	 
-	       MyResource_2	    M			S	              -  
-	       MyResource_3	    M			S                     -  
-	       MyResource_4	    M			S	              -  
-	       MyResource_5	    S			M                     -  
+               localhost_12000    localhost_12001    localhost_12002
+MyResource_0          S                  M                  -
+MyResource_1          S                  M                  -
+MyResource_2          M                  S                  -
+MyResource_3          M                  S                  -
+MyResource_4          M                  S                  -
+MyResource_5          S                  M                  -
 ```
 
 
 Long Version
 ------------
-Now you can run the same steps by hand.  In the detailed version, we\'ll do the following:
+Now you can run the same steps by hand.  In this detailed version, we\'ll do the following:
 
 * Define a cluster
 * Add two nodes to the cluster
@@ -142,20 +153,22 @@ Now you can run the same steps by hand.  In the detailed version, we\'ll do the
 * Expand the cluster: add a few nodes and rebalance the partitions
 * Failover: stop a node and verify the mastership transfer
 
-### Install and Start Zookeeper
+### Install and Start ZooKeeper
 
 ZooKeeper can be started in standalone or replicated mode.
 
-More info is available at 
+More information is available at
 
 * http://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html
 * http://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html#sc_zkMulitServerSetup
 
 In this example, let\'s start ZooKeeper in local mode.
 
-##### start zookeeper locally on port 2199
+#### Start ZooKeeper Locally on Port 2199
 
-    ./start-standalone-zookeeper.sh 2199 &
+```
+./start-standalone-zookeeper.sh 2199 &
+```
 
 ### Define the Cluster
 
@@ -165,62 +178,74 @@ zookeeper_address is of the format host:port e.g localhost:2199 for standalone o
 
 Next, we\'ll set up a cluster, MYCLUSTER, with these attributes:
 
-* 3 instances running on localhost at ports 12913,12914,12915 
-* One database named myDB with 6 partitions 
+* 3 instances running on localhost at ports 12913,12914,12915
+* One database named myDB with 6 partitions
 * Each partition will have 3 replicas with 1 master, 2 slaves
-* zookeeper running locally at localhost:2199
+* ZooKeeper running locally at localhost:2199
 
-##### Create the cluster MYCLUSTER
-    ## helix-admin.sh --zkSvr <zk_address> --addCluster <clustername> 
-    ./helix-admin.sh --zkSvr localhost:2199 --addCluster MYCLUSTER 
+#### Create the Cluster MYCLUSTER
 
-##### Add nodes to the cluster
+```
+# ./helix-admin.sh --zkSvr <zk_address> --addCluster <clustername>
+./helix-admin.sh --zkSvr localhost:2199 --addCluster MYCLUSTER
+```
+
+### Add Nodes to the Cluster
 
 In this case we\'ll add three nodes: localhost:12913, localhost:12914, localhost:12915
 
-    ## helix-admin.sh --zkSvr <zk_address>  --addNode <clustername> <host:port>
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12913
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12914
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12915
+```
+# helix-admin.sh --zkSvr <zk_address>  --addNode <clustername> <host:port>
+./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12913
+./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12914
+./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12915
+```
 
-#### Define the resource and partitioning
+### Define the Resource and Partitioning
 
-In this example, the resource is a database, partitioned 6 ways.  (In a production system, it\'s common to over-partition for better load balancing.  Helix has been used in production to manage hundreds of databases each with 10s or 100s of partitions running on 10s of physical nodes.)
+In this example, the resource is a database, partitioned 6 ways. Note that in a production system, it\'s common to over-partition for better load balancing.  Helix has been used in production to manage hundreds of databases each with 10s or 100s of partitions running on 10s of physical nodes.
 
-##### Create a database with 6 partitions using the MasterSlave state model. 
+#### Create a Database with 6 Partitions using the MasterSlave State Model
 
 Helix ensures there will be exactly one master for each partition.
 
-    ## helix-admin.sh --zkSvr <zk_address> --addResource <clustername> <resourceName> <numPartitions> <StateModelName>
-    ./helix-admin.sh --zkSvr localhost:2199 --addResource MYCLUSTER myDB 6 MasterSlave
-   
-##### Now we can let Helix assign partitions to nodes. 
+```
+# helix-admin.sh --zkSvr <zk_address> --addResource <clustername> <resourceName> <numPartitions> <StateModelName>
+./helix-admin.sh --zkSvr localhost:2199 --addResource MYCLUSTER myDB 6 MasterSlave
+```
 
-This command will distribute the partitions amongst all the nodes in the cluster. In this example, each partition has 3 replicas.
+#### Let Helix Assign Partitions to Nodes
 
-    ## helix-admin.sh --zkSvr <zk_address> --rebalance <clustername> <resourceName> <replication factor>
-    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+This command will distribute the partitions amongst all the nodes in the cluster. In this example, each partition has 3 replicas.
 
-Now the cluster is defined in Zookeeper.  The nodes (localhost:12913, localhost:12914, localhost:12915) and resource (myDB, with 6 partitions using the MasterSlave model).  And the _ideal state_ has been calculated, assuming a replication factor of 3.
+```
+# helix-admin.sh --zkSvr <zk_address> --rebalance <clustername> <resourceName> <replication factor>
+./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+```
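
The same setup can be done programmatically. A hedged sketch (assuming the 0.6.2 `HelixAdmin` API; it mirrors the CLI commands above and is not part of this commit):

```java
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.InstanceConfig;

public class ClusterSetup {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
    admin.addCluster("MYCLUSTER");
    for (int port : new int[] { 12913, 12914, 12915 }) {
      InstanceConfig config = new InstanceConfig("localhost_" + port);
      config.setHostName("localhost");
      config.setPort(Integer.toString(port));
      admin.addInstance("MYCLUSTER", config);
    }
    // A database with 6 partitions using the MasterSlave state model,
    // then let Helix compute the placement with 3 replicas per partition
    admin.addResource("MYCLUSTER", "myDB", 6, "MasterSlave");
    admin.rebalance("MYCLUSTER", "myDB", 3);
  }
}
```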
 
-##### Start the Helix Controller
+Now the cluster is defined in ZooKeeper.  The nodes (localhost:12913, localhost:12914, localhost:12915) and resource (myDB, with 6 partitions using the MasterSlave model) are all properly configured.  And the _IdealState_ has been calculated, assuming a replication factor of 3.
 
-Now that the cluster is defined in Zookeeper, the Helix controller can manage the cluster.
+### Start the Helix Controller
 
-    ## Start the cluster manager, which will manage MYCLUSTER
-    ./run-helix-controller.sh --zkSvr localhost:2199 --cluster MYCLUSTER 2>&1 > /tmp/controller.log &
+Now that the cluster is defined in ZooKeeper, the Helix controller can manage the cluster.
 
-##### Start up the cluster to be managed
+```
+# Start the cluster manager, which will manage MYCLUSTER
+./run-helix-controller.sh --zkSvr localhost:2199 --cluster MYCLUSTER > /tmp/controller.log 2>&1 &
+```
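
As a hedged alternative (assuming the 0.6.2 API; not part of this commit), the controller can also be embedded in a Java process:

```java
import org.apache.helix.HelixManager;
import org.apache.helix.controller.HelixControllerMain;

public class ControllerLauncher {
  public static void main(String[] args) {
    // Starts a standalone controller that manages MYCLUSTER
    HelixManager controller = HelixControllerMain.startHelixController(
        "localhost:2199", "MYCLUSTER", "controller_0",
        HelixControllerMain.STANDALONE);
  }
}
```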
 
-We\'ve started up Zookeeper, defined the cluster, the resources, the partitioning, and started up the Helix controller.  Next, we\'ll start up the nodes of the system to be managed.  Each node is a Participant, which is an instance of the system component to be managed.  Helix assigns work to Participants, keeps track of their roles and health, and takes action when a node fails.
+### Start up the Cluster to be Managed
 
-    # start up each instance.  These are mock implementations that are actively managed by Helix
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12913 --stateModelType MasterSlave 2>&1 > /tmp/participant_12913.log 
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12914 --stateModelType MasterSlave 2>&1 > /tmp/participant_12914.log
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12915 --stateModelType MasterSlave 2>&1 > /tmp/participant_12915.log
+We\'ve started up ZooKeeper, defined the cluster, the resources, the partitioning, and started up the Helix controller.  Next, we\'ll start up the nodes of the system to be managed.  Each node is a Participant, which is an instance of the system component to be managed.  Helix assigns work to Participants, keeps track of their roles and health, and takes action when a node fails.
 
+```
+# start up each instance.  These are mock implementations that are actively managed by Helix
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12913 --stateModelType MasterSlave 2>&1 > /tmp/participant_12913.log
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12914 --stateModelType MasterSlave 2>&1 > /tmp/participant_12914.log
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12915 --stateModelType MasterSlave 2>&1 > /tmp/participant_12915.log
+```
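+
+Roughly, each mock participant connects with a HelixManager and registers a state model factory for the MasterSlave model. A minimal sketch (here `yourFactory` stands in for your own StateModelFactory implementation; it is not part of this demo):
+
+```
+HelixManager manager = HelixManagerFactory.getZKHelixManager("MYCLUSTER", "localhost_12913",
+    InstanceType.PARTICIPANT, "localhost:2199");
+manager.getStateMachineEngine().registerStateModelFactory("MasterSlave", yourFactory);
+manager.connect();
+```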
 
-#### Inspect the Cluster
+### Inspect the Cluster
 
 Now, let\'s see the Helix view of our cluster.  We\'ll work our way down as follows:
 
@@ -233,17 +258,17 @@ Clusters -> MYCLUSTER -> instances -> instance detail
 A single Helix controller can manage multiple clusters, though so far, we\'ve only defined one cluster.  Let\'s see:
 
 ```
-## List existing clusters
-./helix-admin.sh --zkSvr localhost:2199 --listClusters        
+# List existing clusters
+./helix-admin.sh --zkSvr localhost:2199 --listClusters
 
 Existing clusters:
 MYCLUSTER
 ```
-                                       
-Now, let\'s see the Helix view of MYCLUSTER
+
+Now, let\'s see the Helix view of MYCLUSTER:
 
 ```
-## helix-admin.sh --zkSvr <zk_address> --listClusterInfo <clusterName> 
+# helix-admin.sh --zkSvr <zk_address> --listClusterInfo <clusterName>
 ./helix-admin.sh --zkSvr localhost:2199 --listClusterInfo MYCLUSTER
 
 Existing resources in cluster MYCLUSTER:
@@ -254,11 +279,10 @@ localhost_12914
 localhost_12913
 ```
 
-
-Let\'s look at the details of an instance
+Let\'s look at the details of an instance:
 
 ```
-## ./helix-admin.sh --zkSvr <zk_address> --listInstanceInfo <clusterName> <InstanceName>    
+# ./helix-admin.sh --zkSvr <zk_address> --listInstanceInfo <clusterName> <InstanceName>
 ./helix-admin.sh --zkSvr localhost:2199 --listInstanceInfo MYCLUSTER localhost_12913
 
 InstanceConfig: {
@@ -275,11 +299,11 @@ InstanceConfig: {
 }
 ```
 
-    
-##### Query info of a resource
+
+#### Query Information about a Resource
 
 ```
-## helix-admin.sh --zkSvr <zk_address> --listResourceInfo <clusterName> <resourceName>
+# helix-admin.sh --zkSvr <zk_address> --listResourceInfo <clusterName> <resourceName>
 ./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
 
 IdealState for myDB:
@@ -326,6 +350,7 @@ IdealState for myDB:
     "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
   },
   "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
     "REBALANCE_MODE" : "SEMI_AUTO",
     "NUM_PARTITIONS" : "6",
     "REPLICAS" : "3",
@@ -379,30 +404,38 @@ ExternalView for myDB:
 
 Now, let\'s look at one of the partitions:
 
-    ## helix-admin.sh --zkSvr <zk_address> --listPartitionInfo <clusterName> <resource> <partition> 
-    ./helix-admin.sh --zkSvr localhost:2199 --listPartitionInfo MYCLUSTER myDB myDB_0
+```
+# helix-admin.sh --zkSvr <zk_address> --listPartitionInfo <clusterName> <resource> <partition>
+./helix-admin.sh --zkSvr localhost:2199 --listPartitionInfo MYCLUSTER myDB myDB_0
+```
 
-#### Expand the Cluster
+### Expand the Cluster
 
 Next, we\'ll show how Helix does the work that you\'d otherwise have to build into your system.  When you add capacity to your cluster, you want the work to be evenly distributed.  In this example, we started with 3 nodes and 6 partitions.  The partitions were evenly balanced: 2 masters and 4 slaves per node. Let\'s add 3 more nodes: localhost:12916, localhost:12917, localhost:12918.
 
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12916
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12917
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12918
+```
+./helix-admin.sh --zkSvr localhost:2199 --addNode MYCLUSTER localhost:12916
+./helix-admin.sh --zkSvr localhost:2199 --addNode MYCLUSTER localhost:12917
+./helix-admin.sh --zkSvr localhost:2199 --addNode MYCLUSTER localhost:12918
+```
 
 And start up these instances:
 
-    # start up each instance.  These are mock implementations that are actively managed by Helix
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12916 --stateModelType MasterSlave 2>&1 > /tmp/participant_12916.log
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12917 --stateModelType MasterSlave 2>&1 > /tmp/participant_12917.log
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12918 --stateModelType MasterSlave 2>&1 > /tmp/participant_12918.log
+```
+# start up each instance.  These are mock implementations that are actively managed by Helix
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12916 --stateModelType MasterSlave 2>&1 > /tmp/participant_12916.log
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12917 --stateModelType MasterSlave 2>&1 > /tmp/participant_12917.log
+./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12918 --stateModelType MasterSlave 2>&1 > /tmp/participant_12918.log
+```
 
 
 And now, let Helix do the work for you.  To shift the work, simply rebalance.  After the rebalance, each node will have one master and two slaves.
 
-    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+```
+./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+```
 
-#### View the cluster
+### View the Cluster
 
 OK, let\'s see how it looks:
 
@@ -454,6 +487,7 @@ IdealState for myDB:
     "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
   },
   "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
     "REBALANCE_MODE" : "SEMI_AUTO",
     "NUM_PARTITIONS" : "6",
     "REPLICAS" : "3",
@@ -507,7 +541,7 @@ ExternalView for myDB:
 
 Mission accomplished.  The partitions are nicely balanced.
 
-#### How about Failover?
+### How about Failover?
 
 Building a fault-tolerant system isn\'t trivial, but with Helix, it\'s easy.  Helix detects a failed instance and triggers mastership transfer automatically.
 
@@ -563,6 +597,7 @@ IdealState for myDB:
     "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
   },
   "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
     "REBALANCE_MODE" : "SEMI_AUTO",
     "NUM_PARTITIONS" : "6",
     "REPLICAS" : "3",
@@ -612,15 +647,17 @@ ExternalView for myDB:
 
 As we\'ve seen in this Quickstart, Helix takes care of partitioning, load balancing, elasticity, failure detection and recovery.
 
-##### ZooInspector
+### ZooInspector
 
 You can view all of the underlying data by going directly to ZooKeeper.  Use the ZooInspector tool that comes with ZooKeeper to browse the data. It is a Java applet (make sure you have X Windows).
 
 To start ZooInspector, run the following command from <zk_install_directory>/contrib/ZooInspector:
-      
-    java -cp zookeeper-3.3.3-ZooInspector.jar:lib/jtoaster-1.0.4.jar:../../lib/log4j-1.2.15.jar:../../zookeeper-3.3.3.jar org.apache.zookeeper.inspector.ZooInspector
 
-#### Next
+```
+java -cp zookeeper-3.3.3-ZooInspector.jar:lib/jtoaster-1.0.4.jar:../../lib/log4j-1.2.15.jar:../../zookeeper-3.3.3.jar org.apache.zookeeper.inspector.ZooInspector
+```
+
+### Next
 
-Now that you understand the idea of Helix, read the [tutorial](./tutorial.html) to learn how to choose the right state model and constraints for your system, and how to implement it.  In many cases, the built-in features meet your requirements.  And best of all, Helix is a customizable framework, so you can plug in your own behavior, while retaining the automation provided by Helix.
+Now that you understand the idea of Helix, read the [tutorial](./Tutorial.html) to learn how to choose the right state model and constraints for your system, and how to implement it.  In many cases, the built-in features meet your requirements.  And best of all, Helix is a customizable framework, so you can plug in your own behavior, while retaining the automation provided by Helix.
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/Tutorial.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Tutorial.md b/site-releases/0.6.2-incubating/src/site/markdown/Tutorial.md
index 61221b7..2c51b72 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/Tutorial.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/Tutorial.md
@@ -30,7 +30,7 @@ Convention: we first cover the _basic_ approach, which is the easiest to impleme
 
 ### Prerequisites
 
-1. Read [Concepts/Terminology](./Concepts.html) and [Architecture](./Architecture.html)
+1. Read [Concepts/Terminology](../../Concepts.html) and [Architecture](../../Architecture.html)
 2. Read the [Quickstart guide](./Quickstart.html) to learn how Helix models and manages a cluster
 3. Install Helix source.  See: [Quickstart](./Quickstart.html) for the steps.
 
@@ -53,29 +53,29 @@ Convention: we first cover the _basic_ approach, which is the easiest to impleme
 
 First, we need to set up the system.  Let\'s walk through the steps in building a distributed system using Helix.
 
-### Start Zookeeper
+#### Start ZooKeeper
 
-This starts a zookeeper in standalone mode. For production deployment, see [Apache Zookeeper](http://zookeeper.apache.org) for instructions.
+This starts ZooKeeper in standalone mode. For production deployments, see [Apache ZooKeeper](http://zookeeper.apache.org) for instructions.
 
 ```
-    ./start-standalone-zookeeper.sh 2199 &
+./start-standalone-zookeeper.sh 2199 &
 ```
 
-### Create a cluster
+#### Create a Cluster
 
-Creating a cluster will define the cluster in appropriate znodes on zookeeper.   
+Creating a cluster will define the cluster in the appropriate ZNodes on ZooKeeper.
 
-Using the java API:
+Using the Java API:
 
 ```
-    // Create setup tool instance
-    // Note: ZK_ADDRESS is the host:port of Zookeeper
-    String ZK_ADDRESS = "localhost:2199";
-    admin = new ZKHelixAdmin(ZK_ADDRESS);
-
-    String CLUSTER_NAME = "helix-demo";
-    //Create cluster namespace in zookeeper
-    admin.addCluster(CLUSTER_NAME);
+// Create setup tool instance
+// Note: ZK_ADDRESS is the host:port of ZooKeeper
+String ZK_ADDRESS = "localhost:2199";
+admin = new ZKHelixAdmin(ZK_ADDRESS);
+
+String CLUSTER_NAME = "helix-demo";
+// Create the cluster namespace in ZooKeeper
+admin.addCluster(CLUSTER_NAME);
 ```
 
 OR
@@ -83,56 +83,54 @@ OR
 Using the command-line interface:
 
 ```
-    ./helix-admin.sh --zkSvr localhost:2199 --addCluster helix-demo 
+./helix-admin.sh --zkSvr localhost:2199 --addCluster helix-demo
 ```
 
 
-### Configure the nodes of the cluster
+#### Configure the Nodes of the Cluster
 
-First we\'ll add new nodes to the cluster, then configure the nodes in the cluster. Each node in the cluster must be uniquely identifiable. 
+First, we\'ll add new nodes to the cluster, then configure them. Each node in the cluster must be uniquely identifiable.
 The most commonly used convention is hostname:port.
 
 ```
-    String CLUSTER_NAME = "helix-demo";
-    int NUM_NODES = 2;
-    String hosts[] = new String[]{"localhost","localhost"};
-    String ports[] = new String[]{7000,7001};
-    for (int i = 0; i < NUM_NODES; i++)
-    {
-      
-      InstanceConfig instanceConfig = new InstanceConfig(hosts[i]+ "_" + ports[i]);
-      instanceConfig.setHostName(hosts[i]);
-      instanceConfig.setPort(ports[i]);
-      instanceConfig.setInstanceEnabled(true);
-
-      //Add additional system specific configuration if needed. These can be accessed during the node start up.
-      instanceConfig.getRecord().setSimpleField("key", "value");
-      admin.addInstance(CLUSTER_NAME, instanceConfig);
-      
-    }
+String CLUSTER_NAME = "helix-demo";
+int NUM_NODES = 2;
+String hosts[] = new String[]{"localhost","localhost"};
+String ports[] = new String[]{"7000", "7001"};
+for (int i = 0; i < NUM_NODES; i++)
+{
+  InstanceConfig instanceConfig = new InstanceConfig(hosts[i] + "_" + ports[i]);
+  instanceConfig.setHostName(hosts[i]);
+  instanceConfig.setPort(ports[i]);
+  instanceConfig.setInstanceEnabled(true);
+
+  // Add additional system-specific configuration if needed. These can be accessed during node startup.
+  instanceConfig.getRecord().setSimpleField("key", "value");
+  admin.addInstance(CLUSTER_NAME, instanceConfig);
+}
 ```
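+
+Or, using the command-line interface (a sketch of the equivalent for the same two nodes):
+
+```
+./helix-admin.sh --zkSvr localhost:2199 --addNode helix-demo localhost:7000
+./helix-admin.sh --zkSvr localhost:2199 --addNode helix-demo localhost:7001
+```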
 
-### Configure the resource
+#### Configure the Resource
 
-A _resource_ represents the actual task performed by the nodes. It can be a database, index, topic, queue or any other processing entity.
-A _resource_ can be divided into many sub-parts known as _partitions_.
+A __resource__ represents the actual task performed by the nodes. It can be a database, index, topic, queue or any other processing entity.
+A resource can be divided into many sub-parts known as __partitions__.
 
 
-#### Define the _state model_ and _constraints_
+##### Define the State Model and Constraints
 
-For scalability and fault tolerance, each partition can have one or more replicas. 
-The _state model_ allows one to declare the system behavior by first enumerating the various STATES, and the TRANSITIONS between them.
+For scalability and fault tolerance, each partition can have one or more replicas.
+The __state model__ allows one to declare the system behavior by first enumerating the various STATES, and the TRANSITIONS between them.
 A simple model is ONLINE-OFFLINE where ONLINE means the task is active and OFFLINE means it\'s not active.
-You can also specify how many replicas must be in each state, these are known as _constraints_.
+You can also specify how many replicas must be in each state; these are known as __constraints__.
 For example, in a search system, one might need more than one node serving the same index to handle the load.
 
-The allowed states: 
+The allowed states:
 
 * MASTER
 * SLAVE
 * OFFLINE
 
-The allowed transitions: 
+The allowed transitions:
 
 * OFFLINE to SLAVE
 * SLAVE to OFFLINE
@@ -144,62 +142,60 @@ The constraints:
 * no more than 1 MASTER per partition
 * the rest of the replicas should be slaves
 
-The following snippet shows how to declare the _state model_ and _constraints_ for the MASTER-SLAVE model.
+The following snippet shows how to declare the state model and constraints for the MASTER-SLAVE model.
 
 ```
+StateModelDefinition.Builder builder = new StateModelDefinition.Builder(STATE_MODEL_NAME);
 
-    StateModelDefinition.Builder builder = new StateModelDefinition.Builder(STATE_MODEL_NAME);
+// Add states and their rank to indicate priority. A lower rank corresponds to a higher priority
+builder.addState(MASTER, 1);
+builder.addState(SLAVE, 2);
+builder.addState(OFFLINE);
 
-    // Add states and their rank to indicate priority. A lower rank corresponds to a higher priority
-    builder.addState(MASTER, 1);
-    builder.addState(SLAVE, 2);
-    builder.addState(OFFLINE);
+// Set the initial state when the node starts
+builder.initialState(OFFLINE);
 
-    // Set the initial state when the node starts
-    builder.initialState(OFFLINE);
+// Add transitions between the states.
+builder.addTransition(OFFLINE, SLAVE);
+builder.addTransition(SLAVE, OFFLINE);
+builder.addTransition(SLAVE, MASTER);
+builder.addTransition(MASTER, SLAVE);
 
-    // Add transitions between the states.
-    builder.addTransition(OFFLINE, SLAVE);
-    builder.addTransition(SLAVE, OFFLINE);
-    builder.addTransition(SLAVE, MASTER);
-    builder.addTransition(MASTER, SLAVE);
+// set constraints on states
 
-    // set constraints on states.
+// static constraint: upper bound of 1 MASTER
+builder.upperBound(MASTER, 1);
 
-    // static constraint: upper bound of 1 MASTER
-    builder.upperBound(MASTER, 1);
+// dynamic constraint: R means it should be derived based on the replication factor for the cluster
+// this allows a different replication factor for each resource without
+// having to define a new state model
 
-    // dynamic constraint: R means it should be derived based on the replication factor for the cluster
-    // this allows a different replication factor for each resource without 
-    // having to define a new state model
-    //
-    builder.dynamicUpperBound(SLAVE, "R");
+builder.dynamicUpperBound(SLAVE, "R");
 
-    StateModelDefinition statemodelDefinition = builder.build();
-    admin.addStateModelDef(CLUSTER_NAME, STATE_MODEL_NAME, myStateModel);
+StateModelDefinition statemodelDefinition = builder.build();
+admin.addStateModelDef(CLUSTER_NAME, STATE_MODEL_NAME, statemodelDefinition);
 ```
 
-#### Assigning partitions to nodes
+##### Assigning Partitions to Nodes
 
-The final goal of Helix is to ensure that the constraints on the state model are satisfied. 
-Helix does this by assigning a STATE to a partition (such as MASTER, SLAVE), and placing it on a particular node.
+The final goal of Helix is to ensure that the constraints on the state model are satisfied.
+Helix does this by assigning a __state__ to a partition (such as MASTER, SLAVE), and placing it on a particular node.
 
-There are 3 assignment modes Helix can operate on
+There are 3 assignment modes Helix can operate in:
 
 * FULL_AUTO: Helix decides the placement and state of a partition.
 * SEMI_AUTO: Application decides the placement but Helix decides the state of a partition.
 * CUSTOMIZED: Application controls the placement and state of a partition.
 
-For more info on the assignment modes, see [Rebalancing Algorithms](./tutorial_rebalance.html) section of the tutorial.
+For more information on the assignment modes, see the [Rebalancing Algorithms](./tutorial_rebalance.html) section of this tutorial.
 
 ```
-    String RESOURCE_NAME = "MyDB";
-    int NUM_PARTITIONS = 6;
+String STATE_MODEL_NAME = "MasterSlave";
-    String MODE = "SEMI_AUTO";
-    int NUM_REPLICAS = 2;
-
-    admin.addResource(CLUSTER_NAME, RESOURCE_NAME, NUM_PARTITIONS, STATE_MODEL_NAME, MODE);
-    admin.rebalance(CLUSTER_NAME, RESOURCE_NAME, NUM_REPLICAS);
+String RESOURCE_NAME = "MyDB";
+int NUM_PARTITIONS = 6;
+STATE_MODEL_NAME = "MasterSlave";
+String MODE = "SEMI_AUTO";
+int NUM_REPLICAS = 2;
+
+admin.addResource(CLUSTER_NAME, RESOURCE_NAME, NUM_PARTITIONS, STATE_MODEL_NAME, MODE);
+admin.rebalance(CLUSTER_NAME, RESOURCE_NAME, NUM_REPLICAS);
 ```
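+
+And the command-line equivalent, following the Quickstart form (a sketch; the rebalance mode argument is left at its default here):
+
+```
+./helix-admin.sh --zkSvr localhost:2199 --addResource helix-demo MyDB 6 MasterSlave
+./helix-admin.sh --zkSvr localhost:2199 --rebalance helix-demo MyDB 2
+```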
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/index.md b/site-releases/0.6.2-incubating/src/site/markdown/index.md
index a09a70d..2214ff4 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/index.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/index.md
@@ -18,21 +18,18 @@ under the License.
 -->
 
 <head>
-  <title>Home</title>
+  <title>Helix 0.6.2-incubating Documentation</title>
 </head>
 
-Navigating the Documentation
-----------------------------
+### Get Helix
 
-### Conceptual Understanding
+[Download](./download.html)
 
-[Concepts / Terminology](./Concepts.html)
+[Building](./Building.html)
 
-[Architecture](./Architecture.html)
+[Release Notes](./releasenotes/release-0.6.2-incubating.html)
 
-### Hands-on Helix
-
-[Getting Helix](./Building.html)
+### Hands-On
 
 [Quickstart](./Quickstart.html)
 
@@ -50,9 +47,5 @@ Navigating the Documentation
 
 [Service discovery](./recipes/service_discovery.html)
 
-[Distributed Task DAG Execution](./recipes/task_dag_execution.html)
-
-### Download
-
-[0.6.2-incubating](./download.html)
+[Distributed task DAG execution](./recipes/task_dag_execution.html)
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/0.6.2-incubating/src/site/markdown/recipes/lock_manager.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/lock_manager.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/lock_manager.md
index 252ace7..5cf30f1 100644
--- a/site-releases/0.6.2-incubating/src/site/markdown/recipes/lock_manager.md
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/lock_manager.md
@@ -16,21 +16,21 @@ KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
 -->
-Distributed lock manager
+Distributed Lock Manager
 ------------------------
-Distributed locks are used to synchronize accesses shared resources. Most applications use Zookeeper to model the distributed locks. 
+Distributed locks are used to synchronize access to shared resources. Most applications today use ZooKeeper to model distributed locks.
 
-The simplest way to model a lock using zookeeper is (See Zookeeper leader recipe for an exact and more advanced solution)
+The simplest way to model a lock using ZooKeeper is as follows (see the ZooKeeper leader recipe for an exact and more advanced solution); a minimal code sketch follows the list:
 
-* Each process tries to create an emphemeral node.
-* If can successfully create it then, it acquires the lock
-* Else it will watch on the znode and try to acquire the lock again if the current lock holder disappears 
+* Each process tries to create an ephemeral node
+* If the node is successfully created, the process acquires the lock
+* Otherwise, it will watch the ZNode and try to acquire the lock again if the current lock holder disappears
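+
+A minimal sketch of this naive approach with the raw ZooKeeper client (classes from org.apache.zookeeper; the lock path, session timeout, and retry handling are illustrative assumptions):
+
+```
+ZooKeeper zk = new ZooKeeper("localhost:2199", 30000, null);
+try {
+  // creating the ephemeral node acquires the lock; it vanishes if our session dies
+  zk.create("/mylock", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+} catch (KeeperException.NodeExistsException e) {
+  // the lock is held elsewhere; set a watch and retry the create once the node is deleted
+  zk.exists("/mylock", true);
+}
+```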
 
-This is good enough if there is only one lock. But in practice, an application will need many such locks. Distributing and managing the locks among difference process becomes challenging. Extending such a solution to many locks will result in
+This is good enough if there is only one lock. But in practice, an application will need many such locks. Distributing and managing the locks among different processes becomes challenging. Extending such a solution to many locks will result in:
 
-* Uneven distribution of locks among nodes, the node that starts first will acquire all the lock. Nodes that start later will be idle.
-* When a node fails, how the locks will be distributed among remaining nodes is not predicable. 
-* When new nodes are added the current nodes dont relinquish the locks so that new nodes can acquire some locks
+* Uneven distribution of locks among nodes; the node that starts first will acquire all the locks. Nodes that start later will be idle.
+* When a node fails, how the locks will be distributed among the remaining nodes is not predictable.
+* When new nodes are added, the current nodes don\'t relinquish any locks, so the new nodes cannot acquire any
 
 In other words, we want a system that satisfies the following requirements:
 
@@ -38,28 +38,29 @@ In other words we want a system to satisfy the following requirements.
 * If a node fails, the locks that were acquired by that node should be evenly distributed among other nodes
 * If nodes are added, locks must be evenly re-distributed among nodes.
 
-Helix provides a simple and elegant solution to this problem. Simply specify the number of locks and Helix will ensure that above constraints are satisfied. 
+Helix provides a simple and elegant solution to this problem. Simply specify the number of locks and Helix will ensure that the above constraints are satisfied.
 
-To quickly see this working run the lock-manager-demo script where 12 locks are evenly distributed among three nodes, and when a node fails, the locks get re-distributed among remaining two nodes. Note that Helix does not re-shuffle the locks completely, instead it simply distributes the locks relinquished by dead node among 2 remaining nodes evenly.
+To quickly see this working, run the `lock-manager-demo` script, where 12 locks are evenly distributed among three nodes; when a node fails, the locks get re-distributed among the remaining two nodes. Note that Helix does not re-shuffle the locks completely; instead, it evenly distributes the locks relinquished by the dead node among the 2 remaining nodes.
 
 ----------------------------------------------------------------------------------------
 
-#### Short version
- This version starts multiple threads with in same process to simulate a multi node deployment. Try the long version to get a better idea of how it works.
- 
+### Short Version
+This version starts multiple threads within the same process to simulate a multi-node deployment. Try the long version to get a better idea of how it works.
+
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
 cd incubator-helix
+git checkout tags/helix-0.6.2-incubating
 mvn clean install package -DskipTests
 cd recipes/distributed-lock-manager/target/distributed-lock-manager-pkg/bin
 chmod +x *
 ./lock-manager-demo
 ```
 
-##### Output
+#### Output
 
 ```
-./lock-manager-demo 
+./lock-manager-demo
 STARTING localhost_12000
 STARTING localhost_12002
 STARTING localhost_12001
@@ -117,83 +118,74 @@ lock-group_9    localhost_12001
 
 ----------------------------------------------------------------------------------------
 
-#### Long version
+### Long Version
 This provides more details on how to set up the cluster and where to plug in application code.
 
-##### start zookeeper
+#### Start ZooKeeper
 
 ```
 ./start-standalone-zookeeper 2199
 ```
 
-##### Create a cluster
+#### Create a Cluster
 
 ```
 ./helix-admin --zkSvr localhost:2199 --addCluster lock-manager-demo
 ```
 
-##### Create a lock group
+#### Create a Lock Group
 
-Create a lock group and specify the number of locks in the lock group. 
+Create a lock group and specify the number of locks in the lock group.
 
 ```
-./helix-admin --zkSvr localhost:2199  --addResource lock-manager-demo lock-group 6 OnlineOffline FULL_AUTO
+./helix-admin --zkSvr localhost:2199 --addResource lock-manager-demo lock-group 6 OnlineOffline AUTO_REBALANCE
 ```
 
-##### Start the nodes
+#### Start the Nodes
 
-Create a Lock class that handles the callbacks. 
+Create a Lock class that handles the callbacks.
 
 ```
-
-public class Lock extends StateModel
-{
+public class Lock extends StateModel {
   private String lockName;
 
-  public Lock(String lockName)
-  {
+  public Lock(String lockName) {
     this.lockName = lockName;
   }
 
-  public void lock(Message m, NotificationContext context)
-  {
+  public void lock(Message m, NotificationContext context) {
     System.out.println(" acquired lock:"+ lockName );
   }
 
-  public void release(Message m, NotificationContext context)
-  {
+  public void release(Message m, NotificationContext context) {
     System.out.println(" releasing lock:"+ lockName );
   }
 
 }
-
 ```
 
-LockFactory that creates the lock
- 
+and a LockFactory that creates Locks
+
 ```
-public class LockFactory extends StateModelFactory<Lock>{
-    
-    /* Instantiates the lock handler, one per lockName*/
-    public Lock create(String lockName)
-    {
+public class LockFactory extends StateModelFactory<Lock> {
+    /* Instantiates the lock handler, one per lockName */
+    public Lock create(String lockName) {
         return new Lock(lockName);
-    }   
+    }
 }
 ```
 
-At node start up, simply join the cluster and helix will invoke the appropriate callbacks on Lock instance. One can start any number of nodes and Helix detects that a new node has joined the cluster and re-distributes the locks automatically.
+At node startup, simply join the cluster and Helix will invoke the appropriate callbacks on the appropriate Lock instance. You can start any number of nodes; Helix detects that a new node has joined the cluster and re-distributes the locks automatically.
 
 ```
-public class LockProcess{
-
-  public static void main(String args){
+public class LockProcess {
+  public static void main(String[] args) throws Exception {
     String zkAddress= "localhost:2199";
     String clusterName = "lock-manager-demo";
     //Give a unique id to each process, most commonly used format hostname_port
     String instanceName ="localhost_12000";
     ZKHelixAdmin helixAdmin = new ZKHelixAdmin(zkAddress);
-    //configure the instance and provide some metadata 
+    //configure the instance and provide some metadata
     InstanceConfig config = new InstanceConfig(instanceName);
     config.setHostName("localhost");
     config.setPort("12000");
@@ -207,47 +199,38 @@ public class LockProcess{
     manager.getStateMachineEngine().registerStateModelFactory("OnlineOffline", modelFactory);
     manager.connect();
     Thread.currentThread().join();
-    }
-
+  }
 }
 ```
 
-##### Start the controller
+#### Start the Controller
 
-Controller can be started either as a separate process or can be embedded within each node process
+The controller can be started either as a separate process or embedded within each node process.
 
-###### Separate process
-This is recommended when number of nodes in the cluster >100. For fault tolerance, you can run multiple controllers on different boxes.
+##### Separate Process
+This is recommended when the number of nodes in the cluster is \> 100. For fault tolerance, you can run multiple controllers on different boxes.
 
 ```
 ./run-helix-controller --zkSvr localhost:2199 --cluster lock-manager-demo 2>&1 > /tmp/controller.log &
 ```
 
-###### Embedded within the node process
+##### Embedded Within the Node Process
 This is recommended when the number of nodes in the cluster is less than 100. To start a controller from each process, simply add the following lines to LockProcess:
 
 ```
-public class LockProcess{
-
-  public static void main(String args){
+public class LockProcess {
+  public static void main(String[] args) throws Exception {
     String zkAddress= "localhost:2199";
     String clusterName = "lock-manager-demo";
-    .
-    .
+    // .
+    // .
     manager.connect();
     HelixManager controller;
-    controller = HelixControllerMain.startHelixController(zkAddress, 
+    controller = HelixControllerMain.startHelixController(zkAddress,
                                                           clusterName,
-                                                          "controller", 
+                                                          "controller",
                                                           HelixControllerMain.STANDALONE);
     Thread.currentThread().join();
   }
 }
 ```
-
-----------------------------------------------------------------------------------------
-
-
-
-
-