Posted to commits@helix.apache.org by ka...@apache.org on 2014/01/02 01:14:02 UTC

[02/31] Redesign documentation for 0.6.2, 0.7.0, and trunk

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/recipes/service_discovery.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/service_discovery.md b/site-releases/trunk/src/site/markdown/recipes/service_discovery.md
index 8e06ead..5338d82 100644
--- a/site-releases/trunk/src/site/markdown/recipes/service_discovery.md
+++ b/site-releases/trunk/src/site/markdown/recipes/service_discovery.md
@@ -19,73 +19,67 @@ under the License.
 Service Discovery
 -----------------
 
-One of the common usage of zookeeper is enable service discovery. 
-The basic idea is that when a server starts up it advertises its configuration/metadata such as host name port etc on zookeeper. 
-This allows clients to dynamically discover the servers that are currently active. One can think of this like a service registry to which a server registers when it starts and 
-is automatically deregistered when it shutdowns or crashes. In many cases it serves as an alternative to vips.
+One of the common usages of ZooKeeper is to enable service discovery.
+The basic idea is that when a server starts up, it advertises its configuration/metadata, such as its hostname and port, on ZooKeeper.
+This allows clients to dynamically discover the servers that are currently active. One can think of this like a service registry to which a server registers when it starts and
+is automatically deregistered when it shuts down or crashes. In many cases it serves as an alternative to VIPs.
 
-The core idea behind this is to use zookeeper ephemeral nodes. The ephemeral nodes are created when the server registers and all its metadata is put into a znode. 
-When the server shutdowns, zookeeper automatically removes this znode. 
+The core idea behind this is to use ZooKeeper ephemeral nodes. An ephemeral node is created when the server registers, and all of its metadata is put into the ZNode.
+When the server shuts down, ZooKeeper automatically removes this ZNode.
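+
+As a rough illustration (not the recipe's actual code), here is a minimal sketch of such a registration using the plain ZooKeeper Java client; the path, port, and payload below are made-up placeholders, and error handling is omitted:
+
+```
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.ZooDefs;
+import org.apache.zookeeper.ZooKeeper;
+
+// Connect to ZooKeeper (a connection Watcher can be passed instead of null).
+ZooKeeper zk = new ZooKeeper("localhost:2199", 30000, null);
+
+// Register this server by creating an ephemeral ZNode that holds its metadata.
+// The parent path is assumed to exist; the node disappears automatically when
+// the session ends, i.e. when the server shuts down or crashes.
+String metadata = "{\"host\":\"host.x.y.z\",\"port\":12000}";
+zk.create("/services/myService/host.x.y.z_12000", metadata.getBytes(),
+    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+```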
 
-There are two ways the clients can dynamically discover the active servers
+There are two ways the clients can dynamically discover the active servers:
 
-#### ZOOKEEPER WATCH
+### ZooKeeper Watch
 
-Clients can set a child watch under specific path on zookeeper. 
-When a new service is registered/deregistered, zookeeper notifies the client via watchevent and the client can read the list of services. Even though this looks trivial, 
-there are lot of things one needs to keep in mind like ensuring that you first set the watch back on zookeeper before reading data from zookeeper.
+Clients can set a child watch under a specific path on ZooKeeper.
+When a new service is registered or deregistered, ZooKeeper notifies the client via a watch event and the client can read the list of services. Even though this looks trivial,
+there are a lot of things one needs to keep in mind, like ensuring that you first set the watch back on ZooKeeper before reading data.
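+
+As a hedged sketch of this pattern with the plain ZooKeeper client (the path is illustrative): `getChildren` both reads the list and re-arms the watch in a single call, so the handler re-registers the watch by re-reading before it acts on the data.
+
+```
+import java.util.List;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooKeeper;
+
+public class ServiceWatcher implements Watcher {
+  private final ZooKeeper zk;
+  private final String basePath; // e.g. "/services/myService"
+
+  public ServiceWatcher(ZooKeeper zk, String basePath) {
+    this.zk = zk;
+    this.basePath = basePath;
+  }
+
+  // Reads the current list of services and re-registers this watcher in the same call.
+  public List<String> readServices() throws Exception {
+    return zk.getChildren(basePath, this);
+  }
+
+  @Override
+  public void process(WatchedEvent event) {
+    try {
+      // The watch is re-set as part of the read, before the data is used.
+      List<String> services = readServices();
+      System.out.println("Active services: " + services);
+    } catch (Exception e) {
+      e.printStackTrace();
+    }
+  }
+}
+```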
 
 
-#### POLL
+### Poll
 
-Another approach is for the client to periodically read the zookeeper path and get the list of services.
+Another approach is for the client to periodically read the ZooKeeper path and get the list of services.
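+
+A polling client can be as simple as a scheduled task that re-reads the path; here is a rough sketch (the interval and path are illustrative):
+
+```
+import java.util.List;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.TimeUnit;
+import org.apache.zookeeper.ZooKeeper;
+
+final ZooKeeper zk = new ZooKeeper("localhost:2199", 30000, null);
+ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
+scheduler.scheduleAtFixedRate(new Runnable() {
+  @Override
+  public void run() {
+    try {
+      // No watch is set; the list is simply re-read every 30 seconds.
+      List<String> services = zk.getChildren("/services/myService", false);
+      System.out.println("Active services: " + services);
+    } catch (Exception e) {
+      e.printStackTrace();
+    }
+  }
+}, 0, 30, TimeUnit.SECONDS);
+```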
 
+Both approaches have pros and cons; for example, setting a watch might trigger a herd effect if there are a large number of clients. This is problematic, especially when servers are starting up.
+But the advantage of setting watches is that clients are immediately notified of a change, which is not true in the case of polling.
+In some cases, having both watches and polls makes sense; a watch allows one to get notifications as soon as possible, while a poll provides a safety net if a watch event is missed because of a code bug or because ZooKeeper fails to notify.
 
-Both approaches have pros and cons, for example setting a watch might trigger herd effect if there are large number of clients. This is worst especially when servers are starting up. 
-But good thing about setting watch is that clients are immediately notified of a change which is not true in case of polling. 
-In some cases, having both WATCH and POLL makes sense, WATCH allows one to get notifications as soon as possible while POLL provides a safety net if a watch event is missed because of code bug or zookeeper fails to notify.
+### Other Developer Considerations
+* What happens when the ZooKeeper session expires? All the watches and ephemeral nodes previously added or created by this server are lost. One needs to add the watches again, recreate the ephemeral nodes, and so on.
+* Due to network issues or Java GC pauses, session expiry might happen again and again; this phenomenon is known as flapping. It\'s important for the server to detect this and deregister itself (a rough sketch follows below).
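+
+Below is a rough sketch of detecting flapping by counting session expirations within a time window; the threshold and window are illustrative, and as noted later, Helix itself provides this detection for you.
+
+```
+import java.util.concurrent.atomic.AtomicInteger;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+
+public class FlappingDetector implements Watcher {
+  private static final int MAX_EXPIRIES = 5;            // illustrative threshold
+  private static final long WINDOW_MS = 5 * 60 * 1000L; // 5-minute window
+
+  private final AtomicInteger expiryCount = new AtomicInteger();
+  private volatile long windowStart = System.currentTimeMillis();
+
+  @Override
+  public void process(WatchedEvent event) {
+    if (event.getState() == Event.KeeperState.Expired) {
+      long now = System.currentTimeMillis();
+      if (now - windowStart > WINDOW_MS) {
+        windowStart = now;
+        expiryCount.set(0);
+      }
+      if (expiryCount.incrementAndGet() >= MAX_EXPIRIES) {
+        // Too many expiries in a short period: this node is flapping.
+        deregisterSelf();
+      }
+    }
+  }
+
+  private void deregisterSelf() {
+    // Application-specific: stop re-creating the ephemeral node and stop serving traffic.
+  }
+}
+```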
 
-##### Other important scenarios to take care of
-* What happens when zookeeper session expires. All the watches/ephemeral nodes previously added/created by this server are lost. 
-One needs to add the watches again , recreate the ephemeral nodes etc.
-* Due to network issues or java GC pauses session expiry might happen again and again also known as flapping. Its important for the server to detect this and deregister itself.
+### Other Operational Considerations
+* What if the node is behaving badly? One might kill the server, but then one loses the ability to debug it. It would be nice to have the ability to mark a server as disabled so that clients know the node is disabled and will not contact it.
 
-##### Other operational things to consider
-* What if the node is behaving badly, one might kill the server but will lose the ability to debug. 
-It would be nice to have the ability to mark a server as disabled and clients know that a node is disabled and will not contact that node.
- 
-#### Configuration ownership
+### Configuration Ownership
 
-This is an important aspect that is often ignored in the initial stages of your development. In common, service discovery pattern means that servers start up with some configuration and then simply puts its configuration/metadata in zookeeper. While this works well in the beginning, 
-configuration management becomes very difficult since the servers themselves are statically configured. Any change in server configuration implies restarting of the server. Ideally, it will be nice to have the ability to change configuration dynamically without having to restart a server. 
+This is an important aspect that is often ignored in the initial stages of your development. Typically, the service discovery pattern means that servers start up with some configuration, which they simply put into ZooKeeper. While this works well in the beginning, configuration management becomes very difficult since the servers themselves are statically configured. Any change in server configuration implies restarting the server. Ideally, it would be nice to have the ability to change configuration dynamically without having to restart a server.
 
-Ideally you want a hybrid solution, a node starts with minimal configuration and gets the rest of configuration from zookeeper.
+Ideally you want a hybrid solution: a node starts with minimal configuration and gets the rest of its configuration from ZooKeeper.
 
-h3. How to use Helix to achieve this
+### Using Helix for Service Discovery
 
-Even though Helix has higher level abstraction in terms of statemachine, constraints and objectives, 
-service discovery is one of things that existed since we started. 
-The controller uses the exact mechanism we described above to discover when new servers join the cluster.
-We create these znodes under /CLUSTERNAME/LIVEINSTANCES. 
-Since at any time there is only one controller, we use ZK watch to track the liveness of a server.
+Even though Helix has a higher-level abstraction in terms of state machines, constraints, and objectives, service discovery is one of the things that has been a prevalent use case from the start.
+The controller uses the exact mechanism we described above to discover when new servers join the cluster. We create these ZNodes under /CLUSTERNAME/LIVEINSTANCES.
+Since at any time there is only one controller, we use a ZK watch to track the liveness of a server.
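+
+As a minimal sketch of tapping into this from a client (the cluster name, instance name, and ZooKeeper address below are placeholders), one can connect as a spectator and listen for live-instance changes:
+
+```
+import java.util.List;
+import org.apache.helix.HelixManager;
+import org.apache.helix.HelixManagerFactory;
+import org.apache.helix.InstanceType;
+import org.apache.helix.LiveInstanceChangeListener;
+import org.apache.helix.NotificationContext;
+import org.apache.helix.model.LiveInstance;
+
+HelixManager manager = HelixManagerFactory.getZKHelixManager(
+    "MYCLUSTER", "serviceDiscoveryClient", InstanceType.SPECTATOR, "localhost:2199");
+manager.connect();
+
+// Receive a callback whenever /MYCLUSTER/LIVEINSTANCES changes.
+manager.addLiveInstanceChangeListener(new LiveInstanceChangeListener() {
+  @Override
+  public void onLiveInstanceChange(List<LiveInstance> liveInstances, NotificationContext context) {
+    for (LiveInstance instance : liveInstances) {
+      System.out.println("Live instance: " + instance.getInstanceName());
+    }
+  }
+});
+```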
 
-This recipe, simply demonstrate how one can re-use that part for implementing service discovery. This demonstrates multiple MODE's of service discovery
+This recipe simply demonstrates how one can re-use that part to implement service discovery. It supports multiple modes of service discovery:
 
 * POLL: The client reads from ZooKeeper at regular intervals (30 seconds). Use this if you have hundreds of clients
-* WATCH: The client sets up watcher and gets notified of the changes. Use this if you have 10's of clients.
-* NONE: This does neither of the above, but reads directly from zookeeper when ever needed.
+* WATCH: The client sets up a watcher and gets notified of changes. Use this if you have tens of clients
+* NONE: This does neither of the above, but reads directly from ZooKeeper whenever needed
 
-Helix provides these additional features compared to other implementations available else where
+Helix provides these additional features compared to other implementations available elsewhere:
 
-* It has the concept of disabling a node which means that a badly behaving node, can be disabled using helix admin api.
-* It automatically detects if a node connects/disconnects from zookeeper repeatedly and disables the node.
-* Configuration management  
-    * Allows one to set configuration via admin api at various granulaties like cluster, instance, resource, partition 
-    * Configuration can be dynamically changed.
-    * Notifies the server when configuration changes.
+* It has the concept of disabling a node which means that a badly behaving node can be disabled using the Helix admin API
+* It automatically detects if a node connects/disconnects from ZooKeeper repeatedly and disables the node
+* Configuration management
+    * Allows one to set configuration via the admin API at various granularities, such as cluster, instance, resource, and partition
+    * Configurations can be dynamically changed
+    * The server is notified when configurations change (see the sketch below)
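+
+As a rough example of the configuration management piece, here is how one might set a participant-level property through the admin API; the scope, keys, and values are illustrative only:
+
+```
+import java.util.HashMap;
+import java.util.Map;
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.manager.zk.ZKHelixAdmin;
+import org.apache.helix.model.HelixConfigScope;
+import org.apache.helix.model.HelixConfigScope.ConfigScopeProperty;
+import org.apache.helix.model.builder.HelixConfigScopeBuilder;
+
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+
+// Scope the config to a single participant; cluster, resource, and partition
+// scopes are built the same way.
+HelixConfigScope scope = new HelixConfigScopeBuilder(ConfigScopeProperty.PARTICIPANT)
+    .forCluster("MYCLUSTER").forParticipant("host.x.y.z_12000").build();
+
+Map<String, String> properties = new HashMap<String, String>();
+properties.put("maxConnections", "100"); // illustrative key/value
+admin.setConfig(scope, properties);
+```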
 
 
-##### checkout and build
+### Checkout and Build
 
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
@@ -95,19 +89,19 @@ cd recipes/service-discovery/target/service-discovery-pkg/bin
 chmod +x *
 ```
 
-##### start zookeeper
+### Start ZooKeeper
 
 ```
 ./start-standalone-zookeeper 2199
 ```
 
-#### Run the demo
+### Run the Demo
 
 ```
 ./service-discovery-demo.sh
 ```
 
-#### Output
+### Output
 
 ```
 START:Service discovery demo mode:WATCH
@@ -186,6 +180,4 @@ START:Service discovery demo mode:NONE
 	Registering service:host.x.y.z_12000
 END:Service discovery demo mode:NONE
 =============================================
-
 ```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/recipes/task_dag_execution.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/task_dag_execution.md b/site-releases/trunk/src/site/markdown/recipes/task_dag_execution.md
index f0474e4..32faa1f 100644
--- a/site-releases/trunk/src/site/markdown/recipes/task_dag_execution.md
+++ b/site-releases/trunk/src/site/markdown/recipes/task_dag_execution.md
@@ -17,20 +17,18 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-# Distributed task execution
+Distributed Task Execution
+--------------------------
 
-
-This recipe is intended to demonstrate how task dependencies can be modeled using primitives provided by Helix. A given task can be run with desired parallelism and will start only when up-stream dependencies are met. The demo executes the task DAG described below using 10 workers. Although the demo starts the workers as threads, there is no requirement that all the workers need to run in the same process. In reality, these workers run on many different boxes on a cluster.  When worker fails, Helix takes care of 
-re-assigning a failed task partition to a new worker. 
+This recipe is intended to demonstrate how task dependencies can be modeled using primitives provided by Helix. A given task can be run with the desired amount of parallelism and will start only when upstream dependencies are met. The demo executes the task DAG described below using 10 workers. Although the demo starts the workers as threads, there is no requirement that all the workers run in the same process. In reality, these workers run on many different boxes in a cluster. When a worker fails, Helix takes care of re-assigning the failed task partition to a new worker.
 
 Redis is used as a result store. Any other suitable implementation for TaskResultStore can be plugged in.
 
-### Workflow 
-
+### Workflow
 
-#### Input 
+#### Input
 
-10000 impression events and around 100 click events are pre-populated in task result store (redis). 
+10000 impression events and around 100 click events are pre-populated in the task result store (Redis).
 
 * **ImpEvent**: format: id,isFraudulent,country,gender
 
@@ -55,45 +53,44 @@ Redis is used as a result store. Any other suitable implementation for TaskResul
+ **report**: Reads from all aggregates generated by previous stages and prints them. Depends on: **impCountsByGender, impCountsByCountry, clickCountsByGender, clickCountsByCountry**
 
 
-### Creating DAG
+### Creating a DAG
 
-Each stage is represented as a Node along with the upstream dependency and desired parallelism.  Each stage is modelled as a resource in Helix using OnlineOffline state model. As part of Offline to Online transition, we watch the external view of upstream resources and wait for them to transition to online state. See Task.java for additional info.
+Each stage is represented as a Node along with the upstream dependency and desired parallelism. Each stage is modeled as a resource in Helix using the OnlineOffline state model. As part of an Offline to Online transition, we watch the external view of upstream resources and wait for them to transition to the online state (see the sketch after the DAG definition below). See Task.java for additional info.
 
 ```
-
-  Dag dag = new Dag();
-  dag.addNode(new Node("filterImps", 10, ""));
-  dag.addNode(new Node("filterClicks", 5, ""));
-  dag.addNode(new Node("impClickJoin", 10, "filterImps,filterClicks"));
-  dag.addNode(new Node("impCountsByGender", 10, "filterImps"));
-  dag.addNode(new Node("impCountsByCountry", 10, "filterImps"));
-  dag.addNode(new Node("clickCountsByGender", 5, "impClickJoin"));
-  dag.addNode(new Node("clickCountsByCountry", 5, "impClickJoin"));		
-  dag.addNode(new Node("report",1,"impCountsByGender,impCountsByCountry,clickCountsByGender,clickCountsByCountry"));
-
-
+Dag dag = new Dag();
+dag.addNode(new Node("filterImps", 10, ""));
+dag.addNode(new Node("filterClicks", 5, ""));
+dag.addNode(new Node("impClickJoin", 10, "filterImps,filterClicks"));
+dag.addNode(new Node("impCountsByGender", 10, "filterImps"));
+dag.addNode(new Node("impCountsByCountry", 10, "filterImps"));
+dag.addNode(new Node("clickCountsByGender", 5, "impClickJoin"));
+dag.addNode(new Node("clickCountsByCountry", 5, "impClickJoin"));
+dag.addNode(new Node("report",1,"impCountsByGender,impCountsByCountry,clickCountsByGender,clickCountsByCountry"));
 ```
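+
+The waiting-on-upstream logic lives in the recipe's Task.java; the following is only a rough sketch of that idea under assumed names (`parentIds` would come from the Node definition above), not the recipe's actual code. The transition handler polls the external view of each upstream resource until every partition reports ONLINE.
+
+```
+@Transition(to = "ONLINE", from = "OFFLINE")
+public void onBecomeOnlineFromOffline(Message message, NotificationContext context) throws Exception {
+  HelixManager manager = context.getManager();
+  HelixAdmin admin = manager.getClusterManagmentTool();
+  for (String upstream : parentIds) {
+    while (!allPartitionsOnline(admin, manager.getClusterName(), upstream)) {
+      Thread.sleep(1000); // poll until the upstream stage is fully online
+    }
+  }
+  // Upstream dependencies are satisfied; start processing this task partition.
+}
+
+private boolean allPartitionsOnline(HelixAdmin admin, String cluster, String resource) {
+  ExternalView view = admin.getResourceExternalView(cluster, resource);
+  if (view == null) {
+    return false;
+  }
+  for (String partition : view.getPartitionSet()) {
+    for (String state : view.getStateMap(partition).values()) {
+      if (!"ONLINE".equals(state)) {
+        return false;
+      }
+    }
+  }
+  return true;
+}
+```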
 
-### DEMO
+### Demo
 
 In order to run the demo, use the following steps
 
 See http://redis.io/topics/quickstart on how to install redis server
 
 ```
-
 Start redis e.g:
 ./redis-server --port 6379
 
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
 cd recipes/task-execution
 mvn clean install package -DskipTests
 cd target/task-execution-pkg/bin
 chmod +x task-execution-demo.sh
-./task-execution-demo.sh 2181 localhost 6379 
+./task-execution-demo.sh 2181 localhost 6379
 
 ```
 
+Here\'s a visual representation of the DAG.
+
 ```
 
 
@@ -130,7 +127,7 @@ chmod +x task-execution-demo.sh
 
 (credit for above ascii art: http://www.asciiflow.com)
 
-### OUTPUT
+#### Output
 
 ```
 Done populating dummy data
@@ -198,7 +195,4 @@ Impression counts per gender
 {F=3325, UNKNOWN=3259, M=3296}
 Click counts per gender
 {F=33, UNKNOWN=32, M=35}
-
-
 ```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/recipes/user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/user_def_rebalancer.md b/site-releases/trunk/src/site/markdown/recipes/user_def_rebalancer.md
index 68fd954..1202724 100644
--- a/site-releases/trunk/src/site/markdown/recipes/user_def_rebalancer.md
+++ b/site-releases/trunk/src/site/markdown/recipes/user_def_rebalancer.md
@@ -16,17 +16,18 @@ KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
 -->
+
 Lock Manager with a User-Defined Rebalancer
 -------------------------------------------
 Helix is able to compute node preferences and state assignments automatically using general-purpose algorithms. In many cases, a distributed system implementer may choose to instead define a customized approach to computing the location of replicas, the state mapping, or both in response to the addition or removal of participants. The following is an implementation of the [Distributed Lock Manager](./lock_manager.html) that includes a user-defined rebalancer.
 
-### Define the cluster and locks
+### Define the Cluster and Resource
 
 The YAML file below fully defines the cluster and the locks. A lock can be in one of two states: locked and unlocked. Transitions can happen in either direction, and the locked state is preferred. A resource in this example is the entire collection of locks to distribute. A partition is mapped to a lock; in this case that means there are 12 locks. These 12 locks will be distributed across 3 nodes. The constraints indicate that only one replica of a lock can be in the locked state at any given time. These locks can each only have a single holder, defined by a replica count of 1.
 
-Notice the rebalancer section of the definition. The mode is set to USER_DEFINED and the class name refers to the plugged-in rebalancer implementation that inherits from [HelixRebalancer](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html). This implementation is called whenever the state of the cluster changes, as is the case when participants are added or removed from the system.
+Notice the rebalancer section of the definition. The mode is set to USER_DEFINED and the class name refers to the plugged-in rebalancer implementation that inherits from [HelixRebalancer](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html). This implementation is called whenever the state of the cluster changes, as is the case when participants are added or removed from the system.
 
-Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/resources/lock-manager-config.yaml
+Location: `incubator-helix/recipes/user-defined-rebalancer/src/main/resources/lock-manager-config.yaml`
 
 ```
 clusterName: lock-manager-custom-rebalancer # unique name for the cluster
@@ -92,28 +93,32 @@ InputStream input =
 YAMLClusterSetup.YAMLClusterConfig config = setup.setupCluster(input);
 ```
 
-### Write a rebalancer
-Below is a full implementation of a rebalancer that extends [HelixRebalancer](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html). In this case, it simply throws out the previous resource assignment, computes the target node for as many partition replicas as can hold a lock in the LOCKED state (in this example, one), and assigns them the LOCKED state (which is at the head of the state preference list). Clearly a more robust implementation would likely examine the current ideal state to maintain current assignments, and the full state list to handle models more complicated than this one. However, for a simple lock holder implementation, this is sufficient.
+### Write a Rebalancer
+Below is a full implementation of a rebalancer that extends [HelixRebalancer](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html). In this case, it simply throws out the previous resource assignment, computes the target node for as many partition replicas as can hold a lock in the LOCKED state (in this example, one), and assigns them the LOCKED state (which is at the head of the state preference list). Clearly a more robust implementation would likely examine the current ideal state to maintain current assignments, and the full state list to handle models more complicated than this one. However, for a simple lock holder implementation, this is sufficient.
 
-Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/LockManagerRebalancer.java
+Location: `incubator-helix/recipes/user-defined-rebalancer/src/main/java/org/apache/helix/userdefinedrebalancer/LockManagerRebalancer.java`
 
 ```
 @Override
-public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConfig, Cluster cluster,
-    ResourceCurrentState currentState) {
-  // Get the rebalcancer context (a basic partitioned one)
-  PartitionedRebalancerContext context = rebalancerConfig.getRebalancerContext(
-      PartitionedRebalancerContext.class);
+public void init(HelixManager manager, ControllerContextProvider contextProvider) {
+  // do nothing; this rebalancer is independent of the manager
+}
+
+@Override
+public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConfig,
+    ResourceAssignment prevAssignment, Cluster cluster, ResourceCurrentState currentState) {
+  // get a typed config
+  PartitionedRebalancerConfig config = PartitionedRebalancerConfig.from(rebalancerConfig);
 
   // Initialize an empty mapping of locks to participants
-  ResourceAssignment assignment = new ResourceAssignment(context.getResourceId());
+  ResourceAssignment assignment = new ResourceAssignment(config.getResourceId());
 
   // Get the list of live participants in the cluster
-  List<ParticipantId> liveParticipants = new ArrayList<ParticipantId>(
-      cluster.getLiveParticipantMap().keySet());
+  List<ParticipantId> liveParticipants =
+      new ArrayList<ParticipantId>(cluster.getLiveParticipantMap().keySet());
 
   // Get the state model (should be a simple lock/unlock model) and the highest-priority state
-  StateModelDefId stateModelDefId = context.getStateModelDefId();
+  StateModelDefId stateModelDefId = config.getStateModelDefId();
   StateModelDefinition stateModelDef = cluster.getStateModelMap().get(stateModelDefId);
   if (stateModelDef.getStatesPriorityList().size() < 1) {
     LOG.error("Invalid state model definition. There should be at least one state.");
@@ -139,7 +144,7 @@ public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConf
   // This assumes a simple lock-unlock model where the only state of interest is which nodes have
   // acquired each lock.
   int i = 0;
-  for (PartitionId partition : context.getPartitionSet()) {
+  for (PartitionId partition : config.getPartitionSet()) {
     Map<ParticipantId, State> replicaMap = new HashMap<ParticipantId, State>();
     for (int j = i; j < i + lockHolders; j++) {
       int participantIndex = j % liveParticipants.size();
@@ -156,10 +161,10 @@ public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConf
 }
 ```
 
-### Start up the participants
+### Start up the Participants
 Here is a lock class based on the newly defined lock-unlock state model so that the participant can receive callbacks on state transitions.
 
-Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/Lock.java
+Location: `incubator-helix/recipes/user-defined-rebalancer/src/main/java/org/apache/helix/userdefinedrebalancer/Lock.java`
 
 ```
 public class Lock extends StateModel {
@@ -183,7 +188,7 @@ public class Lock extends StateModel {
 
 Here is the factory to make the Lock class accessible.
 
-Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/LockFactory.java
+Location: `incubator-helix/recipes/user-defined-rebalancer/src/main/java/org/apache/helix/userdefinedrebalancer/LockFactory.java`
 
 ```
 public class LockFactory extends StateModelFactory<Lock> {
@@ -205,7 +210,7 @@ participantManager.getStateMachineEngine().registerStateModelFactory(stateModelN
 participantManager.connect();
 ```
 
-### Start up the controller
+### Start up the Controller
 
 ```
 controllerManager =
@@ -213,13 +218,13 @@ controllerManager =
         HelixControllerMain.STANDALONE);
 ```
 
-### Try it out
-#### Building 
+### Try It Out
+
 ```
 git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
 cd incubator-helix
 mvn clean install package -DskipTests
-cd recipes/user-rebalanced-lock-manager/target/user-rebalanced-lock-manager-pkg/bin
+cd recipes/user-defined-rebalancer/target/user-defined-rebalancer-pkg/bin
 chmod +x *
 ./lock-manager-demo.sh
 ```
@@ -227,7 +232,7 @@ chmod +x *
 #### Output
 
 ```
-./lock-manager-demo 
+./lock-manager-demo
 STARTING localhost_12002
 STARTING localhost_12001
 STARTING localhost_12003
@@ -282,4 +287,4 @@ lock-group_8  localhost_12002
 lock-group_9  localhost_12002
 ```
 
-Notice that the lock assignment directly follows the assignment generated by the user-defined rebalancer both initially and after a participant is removed from the system.
\ No newline at end of file
+Notice that the lock assignment directly follows the assignment generated by the user-defined rebalancer both initially and after a participant is removed from the system.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_accessors.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_accessors.md b/site-releases/trunk/src/site/markdown/tutorial_accessors.md
index bde50d2..60b698f 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_accessors.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_accessors.md
@@ -21,7 +21,7 @@ under the License.
   <title>Tutorial - Logical Accessors</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Logical Accessors
+## [Helix Tutorial](./Tutorial.html): Logical Accessors
 
 Helix constructs follow a logical hierarchy. A cluster contains participants, which serve logical resources. Each resource can be divided into partitions, which themselves can be replicated. Helix now supports configuring and modifying clusters programmatically in a hierarchical way using logical accessors.
 
@@ -41,31 +41,31 @@ ParticipantConfig participantConfig = new ParticipantConfig.Builder(participantI
 
 #### Configure a Resource
 
-##### RebalancerContext
-A Resource is essentially a combination of a RebalancerContext and a UserConfig. A [RebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/RebalancerContext.html) consists of all the key properties required to rebalance a resource, including how it is partitioned and replicated, and what state model it follows. Most Helix resources will make use of a [PartitionedRebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/PartitionedRebalancerContext.html), which is a RebalancerContext for resources that are partitioned.
+##### RebalancerConfig
+A Resource is essentially a combination of a RebalancerConfig and a UserConfig. A [RebalancerConfig](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/config/RebalancerConfig.html) consists of all the key properties required to rebalance a resource, including how it is partitioned and replicated, and what state model it follows. Most Helix resources will make use of a [PartitionedRebalancerConfig](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/config/PartitionedRebalancerConfig.html), which is a RebalancerConfig for resources that are partitioned.
 
-Recall that there are four [rebalancing modes](./tutorial_rebalance.html) that Helix provides, and so Helix also provides the following subclasses for PartitionedRebalancerContext:
+Recall that there are four [rebalancing modes](./tutorial_rebalance.html) that Helix provides, and so Helix also provides the following subclasses for PartitionedRebalancerConfig:
 
-* [FullAutoRebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/FullAutoRebalancerContext.html) for FULL_AUTO mode.
-* [SemiAutoRebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/SemiAutoRebalancerContext.html) for SEMI_AUTO mode. This class allows a user to specify "preference lists" to indicate where each partition should ideally be served
-* [CustomRebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/CustomRebalancerContext.html) for CUSTOMIZED mode. This class allows a user tp specify "preference maps" to indicate the location and state for each partition replica.
+* [FullAutoRebalancerConfig](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/config/FullAutoRebalancerConfig.html) for FULL_AUTO mode.
+* [SemiAutoRebalancerConfig](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/config/SemiAutoRebalancerConfig.html) for SEMI_AUTO mode. This class allows a user to specify "preference lists" to indicate where each partition should ideally be served.
+* [CustomRebalancerConfig](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/config/CustomRebalancerConfig.html) for CUSTOMIZED mode. This class allows a user to specify "preference maps" to indicate the location and state for each partition replica.
 
-Helix also supports arbitrary subclasses of PartitionedRebalancerContext and even arbitrary implementations of RebalancerContext for applications that need a user-defined approach for rebalancing. For more, see [User-Defined Rebalancing](./tutorial_user_def_rebalancer.html)
+Helix also supports arbitrary subclasses of PartitionedRebalancerConfig and even arbitrary implementations of RebalancerConfig for applications that need a user-defined approach for rebalancing. For more, see [User-Defined Rebalancing](./tutorial_user_def_rebalancer.html)
 
 ##### In Action
 
-Here is an example of a configured resource with a rebalancer context for FULL_AUTO mode and two partitions:
+Here is an example of a configured resource with a rebalancer config for FULL_AUTO mode and two partitions:
 
 ```
 ResourceId resourceId = ResourceId.from("sampleResource");
 StateModelDefinition stateModelDef = getStateModelDef();
 Partition partition1 = new Partition(PartitionId.from(resourceId, "1"));
 Partition partition2 = new Partition(PartitionId.from(resourceId, "2"));
-FullAutoRebalancerContext rebalanceContext =
-    new FullAutoRebalancerContext.Builder(resourceId).replicaCount(1).addPartition(partition1)
+FullAutoRebalancerConfig rebalanceConfig =
+    new FullAutoRebalancerConfig.Builder(resourceId).replicaCount(1).addPartition(partition1)
         .addPartition(partition2).stateModelDefId(stateModelDef.getStateModelDefId()).build();
 ResourceConfig resourceConfig =
-    new ResourceConfig.Builder(resourceId).rebalancerContext(rebalanceContext).build();
+    new ResourceConfig.Builder(resourceId).rebalancerConfig(rebalanceConfig).build();
 ```
 
 #### Add the Cluster
@@ -122,4 +122,4 @@ Cluster cluster = clusterAccessor.readCluster();
 
 ### Atomic Accessors
 
-Helix also includes versions of ClusterAccessor, ParticipantAccessor, and ResourceAccessor that can complete operations atomically relative to one another. The specific semantics of the atomic operations are included in the Javadocs. These atomic classes should be used sparingly and only in cases where contention can adversely affect the correctness of a Helix-based cluster. For most deployments, this is not the case, and using these classes will cause a degradation in performance. However, the interface for all atomic accessors mirrors that of the non-atomic accessors.
\ No newline at end of file
+Helix also includes versions of ClusterAccessor, ParticipantAccessor, and ResourceAccessor that can complete operations atomically relative to one another. The specific semantics of the atomic operations are included in the Javadocs. These atomic classes should be used sparingly and only in cases where contention can adversely affect the correctness of a Helix-based cluster. For most deployments, this is not the case, and using these classes will cause a degradation in performance. However, the interface for all atomic accessors mirrors that of the non-atomic accessors.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_admin.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_admin.md b/site-releases/trunk/src/site/markdown/tutorial_admin.md
index f269a4a..98175f8 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_admin.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_admin.md
@@ -21,45 +21,45 @@ under the License.
   <title>Tutorial - Admin Operations</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Admin Operations
+## [Helix Tutorial](./Tutorial.html): Admin Operations
 
-Helix provides a set of admin api for cluster management operations. They are supported via:
+Helix provides a set of admin APIs for cluster management operations. They are supported via:
 
-* _Java API_
-* _Commandline interface_
-* _REST interface via helix-admin-webapp_
+* Java API
+* Command Line Interface
+* REST Interface via helix-admin-webapp
 
 ### Java API
 See interface [_org.apache.helix.HelixAdmin_](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/HelixAdmin.html)
 
-### Command-line interface
-The command-line tool comes with helix-core package:
+### Command Line Interface
+The command line tool comes with the helix-core package:
 
-Get the command-line tool:
+Get the command line tool:
 
-``` 
-  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-  - cd incubator-helix
-  - ./build
-  - cd helix-core/target/helix-core-pkg/bin
-  - chmod +x *.sh
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+./build
+cd helix-core/target/helix-core-pkg/bin
+chmod +x *.sh
 ```
 
 Get help:
 
 ```
-  - ./helix-admin.sh --help
+./helix-admin.sh --help
 ```
 
 All other commands have this form:
 
 ```
-  ./helix-admin.sh --zkSvr <ZookeeperServerAddress> <command> <parameters>
+./helix-admin.sh --zkSvr <ZookeeperServerAddress> <command> <parameters>
 ```
 
-Admin commands and brief description:
+#### Supported Commands
 
-| Command syntax | Description |
+| Command Syntax | Description |
 | -------------- | ----------- |
 | _\-\-activateCluster \<clusterName controllerCluster true/false\>_ | Enable/disable a cluster in distributed controller mode |
 | _\-\-addCluster \<clusterName\>_ | Add a new cluster |
@@ -102,17 +102,17 @@ Admin commands and brief description:
 | _\-\-swapInstance \<clusterName oldInstance newInstance\>_ | Swap an old instance with a new instance |
 | _\-\-zkSvr \<ZookeeperServerAddress\>_ | Provide zookeeper address |
 
-### REST interface
+### REST Interface
 
 The REST interface comes with the helix-admin-webapp package:
 
-``` 
-  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-  - cd incubator-helix 
-  - ./build
-  - cd helix-admin-webapp/target/helix-admin-webapp-pkg/bin
-  - chmod +x *.sh
-  - ./run-rest-admin.sh --zkSvr <zookeeperAddress> --port <port> // make sure zookeeper is running
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+./build
+cd helix-admin-webapp/target/helix-admin-webapp-pkg/bin
+chmod +x *.sh
+./run-rest-admin.sh --zkSvr <zookeeperAddress> --port <port> // make sure ZooKeeper is running
 ```
 
 #### URL and supported methods
@@ -121,75 +121,75 @@ The REST interface comes wit helix-admin-webapp package:
     * List all clusters
 
     ```
-      curl http://localhost:8100/clusters
+    curl http://localhost:8100/clusters
     ```
 
     * Add a cluster
-    
+
     ```
-      curl -d 'jsonParameters={"command":"addCluster","clusterName":"MyCluster"}' -H "Content-Type: application/json" http://localhost:8100/clusters
+    curl -d 'jsonParameters={"command":"addCluster","clusterName":"MyCluster"}' -H "Content-Type: application/json" http://localhost:8100/clusters
     ```
 
 * _/clusters/{clusterName}_
     * List cluster information
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster
+    curl http://localhost:8100/clusters/MyCluster
     ```
 
     * Enable/disable a cluster in distributed controller mode
-    
+
     ```
-      curl -d 'jsonParameters={"command":"activateCluster","grandCluster":"MyControllerCluster","enabled":"true"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster
+    curl -d 'jsonParameters={"command":"activateCluster","grandCluster":"MyControllerCluster","enabled":"true"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster
     ```
 
     * Remove a cluster
-    
+
     ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster
+    curl -X DELETE http://localhost:8100/clusters/MyCluster
     ```
-    
+
 * _/clusters/{clusterName}/resourceGroups_
     * List all resources in a cluster
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/resourceGroups
+    curl http://localhost:8100/clusters/MyCluster/resourceGroups
     ```
-    
+
     * Add a resource to cluster
-    
+
     ```
-      curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
+    curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
     ```
 
 * _/clusters/{clusterName}/resourceGroups/{resourceName}_
     * List resource information
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
     ```
-    
+
     * Drop a resource
-    
+
     ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    curl -X DELETE http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
     ```
 
     * Reset all erroneous partitions of a resource
-    
+
     ```
-      curl -d 'jsonParameters={"command":"resetResource"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    curl -d 'jsonParameters={"command":"resetResource"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
     ```
 
 * _/clusters/{clusterName}/resourceGroups/{resourceName}/idealState_
     * Rebalance a resource
-    
+
     ```
-      curl -d 'jsonParameters={"command":"rebalance","replicas":"3"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    curl -d 'jsonParameters={"command":"rebalance","replicas":"3"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
     ```
 
     * Add an ideal state
-    
+
     ```
     echo jsonParameters={
     "command":"addIdealState"
@@ -215,193 +215,192 @@ The REST interface comes wit helix-admin-webapp package:
     > newIdealState.json
     curl -d @'./newIdealState.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
     ```
-    
+
     * Add resource property
-    
+
     ```
-      curl -d 'jsonParameters={"command":"addResourceProperty","REBALANCE_TIMER_PERIOD":"500"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    curl -d 'jsonParameters={"command":"addResourceProperty","REBALANCE_TIMER_PERIOD":"500"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
     ```
-    
+
 * _/clusters/{clusterName}/resourceGroups/{resourceName}/externalView_
     * Show resource external view
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/externalView
+    curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/externalView
     ```
 * _/clusters/{clusterName}/instances_
     * List all instances
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/instances
+    curl http://localhost:8100/clusters/MyCluster/instances
     ```
 
     * Add an instance
-    
+
     ```
     curl -d 'jsonParameters={"command":"addInstance","instanceNames":"localhost_1001"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
     ```
-    
+
     * Swap an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"swapInstance","oldInstance":"localhost_1001", "newInstance":"localhost_1002"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
+    curl -d 'jsonParameters={"command":"swapInstance","oldInstance":"localhost_1001", "newInstance":"localhost_1002"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
     ```
 * _/clusters/{clusterName}/instances/{instanceName}_
     * Show instance information
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
-    
+
     * Enable/disable an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"enableInstance","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -d 'jsonParameters={"command":"enableInstance","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
 
     * Drop an instance
-    
+
     ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -X DELETE http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
-    
+
     * Disable/enable partitions on an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"enablePartition","resource": "MyDB","partition":"MyDB_0",  "enabled" : "false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -d 'jsonParameters={"command":"enablePartition","resource": "MyDB","partition":"MyDB_0",  "enabled" : "false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
-    
+
     * Reset an erroneous partition on an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"resetPartition","resource": "MyDB","partition":"MyDB_0"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -d 'jsonParameters={"command":"resetPartition","resource": "MyDB","partition":"MyDB_0"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
 
     * Reset all erroneous partitions on an instance
-    
+
     ```
-      curl -d 'jsonParameters={"command":"resetInstance"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    curl -d 'jsonParameters={"command":"resetInstance"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
     ```
 
 * _/clusters/{clusterName}/configs_
     * Get user cluster level config
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/configs/cluster
+    curl http://localhost:8100/clusters/MyCluster/configs/cluster
     ```
-    
+
     * Set user cluster level config
-    
+
     ```
-      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
+    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
     ```
 
     * Remove user cluster level config
-    
+
     ```
     curl -d 'jsonParameters={"command":"removeConfig","configs":"key1,key2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
     ```
-    
+
     * Get/set/remove user participant level config
-    
+
     ```
-      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
+    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
     ```
-    
+
     * Get/set/remove resource level config
-    
+
     ```
     curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/resource/MyDB
     ```
 
 * _/clusters/{clusterName}/controller_
     * Show controller information
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/Controller
+    curl http://localhost:8100/clusters/MyCluster/Controller
     ```
-    
+
     * Enable/disable cluster
-    
+
     ```
-      curl -d 'jsonParameters={"command":"enableCluster","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/Controller
+    curl -d 'jsonParameters={"command":"enableCluster","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/Controller
     ```
 
 * _/zkPath/{path}_
     * Get information for zookeeper path
-    
+
     ```
-      curl http://localhost:8100/zkPath/MyCluster
+    curl http://localhost:8100/zkPath/MyCluster
     ```
 
 * _/clusters/{clusterName}/StateModelDefs_
     * Show all state model definitions
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/StateModelDefs
+    curl http://localhost:8100/clusters/MyCluster/StateModelDefs
     ```
 
     * Add a state model definition
-    
-    ```
-      echo jsonParameters={
-        "command":"addStateModelDef"
-       }&newStateModelDef={
-          "id" : "OnlineOffline",
-          "simpleFields" : {
-            "INITIAL_STATE" : "OFFLINE"
-          },
-          "listFields" : {
-            "STATE_PRIORITY_LIST" : [ "ONLINE", "OFFLINE", "DROPPED" ],
-            "STATE_TRANSITION_PRIORITYLIST" : [ "OFFLINE-ONLINE", "ONLINE-OFFLINE", "OFFLINE-DROPPED" ]
-          },
-          "mapFields" : {
-            "DROPPED.meta" : {
-              "count" : "-1"
-            },
-            "OFFLINE.meta" : {
-              "count" : "-1"
-            },
-            "OFFLINE.next" : {
-              "DROPPED" : "DROPPED",
-              "ONLINE" : "ONLINE"
-            },
-            "ONLINE.meta" : {
-              "count" : "R"
-            },
-            "ONLINE.next" : {
-              "DROPPED" : "OFFLINE",
-              "OFFLINE" : "OFFLINE"
-            }
-          }
+
+    ```
+    echo jsonParameters={
+      "command":"addStateModelDef"
+    }&newStateModelDef={
+      "id" : "OnlineOffline",
+      "simpleFields" : {
+        "INITIAL_STATE" : "OFFLINE"
+      },
+      "listFields" : {
+        "STATE_PRIORITY_LIST" : [ "ONLINE", "OFFLINE", "DROPPED" ],
+        "STATE_TRANSITION_PRIORITYLIST" : [ "OFFLINE-ONLINE", "ONLINE-OFFLINE", "OFFLINE-DROPPED" ]
+      },
+      "mapFields" : {
+        "DROPPED.meta" : {
+          "count" : "-1"
+        },
+        "OFFLINE.meta" : {
+          "count" : "-1"
+        },
+        "OFFLINE.next" : {
+          "DROPPED" : "DROPPED",
+          "ONLINE" : "ONLINE"
+        },
+        "ONLINE.meta" : {
+          "count" : "R"
+        },
+        "ONLINE.next" : {
+          "DROPPED" : "OFFLINE",
+          "OFFLINE" : "OFFLINE"
         }
-        > newStateModelDef.json
-        curl -d @'./untitled.txt' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/StateModelDefs
+      }
+    }
+    > newStateModelDef.json
+    curl -d @'./newStateModelDef.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/StateModelDefs
     ```
 
 * _/clusters/{clusterName}/StateModelDefs/{stateModelDefName}_
     * Show a state model definition
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/StateModelDefs/OnlineOffline
+    curl http://localhost:8100/clusters/MyCluster/StateModelDefs/OnlineOffline
     ```
 
 * _/clusters/{clusterName}/constraints/{constraintType}_
     * Show all constraints
-    
+
     ```
-      curl http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT
+    curl http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT
     ```
 
     * Set a constraint
-    
+
     ```
-       curl -d 'jsonParameters={"constraintAttributes":"RESOURCE=MyDB,CONSTRAINT_VALUE=1"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    curl -d 'jsonParameters={"constraintAttributes":"RESOURCE=MyDB,CONSTRAINT_VALUE=1"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
     ```
-    
+
     * Remove a constraint
-    
+
     ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    curl -X DELETE http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
     ```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_controller.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_controller.md b/site-releases/trunk/src/site/markdown/tutorial_controller.md
index 1a4cc45..0957c39 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_controller.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_controller.md
@@ -21,18 +21,17 @@ under the License.
   <title>Tutorial - Controller</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Controller
+## [Helix Tutorial](./Tutorial.html): Controller
 
 Next, let\'s implement the controller.  This is the brain of the cluster.  Helix makes sure there is exactly one active controller running the cluster.
 
-### Start the Helix Agent
-
+### Start the Helix Controller
 
 It requires the following parameters:
- 
+
 * clusterId: A logical ID to represent the group of nodes
 * controllerId: A logical ID of the process creating the controller instance. Generally this is host:port.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3.
 
 ```
 HelixConnection connection = new ZKHelixConnection(zkConnectString);
@@ -50,13 +49,13 @@ HelixController controller = connection.createController(clusterId, controllerId
 controller.startAsync();
 ```
 The snippet above shows how the controller is started. You can also start the controller using command line interface.
-  
+
 ```
 cd helix/helix-core/target/helix-core-pkg/bin
 ./run-helix-controller.sh --zkSvr <Zookeeper ServerAddress (Required)>  --cluster <Cluster name (Required)>
 ```
 
-### Controller deployment modes
+### Controller Deployment Modes
 
 Helix provides multiple options to deploy the controller.
 
@@ -72,7 +71,7 @@ If setting up a separate controller process is not viable, then it is possible t
 
 #### CONTROLLER AS A SERVICE
 
-One of the cool features we added in Helix is to use a set of controllers to manage a large number of clusters. 
+One of the cool features we added in Helix is to use a set of controllers to manage a large number of clusters.
 
 For example if you have X clusters to be managed, instead of deploying X*3 (3 controllers for fault tolerance) controllers for each cluster, one can deploy just 3 controllers.  Each controller can manage X/3 clusters.  If any controller fails, the remaining two will manage X/2 clusters.
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_health.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_health.md b/site-releases/trunk/src/site/markdown/tutorial_health.md
index e1a7f3c..03b1dcc 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_health.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_health.md
@@ -21,15 +21,15 @@ under the License.
   <title>Tutorial - Customizing Heath Checks</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Customizing Health Checks
+## [Helix Tutorial](./Tutorial.html): Customizing Health Checks
 
-In this chapter, we\'ll learn how to customize the health check, based on metrics of your distributed system.  
+In this chapter, we\'ll learn how to customize health checks based on metrics of your distributed system.
 
 ### Health Checks
 
 Note: _this is currently in development mode, not yet ready for production._
 
-Helix provides the ability for each node in the system to report health metrics on a periodic basis. 
+Helix provides the ability for each node in the system to report health metrics on a periodic basis.
 
 Helix supports multiple ways to aggregate these metrics:
 
@@ -40,7 +40,7 @@ Helix supports multiple ways to aggregate these metrics:
 
 Helix persists the aggregated value only.
 
-Applications can define a threshold on the aggregate values according to the SLAs, and when the SLA is violated Helix will fire an alert. 
+Applications can define a threshold on the aggregate values according to the SLAs, and when the SLA is violated Helix will fire an alert.
 Currently Helix only fires an alert, but in a future release we plan to use these metrics to either mark the node dead or load balance the partitions.
 This feature will be valuable for distributed systems that support multi-tenancy and have a large variation in work load patterns.  In addition, this can be used to detect skewed partitions (hotspots) and rebalance the cluster.
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_messaging.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_messaging.md b/site-releases/trunk/src/site/markdown/tutorial_messaging.md
index 4bdce0e..9ed196d 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_messaging.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_messaging.md
@@ -21,51 +21,50 @@ under the License.
   <title>Tutorial - Messaging</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Messaging
+## [Helix Tutorial](./Tutorial.html): Messaging
 
-In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster.  This is an interesting feature which is quite useful in practice. It is common that nodes in a distributed system require a mechanism to interact with each other.  
+In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster.  This is an interesting feature that is quite useful in practice. It is common that nodes in a distributed system require a mechanism to interact with each other.
 
 ### Example: Bootstrapping a Replica
 
 Consider a search system where an index replica starts up without an index. A typical solution is to get the index from a common location, or to copy the index from another replica.
 
-Helix provides a messaging API for intra-cluster communication between nodes in the system.  Helix provides a mechanism to specify the message recipient in terms of resource, partition, and state rather than specifying hostnames.  Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
-Since Helix is aware of the global state of the system, it can send the message to appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
+Helix provides a messaging API for intra-cluster communication between nodes in the system.  This API provides a mechanism to specify the message recipient in terms of resource, partition, and state rather than specifying hostnames.  Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
+Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
 
 This is a very generic API and can also be used to schedule various periodic tasks in the cluster, such as data backups, log cleanup, etc.
 System admins can also perform ad-hoc tasks, such as on-demand backups or a system command (such as rm -rf ;) across all nodes of the cluster.
 
 ```
-      ClusterMessagingService messagingService = manager.getMessagingService();
-
-      // Construct the Message
-      Message requestBackupUriRequest = new Message(
-          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
-      requestBackupUriRequest
-          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
-      requestBackupUriRequest.setMsgState(MessageState.NEW);
-
-      // Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
-      Criteria recipientCriteria = new Criteria();
-      recipientCriteria.setInstanceName("%");
-      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
-      recipientCriteria.setResource("MyDB");
-      recipientCriteria.setPartition("");
-
-      // Should be processed only by process(es) that are active at the time of sending the message
-      //   This means if the recipient is restarted after message is sent, it will not be processe.
-      recipientCriteria.setSessionSpecific(true);
-
-      // wait for 30 seconds
-      int timeout = 30000;
-
-      // the handler that will be invoked when any recipient responds to the message.
-      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
-
-      // this will return only after all recipients respond or after timeout
-      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
-          requestBackupUriRequest, responseHandler, timeout);
+ClusterMessagingService messagingService = manager.getMessagingService();
+
+// Construct the Message
+Message requestBackupUriRequest = new Message(
+    MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+requestBackupUriRequest
+    .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
+requestBackupUriRequest.setMsgState(MessageState.NEW);
+
+// Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
+Criteria recipientCriteria = new Criteria();
+recipientCriteria.setInstanceName("%");
+recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+recipientCriteria.setResource("MyDB");
+recipientCriteria.setPartition("");
+
+// Should be processed only by process(es) that are active at the time of sending the message
+// This means that if the recipient is restarted after the message is sent, it will not be processed.
+recipientCriteria.setSessionSpecific(true);
+
+// wait for 30 seconds
+int timeout = 30000;
+
+// the handler that will be invoked when any recipient responds to the message.
+BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
+
+// this will return only after all recipients respond or after timeout
+int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
+    requestBackupUriRequest, responseHandler, timeout);
 ```
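+
+On the receiving side, each participant registers a handler factory for the message type. Below is a minimal sketch; the factory class and the BOOTSTRAP_URL result key are hypothetical names used for illustration, while MessageHandlerFactory, MessageHandler, and registerMessageHandlerFactory come from the messaging API:
+
+```
+public class BootstrapUrlHandlerFactory implements MessageHandlerFactory {
+  @Override
+  public MessageHandler createHandler(Message message, NotificationContext context) {
+    return new MessageHandler(message, context) {
+      @Override
+      public HelixTaskResult handleMessage() {
+        HelixTaskResult result = new HelixTaskResult();
+        // hypothetical payload: where the requester can fetch a bootstrap snapshot
+        result.getTaskResultMap().put("BOOTSTRAP_URL", "file:///backup/mydb");
+        result.setSuccess(true);
+        return result;
+      }
+
+      @Override
+      public void onError(Exception e, ErrorCode code, ErrorType type) {
+        // log the error; Helix marks the message handling as failed
+      }
+    };
+  }
+
+  @Override
+  public String getMessageType() {
+    return MessageType.USER_DEFINE_MSG.toString();
+  }
+
+  @Override
+  public void reset() {
+  }
+}
+
+// register the factory on each participant before it connects
+manager.getMessagingService().registerMessageHandlerFactory(
+    MessageType.USER_DEFINE_MSG.toString(), new BootstrapUrlHandlerFactory());
+```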
 
-See HelixManager.DefaultMessagingService in [Javadocs](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more info.
-
+See HelixManager.DefaultMessagingService in the [Javadocs](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more information.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_participant.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_participant.md b/site-releases/trunk/src/site/markdown/tutorial_participant.md
index da55cbd..fcda1ec 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_participant.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_participant.md
@@ -21,28 +21,28 @@ under the License.
   <title>Tutorial - Participant</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Participant
+## [Helix Tutorial](./Tutorial.html): Participant
 
-In this chapter, we\'ll learn how to implement a Participant, which is a primary functional component of a distributed system.
+In this chapter, we\'ll learn how to implement a __Participant__, which is a primary functional component of a distributed system.
 
 
-### Start the Helix Agent
+### Start the Helix Participant
 
-The Helix agent is a common component that connects each system component with the controller.
+The Helix participant class is a common component that connects each participant with the controller.
 
 It requires the following parameters:
- 
+
 * clusterId: A logical ID to represent the group of nodes
 * participantId: A logical ID of the process creating the manager instance. Generally this is host:port.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+* zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3.
 
-After the Helix participant instance is created, only thing that needs to be registered is the state model factory. 
+After the Helix participant instance is created, the only thing that needs to be registered is the state model factory.
 The methods of the state model will be called when the controller sends transitions to the Participant.  In this example, we'll use the OnlineOffline factory.  Other options include:
 
 * MasterSlaveStateModelFactory
 * LeaderStandbyStateModelFactory
 * BootstrapHandler
-* _An application defined state model factory_
+* _An application-defined state model factory_
 
 
 ```
@@ -50,8 +50,8 @@ HelixConnection connection = new ZKHelixConnection(zkConnectString);
 HelixParticipant participant = connection.createParticipant(clusterId, participantId);
 StateMachineEngine stateMach = participant.getStateMachineEngine();
 
-// create a stateModelFactory that returns a statemodel object for each partition. 
-StateModelFactory<StateModel> stateModelFactory = new OnlineOfflineStateModelFactory();     
+// Create a state model factory that returns a state model object for each partition.
+HelixStateModelFactory<OnlineOfflineStateModel> stateModelFactory = new OnlineOfflineStateModelFactory();
 stateMach.registerStateModelFactory(stateModelType, stateModelFactory);
 participant.startAsync();
 ```
@@ -59,38 +59,37 @@ participant.startAsync();
 Helix doesn\'t know what it means to change from OFFLINE\-\-\>ONLINE or ONLINE\-\-\>OFFLINE.  The following code snippet shows where you insert your system logic for these two state transitions.
 
 ```
-public class OnlineOfflineStateModelFactory extends StateModelFactory<StateModel> {
+public class OnlineOfflineStateModelFactory extends HelixStateModelFactory<OnlineOfflineStateModel> {
   @Override
-  public StateModel createNewStateModel(String stateUnitKey) {
+  public OnlineOfflineStateModel createNewStateModel(PartitionId partitionId) {
     OnlineOfflineStateModel stateModel = new OnlineOfflineStateModel();
     return stateModel;
   }
-  @StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFINE")
-  public static class OnlineOfflineStateModel extends StateModel {
-
-    @Transition(from = "OFFLINE", to = "ONLINE")
-    public void onBecomeOnlineFromOffline(Message message,
-        NotificationContext context) {
+}
 
-      System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
+@StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFLINE")
+public class OnlineOfflineStateModel extends StateModel {
+  @Transition(from = "OFFLINE", to = "ONLINE")
+  public void onBecomeOnlineFromOffline(Message message,
+      NotificationContext context) {
 
-      ////////////////////////////////////////////////////////////////////////////////////////////////
-      // Application logic to handle transition                                                     //
-      // For example, you might start a service, run initialization, etc                            //
-      ////////////////////////////////////////////////////////////////////////////////////////////////
-    }
+    System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
 
-    @Transition(from = "ONLINE", to = "OFFLINE")
-    public void onBecomeOfflineFromOnline(Message message,
-        NotificationContext context) {
+    ////////////////////////////////////////////////////////////////////////////////////////////////
+    // Application logic to handle transition                                                     //
+    // For example, you might start a service, run initialization, etc                            //
+    ////////////////////////////////////////////////////////////////////////////////////////////////
+  }
 
-      System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
+  @Transition(from = "ONLINE", to = "OFFLINE")
+  public void onBecomeOfflineFromOnline(Message message,
+      NotificationContext context) {
+    System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
 
-      ////////////////////////////////////////////////////////////////////////////////////////////////
-      // Application logic to handle transition                                                     //
-      // For example, you might shutdown a service, log this event, or change monitoring settings   //
-      ////////////////////////////////////////////////////////////////////////////////////////////////
-    }
+    ////////////////////////////////////////////////////////////////////////////////////////////////
+    // Application logic to handle transition                                                     //
+    // For example, you might shutdown a service, log this event, or change monitoring settings   //
+    ////////////////////////////////////////////////////////////////////////////////////////////////
   }
 }
 ```

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_propstore.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_propstore.md b/site-releases/trunk/src/site/markdown/tutorial_propstore.md
index ec0d71b..9fd1f9c 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_propstore.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_propstore.md
@@ -21,14 +21,14 @@ under the License.
   <title>Tutorial - Application Property Store</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Application Property Store
+## [Helix Tutorial](./Tutorial.html): Application Property Store
 
 In this chapter, we\'ll learn how to use the application property store.
 
 ### Property Store
 
-It is common that an application needs support for distributed, shared data structures.  Helix uses Zookeeper to store the application data and hence provides notifications when the data changes.
+It is common that an application needs support for distributed, shared data structures.  Helix uses ZooKeeper to store the application data and hence provides notifications when the data changes.
 
-While you could use Zookeeper directly, Helix supports caching the data and a write-through cache. This is far more efficient than reading from Zookeeper for every access.
+While you could use ZooKeeper directly, Helix supports caching the data with a write-through cache. This is far more efficient than reading from ZooKeeper for every access.
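+
+A minimal sketch of reading and writing through the property store; the path, record id, and field names below are examples:
+
+```
+HelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
+
+// write-through: the update goes to ZooKeeper and to the local cache
+ZNRecord appConfig = new ZNRecord("appConfig");
+appConfig.setSimpleField("timeoutMs", "30000");
+store.set("/config/app", appConfig, AccessOption.PERSISTENT);
+
+// subsequent reads can be served from the cache
+ZNRecord fetched = store.get("/config/app", null, AccessOption.PERSISTENT);
+```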
 
 See [HelixManager.getHelixPropertyStore](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/store/package-summary.html) for details.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_rebalance.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_rebalance.md b/site-releases/trunk/src/site/markdown/tutorial_rebalance.md
index a664c7a..8599542 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_rebalance.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_rebalance.md
@@ -21,7 +21,7 @@ under the License.
   <title>Tutorial - Rebalancing Algorithms</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Rebalancing Algorithms
+## [Helix Tutorial](./Tutorial.html): Rebalancing Algorithms
 
 The placement of partitions in a distributed system is essential for the reliability and scalability of the system.  For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can satisfy this guarantee.  Helix provides a variant of consistent hashing based on the RUSH algorithm, among others.
 
@@ -55,7 +55,7 @@ Helix has four options for rebalancing, in increasing order of customization by
 
 ### FULL_AUTO
 
-When the rebalance mode is set to FULL_AUTO, Helix controls both the location of the replica along with the state. This option is useful for applications where creation of a replica is not expensive. 
+When the rebalance mode is set to FULL_AUTO, Helix controls both the location and the state of each replica. This option is useful for applications where creating a replica is not expensive.
 
 For example, consider this system that uses a MasterSlave state model, with 3 partitions and 2 replicas in the ideal state.
 
@@ -105,10 +105,10 @@ If there are 3 nodes in the cluster, then Helix will balance the masters and sla
 }
 ```
 
-Another typical example is evenly distributing a group of tasks among the currently healthy processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node. 
-When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes, resulting in a balanced 20 tasks per node. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node, resulting in a balanced distribution of 12 tasks per node.. 
+Another typical example is evenly distributing a group of tasks among the currently healthy processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node.
+When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes, resulting in a balanced 20 tasks per node. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node, resulting in a balanced distribution of 12 tasks per node.
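+
+As an illustration, a resource can be put in FULL_AUTO mode through HelixAdmin; the ZooKeeper address, cluster, and resource names below are examples:
+
+```
+HelixAdmin admin = new ZKHelixAdmin("zk1:2181,zk2:2181,zk3:2181");
+
+// let Helix decide both placement and state: 6 partitions, MasterSlave state model
+admin.addResource("MYCLUSTER", "MyResource", 6, "MasterSlave",
+    RebalanceMode.FULL_AUTO.toString());
+
+// 2 replicas per partition; Helix spreads them across the live participants
+admin.rebalance("MYCLUSTER", "MyResource", 2);
+```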
 
-#### SEMI_AUTO
+### SEMI_AUTO
 
 When the application needs to control the placement of the replicas, use the SEMI_AUTO rebalance mode.
 
@@ -135,12 +135,12 @@ Example: In the ideal state below, the partition \'MyResource_0\' is constrained
 
 The MasterSlave state model requires that a partition has exactly one MASTER at all times, and the other replicas should be SLAVEs.  In this simple example with 2 replicas per partition, there would be one MASTER and one SLAVE.  Upon failover, a SLAVE has to assume mastership, and a new SLAVE will be generated.
 
-In this mode when node1 fails, unlike in FULL_AUTO mode the partition is _not_ moved from node1 to node3. Instead, Helix will decide to change the state of MyResource_0 on node2 from SLAVE to MASTER, based on the system constraints. 
+In this mode when node1 fails, unlike in FULL_AUTO mode the partition is _not_ moved from node1 to node3. Instead, Helix will decide to change the state of MyResource_0 on node2 from SLAVE to MASTER, based on the system constraints.
 
-#### CUSTOMIZED
+### CUSTOMIZED
 
-Helix offers a third mode called CUSTOMIZED, in which the application controls the placement _and_ state of each replica. The application needs to implement a callback interface that Helix invokes when the cluster state changes. 
-Within this callback, the application can recompute the IdealState. Helix will then issue appropriate transitions such that _IdealState_ and _CurrentState_ converges.
+Helix offers a third mode called CUSTOMIZED, in which the application controls the placement _and_ state of each replica. The application needs to implement a callback interface that Helix invokes when the cluster state changes.
+Within this callback, the application can recompute the ideal state. Helix will then issue appropriate transitions such that the _IdealState_ and _CurrentState_ converge.
 
 Here\'s an example, again with 3 partitions, 2 replicas per partition, and the MasterSlave state model:
 
@@ -170,12 +170,12 @@ Here\'s an example, again with 3 partitions, 2 replicas per partition, and the M
 }
 ```
 
-Suppose the current state of the system is 'MyResource_0' -> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE,N2:MASTER}. While the application decides which node is MASTER and which is SLAVE, Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters, which violates the MasterSlave constraint that there is exactly one MASTER at a time.  Helix will first issue MASTER-->SLAVE to N1 and after it is completed, it will issue SLAVE-->MASTER to N2. 
+Suppose the current state of the system is 'MyResource_0' -> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE,N2:MASTER}. While the application decides which node is MASTER and which is SLAVE, Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters, which violates the MasterSlave constraint that there is exactly one MASTER at a time.  Helix will first issue MASTER-->SLAVE to N1 and after it is completed, it will issue SLAVE-->MASTER to N2.
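+
+A custom mapping can be written back by editing the ideal state directly; a minimal sketch, assuming a HelixAdmin handle (admin), an example cluster name, and the node and partition names from the example above:
+
+```
+IdealState idealState = admin.getResourceIdealState("MYCLUSTER", "MyResource");
+
+// the application decides both the placement and the state of each replica
+Map<String, String> assignment = new HashMap<String, String>();
+assignment.put("N1", "SLAVE");
+assignment.put("N2", "MASTER");
+idealState.getRecord().setMapField("MyResource_0", assignment);
+
+// Helix computes and issues the transitions needed to converge to this mapping
+admin.setResourceIdealState("MYCLUSTER", "MyResource", idealState);
+```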
 
-#### USER_DEFINED
+### USER_DEFINED
 
 For maximum flexibility, Helix exposes an interface that can allow applications to plug in custom rebalancing logic. By providing the name of a class that implements the Rebalancer interface, Helix will automatically call the contained method whenever there is a change to the live participants in the cluster. For more, see [User-Defined Rebalancer](./tutorial_user_def_rebalancer.html).
 
-#### Backwards Compatibility
+### Backwards Compatibility
 
 In previous versions, FULL_AUTO was called AUTO_REBALANCE and SEMI_AUTO was called AUTO. Furthermore, they were presented as the IDEAL_STATE_MODE. Helix supports both IDEAL_STATE_MODE and REBALANCE_MODE, but IDEAL_STATE_MODE is now deprecated and may be phased out in future versions.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_spectator.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_spectator.md b/site-releases/trunk/src/site/markdown/tutorial_spectator.md
index 24c1cf4..e43cd6b 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_spectator.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_spectator.md
@@ -21,46 +21,46 @@ under the License.
   <title>Tutorial - Spectator</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Spectator
+## [Helix Tutorial](./Tutorial.html): Spectator
 
-Next, we\'ll learn how to implement a Spectator.  Typically, a spectator needs to react to changes within the distributed system.  Examples: a client that needs to know where to send a request, a topic consumer in a consumer group.  The spectator is automatically informed of changes in the _external state_ of the cluster, but it does not have to add any code to keep track of other components in the system.
+Next, we\'ll learn how to implement a __spectator__.  Typically, a spectator needs to react to changes within the distributed system.  Examples: a client that needs to know where to send a request, a topic consumer in a consumer group.  The spectator is automatically informed of changes in the _external state_ of the cluster, but it does not have to add any code to keep track of other components in the system.
 
-### Start the Helix agent
+### Start a Connection
 
-Same as for a Participant, The Helix agent is the common component that connects each system component with the controller.
+As with a participant, the Helix manager is the common component that connects each system component with the cluster.
 
 It requires the following parameters:
 
 * clusterName: A logical name to represent the group of nodes
-* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port
 * instanceType: Type of the process. This can be one of the following types, in this case, use SPECTATOR:
-    * CONTROLLER: Process that controls the cluster, any number of controllers can be started but only one will be active at any given time.
-    * PARTICIPANT: Process that performs the actual task in the distributed system.
-    * SPECTATOR: Process that observes the changes in the cluster.
-    * ADMIN: To carry out system admin actions.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3.
+    * CONTROLLER: Process that controls the cluster, any number of controllers can be started but only one will be active at any given time
+    * PARTICIPANT: Process that performs the actual task in the distributed system
+    * SPECTATOR: Process that observes the changes in the cluster
+    * ADMIN: To carry out system admin actions
+* zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3
 
-After the Helix manager instance is created, only thing that needs to be registered is the listener.  When the ExternalView changes, the listener is notified.
-
-### Spectator Code
+After the Helix manager instance is created, the only thing that needs to be registered is the listener.  When the ExternalView changes, the listener is notified.
 
 A spectator observes the cluster and is notified when the state of the system changes. Helix consolidates the state of the entire cluster in one ZNode called the ExternalView.
 Helix provides a default implementation, RoutingTableProvider, that caches the cluster state and updates it when there is a change in the cluster.
 
 ```
 manager = HelixManagerFactory.getZKHelixManager(clusterName,
-                                                          instanceName,
-                                                          InstanceType.PARTICIPANT,
-                                                          zkConnectString);
+                                                instanceName,
+                                                InstanceType.SPECTATOR,
+                                                zkConnectString);
 manager.connect();
 RoutingTableProvider routingTableProvider = new RoutingTableProvider();
 manager.addExternalViewChangeListener(routingTableProvider);
 ```
 
+### Spectator Code
+
 In the following code snippet, the application sends the request to a valid instance by interrogating the external view.  Suppose the desired resource for this request is in the partition myDB_1.
 
 ```
-## instances = routingTableProvider.getInstances(, "PARTITION_NAME", "PARTITION_STATE");
+// instances = routingTableProvider.getInstances("RESOURCE_NAME", "PARTITION_NAME", "PARTITION_STATE");
 instances = routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
 
 ////////////////////////////////////////////////////////////////////////////////////////////////
@@ -72,5 +72,4 @@ result = theInstance.sendRequest(yourApplicationRequest, responseObject);
 
 ```
 
-When the external view changes, the application needs to react by sending requests to a different instance.  
-
+When the external view changes, the application needs to react by sending requests to a different instance.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_state.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_state.md b/site-releases/trunk/src/site/markdown/tutorial_state.md
index 4f7b1b5..9fd0f9f 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_state.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_state.md
@@ -21,31 +21,31 @@ under the License.
   <title>Tutorial - State Machine Configuration</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): State Machine Configuration
+## [Helix Tutorial](./Tutorial.html): State Machine Configuration
 
 In this chapter, we\'ll learn about the state models provided by Helix, and how to create your own custom state model.
 
-## State Models
+### State Models
 
-Helix comes with 3 default state models that are commonly used.  It is possible to have multiple state models in a cluster. 
+Helix comes with 3 default state models that are commonly used.  It is possible to have multiple state models in a cluster.
 Every resource that is added should be configured to use a state model that governs its _ideal state_.
 
-### MASTER-SLAVE
+#### MASTER-SLAVE
 
 * 3 states: OFFLINE, SLAVE, MASTER
 * Maximum number of masters: 1
 * The number of slaves is based on the replication factor, which can be specified while adding the resource.
 
 
-### ONLINE-OFFLINE
+#### ONLINE-OFFLINE
 
 * Has 2 states: OFFLINE and ONLINE.  This simple state model is a good starting point for most applications.
 
-### LEADER-STANDBY
+#### LEADER-STANDBY
 
 * 1 Leader and multiple stand-bys.  The idea is that exactly one leader accomplishes a designated task, while the stand-bys are ready to take over if the leader fails.
 
-## Constraints
+### Constraints
 
 In addition to the state machine configuration, one can specify the constraints of states and transitions.
 
@@ -54,38 +54,40 @@ For example, one can say:
 * MASTER:1
 <br/>Maximum number of replicas in MASTER state at any time is 1
 
-* OFFLINE-SLAVE:5 
+* OFFLINE-SLAVE:5
 <br/>Maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5 in this example.
 
-### Dynamic State Constraints
+#### Dynamic State Constraints
 
 We also support two dynamic upper bounds for the number of replicas in each state:
 
 * N: The number of replicas in the state is at most the number of live participants in the cluster
 * R: The number of replicas in the state is at most the specified replica count for the partition
 
-### State Priority
+#### State Priority
 
 Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 MASTER and 2 SLAVES, but only 1 node is active, Helix must promote it to MASTER. This behavior is achieved by providing the state priority list as \[MASTER, SLAVE\].
 
-### State Transition Priority
+#### State Transition Priority
 
 Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. You can control this by overriding the priority order.
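+
+States, transitions, constraints, and priorities all come together when a state model is defined programmatically. Below is a minimal sketch using the StateModelDefinition.Builder API; the priority values are illustrative:
+
+```
+StateModelDefinition.Builder builder = new StateModelDefinition.Builder("MasterSlave");
+
+// states in priority order (MASTER first, so Helix fills it greedily)
+builder.addState("MASTER", 0);
+builder.addState("SLAVE", 1);
+builder.addState("OFFLINE", 2);
+builder.addState("DROPPED", 3);
+builder.initialState("OFFLINE");
+
+// static and dynamic upper bounds on the number of replicas per state
+builder.upperBound("MASTER", 1);          // at most one MASTER per partition
+builder.dynamicUpperBound("SLAVE", "R");  // at most the partition's replica count
+
+// legal transitions in priority order
+builder.addTransition("OFFLINE", "SLAVE", 0);
+builder.addTransition("SLAVE", "MASTER", 1);
+builder.addTransition("MASTER", "SLAVE", 2);
+builder.addTransition("SLAVE", "OFFLINE", 3);
+builder.addTransition("OFFLINE", "DROPPED", 4);
+
+StateModelDefinition masterSlave = builder.build();
+```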
 
-## Special States
+### Special States
 
-### DROPPED
+There are a few Helix-defined states that are important to be aware of.
+
+#### DROPPED
 
 The DROPPED state is used to signify a replica that was served by a given participant, but is no longer served. This allows Helix and its participants to effectively clean up. There are two requirements that every new state model should follow with respect to the DROPPED state:
 
 * The DROPPED state must be defined
 * There must be a path to DROPPED for every state in the model
 
-### ERROR
+#### ERROR
 
 The ERROR state is used whenever the participant serving a partition encounters an error and cannot continue to serve the partition. HelixAdmin has \"reset\" functionality to allow participants to recover from the ERROR state.
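+
+For example, a partition stuck in the ERROR state on a participant can be reset through HelixAdmin; the cluster, instance, resource, and partition names below are examples:
+
+```
+HelixAdmin admin = new ZKHelixAdmin("zk1:2181,zk2:2181,zk3:2181");
+admin.resetPartition("MYCLUSTER", "localhost_12913", "MyResource",
+    Arrays.asList("MyResource_0"));
+```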
 
-## Annotated Example
+### Annotated Example
 
 Below is a complete definition of a Master-Slave state model. Notice the fields marked REQUIRED; these are essential for any state model definition.
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/4a4510d1/site-releases/trunk/src/site/markdown/tutorial_throttling.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_throttling.md b/site-releases/trunk/src/site/markdown/tutorial_throttling.md
index 7417979..16a6f81 100644
--- a/site-releases/trunk/src/site/markdown/tutorial_throttling.md
+++ b/site-releases/trunk/src/site/markdown/tutorial_throttling.md
@@ -21,13 +21,13 @@ under the License.
   <title>Tutorial - Throttling</title>
 </head>
 
-# [Helix Tutorial](./Tutorial.html): Throttling
+## [Helix Tutorial](./Tutorial.html): Throttling
 
-In this chapter, we\'ll learn how to control the parallel execution of cluster tasks.  Only a centralized cluster manager with global knowledge is capable of coordinating this decision.
+In this chapter, we\'ll learn how to control the parallel execution of cluster tasks.  Only a centralized cluster manager with global knowledge (i.e. Helix) is capable of coordinating this decision.
 
 ### Throttling
 
-Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some of the transitions may be light weight, but some might involve moving data, which is quite expensive from a network and IOPS perspective.
+Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some of the transitions may be lightweight, but some might involve moving data, which is quite expensive from a network and IOPS perspective.
 
 Helix allows applications to set a threshold on transitions. The threshold can be set at multiple scopes:
 
@@ -36,3 +36,4 @@ Helix allows applications to set a threshold on transitions. The threshold can b
 * Resource, e.g., a database
 * Node, i.e., per-node maximum transitions in parallel
 
+