Posted to commits@helix.apache.org by jx...@apache.org on 2018/07/24 06:01:36 UTC

[6/8] helix git commit: Add release note for 0.8.2

http://git-wip-us.apache.org/repos/asf/helix/blob/4a121ef2/website/0.8.2/src/site/markdown/tutorial_controller.md
----------------------------------------------------------------------
diff --git a/website/0.8.2/src/site/markdown/tutorial_controller.md b/website/0.8.2/src/site/markdown/tutorial_controller.md
new file mode 100644
index 0000000..d3c5526
--- /dev/null
+++ b/website/0.8.2/src/site/markdown/tutorial_controller.md
@@ -0,0 +1,153 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Controller</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Controller
+
+Next, let\'s implement the controller.  This is the brain of the cluster.  Helix makes sure there is exactly one active controller running the cluster.
+
+### Start a Connection
+
+The Helix manager requires the following parameters:
+
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port
+* instanceType: Type of the process. This can be one of the following types; in this case, use CONTROLLER:
+    * CONTROLLER: Process that controls the cluster; any number of controllers can be started, but only one will be active at any given time
+    * PARTICIPANT: Process that performs the actual task in the distributed system
+    * SPECTATOR: Process that observes the changes in the cluster
+    * ADMIN: To carry out system admin actions
+* zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                instanceType,
+                                                zkConnectString);
+```
+
+### Controller Code
+
+The Controller needs to know about all changes in the cluster. Helix takes care of this with the default implementation.
+If you need additional functionality, see GenericHelixController for how to configure the pipeline. The snippet below connects the manager and registers a GenericHelixController for the relevant cluster changes.
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.CONTROLLER,
+                                                zkConnectString);
+manager.connect();
+GenericHelixController controller = new GenericHelixController();
+manager.addConfigChangeListener(controller);
+manager.addLiveInstanceChangeListener(controller);
+manager.addIdealStateChangeListener(controller);
+manager.addExternalViewChangeListener(controller);
+manager.addControllerListener(controller);
+```
+The snippet above shows how the controller is started. You can also start the controller using the command line interface:
+
+```
+cd helix/helix-core/target/helix-core-pkg/bin
+./run-helix-controller.sh --zkSvr <Zookeeper ServerAddress (Required)>  --cluster <Cluster name (Required)>
+```
+
+### Controller Deployment Modes
+
+Helix provides multiple options to deploy the controller.
+
+#### STANDALONE
+
+The Controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since one controller can be a single point of failure, multiple controller processes are required for reliability.  Even if multiple controllers are running, only one will be actively managing the cluster at any time and is decided by a leader-election process. If the leader fails, another leader will take over managing the cluster.
+
+Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the Controller as a Service option.
+
+#### EMBEDDED
+
+If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants.
+
+#### CONTROLLER AS A SERVICE
+
+One of the cool features we added in Helix is the ability to use a set of controllers to manage a large number of clusters.
+
+For example, if you have X clusters to be managed, instead of deploying 3 controllers per cluster (X*3 in total, for fault tolerance), you can deploy just 3 controllers.  Each controller can manage X/3 clusters.  If any controller fails, the remaining two will manage X/2 clusters each.
+

http://git-wip-us.apache.org/repos/asf/helix/blob/4a121ef2/website/0.8.2/src/site/markdown/tutorial_health.md
----------------------------------------------------------------------
diff --git a/website/0.8.2/src/site/markdown/tutorial_health.md b/website/0.8.2/src/site/markdown/tutorial_health.md
new file mode 100644
index 0000000..03b1dcc
--- /dev/null
+++ b/website/0.8.2/src/site/markdown/tutorial_health.md
@@ -0,0 +1,46 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Customizing Health Checks</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Customizing Health Checks
+
+In this chapter, we\'ll learn how to customize health checks based on metrics of your distributed system.
+
+### Health Checks
+
+Note: _this feature is currently in development and not yet ready for production._
+
+Helix provides the ability for each node in the system to report health metrics on a periodic basis.
+
+Helix supports multiple ways to aggregate these metrics:
+
+* SUM
+* AVG
+* EXPONENTIAL DECAY
+* WINDOW
+
+Helix persists the aggregated value only.
+
+Applications can define a threshold on the aggregate values according to the SLAs, and when the SLA is violated Helix will fire an alert.
+Currently Helix only fires an alert, but in a future release we plan to use these metrics to either mark the node dead or load balance the partitions.
+This feature will be valuable for distributed systems that support multi-tenancy and have a large variation in workload patterns.  In addition, this can be used to detect skewed partitions (hotspots) and rebalance the cluster.
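+
+The participant-side API for reporting custom metrics is still evolving along with this feature. As a rough sketch only (the provider base class and the registration call are assumptions based on the org.apache.helix.healthcheck package; verify the exact names against the 0.8.2 Javadocs), a custom report provider might look like this:
+
+```
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.helix.healthcheck.HealthReportProvider;
+
+// Hypothetical provider that reports a single latency metric for this node
+public class QueryLatencyReportProvider extends HealthReportProvider {
+  private volatile long p95LatencyMs = 0;
+
+  // Application code feeds its own metric here; aggregation is left out of the sketch
+  public void record(long latencyMs) {
+    p95LatencyMs = latencyMs;
+  }
+
+  @Override
+  public Map<String, String> getRecentHealthReport() {
+    Map<String, String> report = new HashMap<String, String>();
+    report.put("p95LatencyMs", String.valueOf(p95LatencyMs));
+    return report;
+  }
+
+  @Override
+  public void resetStats() {
+    p95LatencyMs = 0;
+  }
+}
+```
+
+A participant would then register the provider with its health report collector, for example (again, an assumed call): `manager.getHealthReportCollector().addHealthReportProvider(new QueryLatencyReportProvider());`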
+

http://git-wip-us.apache.org/repos/asf/helix/blob/4a121ef2/website/0.8.2/src/site/markdown/tutorial_messaging.md
----------------------------------------------------------------------
diff --git a/website/0.8.2/src/site/markdown/tutorial_messaging.md b/website/0.8.2/src/site/markdown/tutorial_messaging.md
new file mode 100644
index 0000000..4c1eca7
--- /dev/null
+++ b/website/0.8.2/src/site/markdown/tutorial_messaging.md
@@ -0,0 +1,70 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Messaging</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Messaging
+
+In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster.  This is an interesting feature that is quite useful in practice. It is common that nodes in a distributed system require a mechanism to interact with each other.
+
+### Example: Bootstrapping a Replica
+
+Consider a search system where an index replica starts up without an index. A typical solution is to get the index from a common location, or to copy the index from another replica.
+
+Helix provides a messaging API for intra-cluster communication between nodes in the system.  This API provides a mechanism to specify the message recipient in terms of resource, partition, and state rather than specifying hostnames.  Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
+Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
+
+This is a very generic API and can also be used to schedule various periodic tasks in the cluster, such as data backups, log cleanup, etc.
+System administrators can also perform ad-hoc tasks, such as on-demand backups or running a system command (such as rm -rf ;) across all nodes of the cluster.
+
+```
+ClusterMessagingService messagingService = manager.getMessagingService();
+
+// Construct the Message
+Message requestBackupUriRequest = new Message(
+    MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+requestBackupUriRequest
+    .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
+requestBackupUriRequest.setMsgState(MessageState.NEW);
+
+// Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
+Criteria recipientCriteria = new Criteria();
+recipientCriteria.setInstanceName("%");
+recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+recipientCriteria.setResource("MyDB");
+recipientCriteria.setPartition("");
+
+// Should be processed only by process(es) that are active at the time of sending the message
+// This means that if the recipient is restarted after the message is sent, the message will not be processed.
+recipientCriteria.setSessionSpecific(true);
+
+// wait for 30 seconds
+int timeout = 30000;
+
+// the handler that will be invoked when any recipient responds to the message.
+BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
+
+// this will return only after all recipients respond or after timeout
+int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
+    requestBackupUriRequest, responseHandler, timeout);
+```
+
+See HelixManager.DefaultMessagingService in the [Javadocs](http://helix.apache.org/javadocs/0.8.2/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more information.

http://git-wip-us.apache.org/repos/asf/helix/blob/4a121ef2/website/0.8.2/src/site/markdown/tutorial_participant.md
----------------------------------------------------------------------
diff --git a/website/0.8.2/src/site/markdown/tutorial_participant.md b/website/0.8.2/src/site/markdown/tutorial_participant.md
new file mode 100644
index 0000000..cb38e45
--- /dev/null
+++ b/website/0.8.2/src/site/markdown/tutorial_participant.md
@@ -0,0 +1,102 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Participant</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Participant
+
+In this chapter, we\'ll learn how to implement a __Participant__, which is a primary functional component of a distributed system.
+
+
+### Start a Connection
+
+The Helix manager is a common component that connects each system component with the controller.
+
+It requires the following parameters:
+
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port
+* instanceType: Type of the process. This can be one of the following types; in this case, use PARTICIPANT
+    * CONTROLLER: Process that controls the cluster; any number of controllers can be started, but only one will be active at any given time
+    * PARTICIPANT: Process that performs the actual task in the distributed system
+    * SPECTATOR: Process that observes the changes in the cluster
+    * ADMIN: To carry out system admin actions
+* zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3
+
+After the Helix manager instance is created, the only thing that needs to be registered is the state model factory.
+The methods of the state model will be called when the controller sends transitions to the participant.  In this example, we'll use the OnlineOffline factory.  Other options include:
+
+* MasterSlaveStateModelFactory
+* LeaderStandbyStateModelFactory
+* BootstrapHandler
+
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.PARTICIPANT,
+                                                zkConnectString);
+StateMachineEngine stateMach = manager.getStateMachineEngine();
+
+//create a stateModelFactory that returns a statemodel object for each partition.
+stateModelFactory = new OnlineOfflineStateModelFactory();
+stateMach.registerStateModelFactory(stateModelType, stateModelFactory);
+manager.connect();
+```
+
+### Example State Model Factory
+
+Helix doesn\'t know what it means to change from OFFLINE\-\-\>ONLINE or ONLINE\-\-\>OFFLINE.  The following code snippet shows where you insert your system logic for these two state transitions.
+
+```
+public class OnlineOfflineStateModelFactory extends
+    StateModelFactory<StateModel> {
+  @Override
+  public StateModel createNewStateModel(String stateUnitKey) {
+    OnlineOfflineStateModel stateModel = new OnlineOfflineStateModel();
+    return stateModel;
+  }
+  @StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFLINE")
+  public static class OnlineOfflineStateModel extends StateModel {
+    @Transition(from = "OFFLINE", to = "ONLINE")
+    public void onBecomeOnlineFromOffline(Message message,
+        NotificationContext context) {
+      System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
+
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+      // Application logic to handle transition                                                     //
+      // For example, you might start a service, run initialization, etc                            //
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+    }
+
+    @Transition(from = "ONLINE", to = "OFFLINE")
+    public void onBecomeOfflineFromOnline(Message message,
+        NotificationContext context) {
+      System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
+
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+      // Application logic to handle transition                                                     //
+      // For example, you might shutdown a service, log this event, or change monitoring settings   //
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+    }
+  }
+}
+```

http://git-wip-us.apache.org/repos/asf/helix/blob/4a121ef2/website/0.8.2/src/site/markdown/tutorial_propstore.md
----------------------------------------------------------------------
diff --git a/website/0.8.2/src/site/markdown/tutorial_propstore.md b/website/0.8.2/src/site/markdown/tutorial_propstore.md
new file mode 100644
index 0000000..6157477
--- /dev/null
+++ b/website/0.8.2/src/site/markdown/tutorial_propstore.md
@@ -0,0 +1,34 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Application Property Store</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Application Property Store
+
+In this chapter, we\'ll learn how to use the application property store.
+
+### Property Store
+
+It is common that an application needs support for distributed, shared data structures.  Helix uses ZooKeeper to store the application data and hence provides notifications when the data changes.
+
+While you could use ZooKeeper directly, Helix supports caching the data with a write-through cache. This is far more efficient than reading from ZooKeeper for every access.
+
+See [HelixManager.getHelixPropertyStore](http://helix.apache.org/javadocs/0.8.2/reference/org/apache/helix/store/package-summary.html) for details.
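+
+As a minimal sketch (assuming a connected HelixManager named `manager` as in the earlier chapters; the path and field names below are made up for illustration), writing and reading application data through the property store looks like this:
+
+```
+import org.apache.helix.AccessOption;
+import org.apache.helix.ZNRecord;
+import org.apache.helix.store.zk.ZkHelixPropertyStore;
+
+ZkHelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
+
+// Write a record under a path relative to the property store root
+ZNRecord record = new ZNRecord("backupInfo");
+record.setSimpleField("lastBackupUri", "hdfs://backups/latest");
+store.set("/backupInfo", record, AccessOption.PERSISTENT);
+
+// Subsequent reads are served from the write-through cache where possible
+ZNRecord read = store.get("/backupInfo", null, AccessOption.PERSISTENT);
+```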

http://git-wip-us.apache.org/repos/asf/helix/blob/4a121ef2/website/0.8.2/src/site/markdown/tutorial_rebalance.md
----------------------------------------------------------------------
diff --git a/website/0.8.2/src/site/markdown/tutorial_rebalance.md b/website/0.8.2/src/site/markdown/tutorial_rebalance.md
new file mode 100644
index 0000000..2e1a79b
--- /dev/null
+++ b/website/0.8.2/src/site/markdown/tutorial_rebalance.md
@@ -0,0 +1,181 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Rebalancing Algorithms</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Rebalancing Algorithms
+
+The placement of partitions in a distributed system is essential for the reliability and scalability of the system.  For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can satisfy this guarantee.  Helix provides a variant of consistent hashing based on the RUSH algorithm, among others.
+
+This means that, given a number of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
+
+* Each node has the same number of partitions
+* Replicas of the same partition do not stay on the same node
+* When a node fails, the partitions will be equally distributed among the remaining nodes
+* When new nodes are added, the number of partitions moved will be minimized along with satisfying the above criteria
+
+Helix employs a rebalancing algorithm to compute the _ideal state_ of the system.  When the _current state_ differs from the _ideal state_, Helix uses the _ideal state_ as the target and computes the appropriate transitions needed to bring the system to it.
+
+Helix makes it easy to perform this operation, while giving you control over the algorithm.  In this section, we\'ll see how to implement the desired behavior.
+
+Helix has four options for rebalancing, in increasing order of customization by the system builder:
+
+* FULL_AUTO
+* SEMI_AUTO
+* CUSTOMIZED
+* USER_DEFINED
+
+```
+            |FULL_AUTO     |  SEMI_AUTO | CUSTOMIZED|  USER_DEFINED  |
+            ---------------------------------------------------------|
+   LOCATION | HELIX        |  APP       |  APP      |      APP       |
+            ---------------------------------------------------------|
+      STATE | HELIX        |  HELIX     |  APP      |      APP       |
+            ----------------------------------------------------------
+```
+
+
+### FULL_AUTO
+
+When the rebalance mode is set to FULL_AUTO, Helix controls both the location and the state of each replica. This option is useful for applications where creation of a replica is not expensive.
+
+For example, consider this system that uses a MasterSlave state model, with 3 partitions and 2 replicas in the ideal state.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "FULL_AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [],
+    "MyResource_1" : [],
+    "MyResource_2" : []
+  },
+  "mapFields" : {
+  }
+}
+```
+
+If there are 3 nodes in the cluster, then Helix will balance the masters and slaves equally.  The ideal state is therefore:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Another typical example is evenly distributing a group of tasks among the currently healthy processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node.
+When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes, resulting in a balanced 20 tasks per node. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node, resulting in a balanced distribution of 12 tasks per node.
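+
+As a short sketch (the cluster name, resource name, and ZooKeeper address are illustrative; it assumes the cluster already exists), a FULL_AUTO resource can be created and rebalanced through HelixAdmin:
+
+```
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.manager.zk.ZKHelixAdmin;
+import org.apache.helix.model.IdealState.RebalanceMode;
+
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+
+// 3 partitions of MyResource; both placement and state are left to Helix
+admin.addResource("myCluster", "MyResource", 3, "MasterSlave",
+    RebalanceMode.FULL_AUTO.toString());
+
+// Compute an ideal state with 2 replicas per partition
+admin.rebalance("myCluster", "MyResource", 2);
+```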
+
+### SEMI_AUTO
+
+When the application needs to control the placement of the replicas, use the SEMI_AUTO rebalance mode.
+
+Example: In the ideal state below, the partition \'MyResource_0\' is constrained to be placed only on node1 or node2.  The choice of _state_ is still controlled by Helix.  That means MyResource_0.MASTER could be on node1 and MyResource_0.SLAVE on node2, or vice-versa but neither would be placed on node3.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [node1, node2],
+    "MyResource_1" : [node2, node3],
+    "MyResource_2" : [node3, node1]
+  },
+  "mapFields" : {
+  }
+}
+```
+
+The MasterSlave state model requires that a partition has exactly one MASTER at all times, and the other replicas should be SLAVEs.  In this simple example with 2 replicas per partition, there would be one MASTER and one SLAVE.  Upon failover, a SLAVE has to assume mastership, and a new SLAVE will be generated.
+
+In this mode, when node1 fails, unlike in FULL_AUTO mode, the partition is _not_ moved from node1 to node3. Instead, Helix will decide to change the state of MyResource_0 on node2 from SLAVE to MASTER, based on the system constraints.
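+
+As a brief sketch (illustrative names; it assumes the resource was created in SEMI_AUTO mode), an application can pin a preference list through the IdealState API and leave the state assignment to Helix:
+
+```
+import java.util.Arrays;
+
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.manager.zk.ZKHelixAdmin;
+import org.apache.helix.model.IdealState;
+
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+
+IdealState idealState = admin.getResourceIdealState("myCluster", "MyResource");
+// Constrain MyResource_0 to node1/node2; Helix still decides which replica is MASTER
+idealState.setPreferenceList("MyResource_0", Arrays.asList("node1", "node2"));
+admin.setResourceIdealState("myCluster", "MyResource", idealState);
+```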
+
+### CUSTOMIZED
+
+Helix offers a third mode called CUSTOMIZED, in which the application controls the placement _and_ state of each replica. The application needs to implement a callback interface that Helix invokes when the cluster state changes.
+Within this callback, the application can recompute the idealstate. Helix will then issue appropriate transitions such that _Idealstate_ and _Currentstate_ converge.
+
+Here\'s an example, again with 3 partitions, 2 replicas per partition, and the MasterSlave state model:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "CUSTOMIZED",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Suppose the current state of the system is 'MyResource_0' \-\> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' \-\> {N1:SLAVE,N2:MASTER}. While the application decides which node is MASTER and which is SLAVE, Helix will not blindly issue MASTER\-\-\>SLAVE to N1 and SLAVE\-\-\>MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters, which violates the MasterSlave constraint that there is exactly one MASTER at a time.  Helix will first issue MASTER\-\-\>SLAVE to N1 and after it is completed, it will issue SLAVE\-\-\>MASTER to N2.
+
+### USER_DEFINED
+
+For maximum flexibility, Helix exposes an interface that allows applications to plug in custom rebalancing logic. By providing the name of a class that implements the Rebalancer interface, Helix will automatically invoke it whenever there is a change to the live participants in the cluster. For more, see [User-Defined Rebalancer](./tutorial_user_def_rebalancer.html).
+
+### Backwards Compatibility
+
+In previous versions, FULL_AUTO was called AUTO_REBALANCE and SEMI_AUTO was called AUTO. Furthermore, they were presented as the IDEAL_STATE_MODE. Helix supports both IDEAL_STATE_MODE and REBALANCE_MODE, but IDEAL_STATE_MODE is now deprecated and may be phased out in future versions.

http://git-wip-us.apache.org/repos/asf/helix/blob/4a121ef2/website/0.8.2/src/site/markdown/tutorial_rest_service.md
----------------------------------------------------------------------
diff --git a/website/0.8.2/src/site/markdown/tutorial_rest_service.md b/website/0.8.2/src/site/markdown/tutorial_rest_service.md
new file mode 100644
index 0000000..cfcaaef
--- /dev/null
+++ b/website/0.8.2/src/site/markdown/tutorial_rest_service.md
@@ -0,0 +1,951 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - REST Service 2.0</title>
+</head>
+
+
+
+## [Helix Tutorial](./Tutorial.html): REST Service 2.0
+
+The new Helix REST service supports the following features:
+
+* Expose all admin operations via a RESTful API.
+    * All Helix admin operations, including those defined in HelixAdmin.java and ConfigAccessor.java, are exposed via the REST API.
+* Support the full Task Framework API via REST. Current task framework operations are supported from the REST API as well.
+* A more standard RESTful API
+    * Use the standard HTTP methods where possible (GET, POST, PUT, DELETE) instead of customized commands.
+    * Customized commands are used only when there is no corresponding HTTP method, for example, rebalancing a resource or disabling an instance.
+* Make the Helix REST service a separately deployable service.
+* Enable access/audit logging for all write accesses.
+
+### Installation
+The command line tool comes with the helix-rest package:
+
+Get the command line tool:
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/helix.git
+cd helix
+git checkout tags/helix-0.8.2
+./build
+cd helix-rest/target/helix-rest-pkg/bin
+chmod +x *.sh
+```
+
+Get help:
+
+```
+./run-rest-admin.sh --help
+```
+
+Start the REST server:
+
+```
+./run-rest-admin.sh --port 1234 --zkSvr localhost:2121
+```
+
+### Helix REST 2.0 Endpoint
+
+Helix REST 2.0 endpoints start with the /admin/v2 prefix, and the rest of the URL mostly follows the existing convention.  This allows the v2.0 endpoints to be supported alongside the current Helix web interface. Some sample v2.0 endpoints look like the following:
+
+```
+curl -X GET http://localhost:12345/admin/v2/clusters
+curl -X POST http://localhost:12345/admin/v2/clusters/myCluster
+curl -X POST http://localhost:12345/admin/v2/clusters/myCluster?command=activate&supercluster=controller_cluster
+curl http://localhost:12345/admin/v2/clusters/myCluster/resources/myResource/IdealState
+```
+### REST Endpoints and Supported Operations
+#### Operations on Helix Cluster
+
+* **"/clusters"**
+    *  Represents all Helix-managed clusters connected to the given ZooKeeper
+    *  **GET** -- List all Helix managed clusters. Example: curl http://localhost:1234/admin/v2/clusters
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters
+    {
+      "clusters" : [ "cluster1", "cluster2", "cluster3"]
+    }
+    ```
+
+
+* **"/clusters/{clusterName}"**
+    * Represents a helix cluster with name {clusterName}
+    * **GET** -- return the cluster info. Example: curl http://localhost:1234/admin/v2/clusters/myCluster
+
+        ```
+        $curl http://localhost:1234/admin/v2/clusters/myCluster
+        {
+          "id" : "myCluster",
+          "paused" : true,
+          "disabled" : true,
+          "controller" : "helix.apache.org:1234",
+          "instances" : [ "aaa.helix.apache.org:1234", "bbb.helix.apache.org:1234" ],
+          "liveInstances" : ["aaa.helix.apache.org:1234"],
+          "resources" : [ "resource1", "resource2", "resource3" ],
+          "stateModelDefs" : [ "MasterSlave", "LeaderStandby", "OnlineOffline" ]
+        }
+        ```
+
+    * **PUT** – create a new cluster with {clusterName}; it returns 200 if the cluster already exists. Example: curl -X PUT http://localhost:1234/admin/v2/clusters/myCluster
+    * **DELETE** – delete this cluster.
+      Example: curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster
+    * **activate** -- Link this cluster to a Helix super (controller) cluster, i.e, add the cluster as a resource to the super cluster.
+      Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster?command=activate&superCluster=myCluster
+    * **expand** -- In the case that a set of new nodes is added to the cluster, use this command to rebalance the resources from the existing instances onto the newly added instances.
+      Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster?command=expand
+    * **enable** – enable/resume the cluster.
+      Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster?command=enable
+    * **disable** – disable/pause the cluster.
+      Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster?command=disable
+
+* **"/clusters/{clusterName}/configs"**
+    * Represents cluster level configs for cluster with {clusterName}
+    * **GET**: get all configs.
+    
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/configs
+    {
+      "id" : "myCluster",
+      "simpleFields" : {
+        "PERSIST_BEST_POSSIBLE_ASSIGNMENT" : "true"
+      },
+      "listFields" : {
+      },
+      "mapFields" : {
+      }
+    }
+    ```
+
+    * **POST**: update or delete one or more config entries.  
+    update -- Update the entries included in the input.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/configs?command=update -d '
+    {
+     "id" : "myCluster",
+      "simpleFields" : {
+        "PERSIST_BEST_POSSIBLE_ASSIGNMENT" : "true"
+      },
+      "listFields" : {
+        "disabledPartition" : ["p1", "p2", "p3"]
+      },
+      "mapFields" : {
+      }
+    }'
+    ```
+  
+      delete -- Remove the entries included in the input from current config.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/configs?command=delete -d '
+    {
+      "id" : "myCluster",
+      "simpleFields" : {
+      },
+      "listFields" : {
+        "disabledPartition" : ["p1", "p3"]
+      },
+      "mapFields" : {
+      }
+    }'
+    ```
+
+* **"/clusters/{clusterName}/controller"**
+    * Represents the controller for cluster {clusterName}.
+    * **GET** – return controller information
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/controller
+    {
+      "id" : "myCluster",
+      "controller" : "test.helix.apache.org:1234",
+      "HELIX_VERSION":"0.8.2",
+      "LIVE_INSTANCE":"16261@test.helix.apache.org:1234",
+      "SESSION_ID":"35ab496aba54c99"
+    }
+    ```
+
+* **"/clusters/{clusterName}/controller/errors"**
+    * Represents error information for the controller of cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** – get all error information.
+    * **DELETE** – clean up all error logs.
+
+
+* **"/clusters/{clusterName}/controller/history"**
+    * Represents the change history of the leader controller of cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** – get the leader controller history.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/controller/history
+    {
+      "id" : "myCluster",
+      "history" [
+          "{DATE=2017-03-21-16:57:14, CONTROLLER=test1.helix.apache.org:1234, TIME=1490115434248}",
+          "{DATE=2017-03-27-22:35:16, CONTROLLER=test3.helix.apache.org:1234, TIME=1490654116484}",
+          "{DATE=2017-03-27-22:35:24, CONTROLLER=test2.helix.apache.org:1234, TIME=1490654124926}"
+      ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/controller/messages"**
+    * Represents all uncompleted messages currently received by the controller of cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** – list all uncompleted messages received by the controller.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/controller/messages
+    {
+      "id" : "myCluster",
+      "count" : 5,
+      "messages" [
+          "0b8df4f2-776c-4325-96e7-8fad07bd9048",
+          "13a8c0af-b77e-4f5c-81a9-24fedb62cf58"
+      ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/controller/messages/{messageId}"**
+    * Represents the message with id {messageId} currently received by the controller of cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get the message with {messageId} received by the controller.
+    * **DELETE** - delete the message with {messageId}
+
+
+* **"/clusters/{clusterName}/statemodeldefs/"**
+    * Represents all the state model definitions defined in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get all the state model definitions in the cluster.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/statemodeldefs
+    {
+      "id" : "myCluster",
+      "stateModelDefs" : [ "MasterSlave", "LeaderStandby", "OnlineOffline" ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/statemodeldefs/{statemodeldef}"**
+    * Represents the state model definition {statemodeldef} defined in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get the state model definition
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/statemodeldefs/MasterSlave
+    {
+      "id" : "MasterSlave",
+      "simpleFields" : {
+        "INITIAL_STATE" : "OFFLINE"
+      },
+      "mapFields" : {
+        "DROPPED.meta" : {
+          "count" : "-1"
+        },
+        "ERROR.meta" : {
+          "count" : "-1"
+        },
+        "ERROR.next" : {
+          "DROPPED" : "DROPPED",
+          "OFFLINE" : "OFFLINE"
+        },
+        "MASTER.meta" : {
+          "count" : "1"
+        },
+        "MASTER.next" : {
+          "SLAVE" : "SLAVE",
+          "DROPPED" : "SLAVE",
+          "OFFLINE" : "SLAVE"
+        },
+        "OFFLINE.meta" : {
+          "count" : "-1"
+        },
+        "OFFLINE.next" : {
+          "SLAVE" : "SLAVE",
+          "MASTER" : "SLAVE",
+          "DROPPED" : "DROPPED"
+        },
+        "SLAVE.meta" : {
+          "count" : "R"
+        },
+        "SLAVE.next" : {
+          "MASTER" : "MASTER",
+          "DROPPED" : "OFFLINE",
+          "OFFLINE" : "OFFLINE"
+        }
+      },
+      "listFields" : {
+        "STATE_PRIORITY_LIST" : [ "MASTER", "SLAVE", "OFFLINE", "DROPPED", "ERROR" ],
+        "STATE_TRANSITION_PRIORITYLIST" : [ "MASTER-SLAVE", "SLAVE-MASTER", "OFFLINE-SLAVE", "SLAVE-OFFLINE", "OFFLINE-DROPPED" ]
+      }
+    }
+    ```
+
+    * **POST** - add a new state model definition with {statemodeldef}
+    * **DELETE** - delete the state model definition
+
+
+#### Helix "Resource" and its sub-resources
+
+* **"/clusters/{clusterName}/resources"**
+    * Represents all resources in a cluster.
+    * **GET** - list all resources with their IdealStates and ExternalViews.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources
+    {
+      "id" : "myCluster",
+      "idealstates" : [ "idealstate1", "idealstate2", "idealstate3" ],
+      "externalviews" : [ "idealstate1", "idealstate3" ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/resources/{resourceName}"**
+    * Represents a resource in cluster {clusterName} with name {resourceName}
+    * **GET** - get resource info
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources/resource1
+    {
+      "id" : "resource1",
+      "resourceConfig" : {},
+      "idealState" : {},
+      "externalView" : {}
+    }
+    ```
+
+    * **PUT** - add a resource with {resourceName}
+
+    ```
+    $curl -X PUT -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource -d '
+    {
+      "id":"myResource",
+      "simpleFields":{
+        "STATE_MODEL_FACTORY_NAME":"DEFAULT"
+        ,"EXTERNAL_VIEW_DISABLED":"true"
+        ,"NUM_PARTITIONS":"1"
+        ,"REBALANCE_MODE":"TASK"
+        ,"REPLICAS":"1"
+        ,"IDEAL_STATE_MODE":"AUTO"
+        ,"STATE_MODEL_DEF_REF":"Task"
+        ,"REBALANCER_CLASS_NAME":"org.apache.helix.task.WorkflowRebalancer"
+      }
+    }'
+    ```
+
+    * **DELETE** - delete a resource. Example: curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource
+    * **enable** - enable the resource. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource?command=enable
+    * **disable** - disable the resource. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource?command=disable
+    * **rebalance** - rebalance the resource. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource?command=rebalance
+
+* **"/clusters/{clusterName}/resources/{resourceName}/idealState"**
+    * Represents the ideal state of a resource with name {resourceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get idealstate.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource/idealState
+    {
+      "id":"myResource"
+      ,"simpleFields":{
+        "IDEAL_STATE_MODE":"AUTO"
+        ,"NUM_PARTITIONS":"2"
+        ,"REBALANCE_MODE":"SEMI_AUTO"
+        ,"REPLICAS":"2"
+        ,"STATE_MODEL_DEF_REF":"MasterSlave"
+      }
+      ,"listFields":{
+        "myResource_0":["host1", "host2"]
+        ,"myResource_1":["host2", "host1"]
+      }
+      ,"mapFields":{
+        "myResource_0":{
+          "host1":"MASTER"
+          ,"host2":"SLAVE"
+        }
+        ,"myResource_1":{
+          "host1":"SLAVE"
+          ,"host2":"MASTER"
+        }
+      }
+    }
+    ```
+
+* **"/clusters/{clusterName}/resources/{resourceName}/externalView"**
+    * Represents the external view of a resource with name {resourceName} in cluster {clusterName}
+    * **GET** - get the external view
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource/externalView
+    {
+      "id":"myResource"
+      ,"simpleFields":{
+        "IDEAL_STATE_MODE":"AUTO"
+        ,"NUM_PARTITIONS":"2"
+        ,"REBALANCE_MODE":"SEMI_AUTO"
+        ,"REPLICAS":"2"
+        ,"STATE_MODEL_DEF_REF":"MasterSlave"
+      }
+      ,"listFields":{
+        "myResource_0":["host1", "host2"]
+        ,"myResource_1":["host2", "host1"]
+      }
+      ,"mapFields":{
+        "myResource_0":{
+          "host1":"MASTER"
+          ,"host2":"OFFLINE"
+        }
+        ,"myResource_1":{
+          "host1":"SLAVE"
+          ,"host2":"MASTER"
+        }
+      }
+    }
+    ```
+
+* **"/clusters/{clusterName}/resources/{resourceName}/configs"**
+    * Represents resource-level configs for the resource with name {resourceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get resource configs.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource/configs
+    {
+      "id":"myDB"
+      "UserDefinedProperty" : "property"
+    }
+    ```
+
+#### Helix Instance and its sub-resources
+
+* **"/clusters/{clusterName}/instances"**
+    * Represents all instances in a cluster {clusterName}
+    * **GET** - list all instances in this cluster.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances
+    {
+      "id" : "myCluster",
+      "instances" : [ "host1", "host2", "host3", "host4"],
+      "online" : ["host1", "host4"],
+      "disabled" : ["host2"]
+    }
+    ```
+
+    * **POST** - enable/disable instances.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances?command=enable -d
+    {
+      "instances" : [ "host1", "host3" ]
+    }
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances?command=disable -d
+    {
+      "instances" : [ "host2", "host4" ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}"**
+    * Represents an instance in cluster {clusterName} with name {instanceName}
+    * **GET** - get instance information.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234
+    {
+      "id" : "host_1234",
+      "configs" : {
+        "HELIX_ENABLED" : "true",
+        "HELIX_HOST" : "host",
+        "HELIX_PORT" : "1234",
+        "HELIX_DISABLED_PARTITION" : [ ]
+      },
+      "liveInstance" : {
+        "HELIX_VERSION":"0.6.6.3",
+        "LIVE_INSTANCE":"4526@host",
+        "SESSION_ID":"359619c2d7efc14"
+      }
+    }
+    ```
+
+    * **PUT** - add a new instance with {instanceName}
+
+    ```
+    $curl -X PUT -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234 -d '
+    {
+      "id" : "host_1234",
+      "simpleFields" : {
+        "HELIX_ENABLED" : "true",
+        "HELIX_HOST" : "host",
+        "HELIX_PORT" : "1234",
+      }
+    }'
+    ```
+  
+    There's one important restriction for this operation: the {instanceName} should exactly match HELIX_HOST + "_" + HELIX_PORT. For example, if the host is localhost and the port is 1234, the instance name should be localhost_1234. Otherwise, the response won't contain any error, but the configuration will not be applied.
+
+    * **DELETE** - delete the instance. Example: curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234
+    * **enable** - enable the instance. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234?command=enable
+    * **disable** - disable the instance. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234?command=disable
+
+    * **addInstanceTag** -  add tags to this instance.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234?command=addInstanceTag -d '
+    {
+      "id" : "host_1234",
+      "instanceTags" : [ "tag_1", "tag_2, "tag_3" ]
+    }'
+    ```
+
+    * **removeInstanceTag** - remove a tag from this instance.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234?command=removeInstanceTag -d '
+    {
+      "id" : "host_1234",
+      "instanceTags" : [ "tag_1", "tag_2, "tag_3" ]
+    }'
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/resources"**
+    * Represents all resources and their partitions located on the instance in cluster {clusterName} with name {instanceName}. This is a new endpoint in v2.0.
+    * **GET** - return all resources that have partitions in the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/resources
+    {
+      "id" : "host_1234",
+      "resources" [ "myResource1", "myResource2", "myResource3"]
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/resources/{resource}"**
+    * Represents all partitions of the {resource} located on the instance in cluster {clusterName} with name {instanceName}. This is a new endpoint in v2.0.
+    * **GET** - return all partitions of the resource in the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/localhost_1234/resources/myResource1
+    {
+      "id":"myResource1"
+      ,"simpleFields":{
+        "STATE_MODEL_DEF":"MasterSlave"
+        ,"STATE_MODEL_FACTORY_NAME":"DEFAULT"
+        ,"BUCKET_SIZE":"0"
+        ,"SESSION_ID":"359619c2d7f109b"
+      }
+      ,"listFields":{
+      }
+      ,"mapFields":{
+        "myResource1_2":{
+          "CURRENT_STATE":"SLAVE"
+          ,"INFO":""
+        }
+        ,"myResource1_3":{
+          "CURRENT_STATE":"MASTER"
+          ,"INFO":""
+        }
+        ,"myResource1_0":{
+          "CURRENT_STATE":"MASTER"
+          ,"INFO":""
+        }
+        ,"myResource1_1":{
+          "CURRENT_STATE":"SLAVE"
+          ,"INFO":""
+        }
+      }
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/configs"**
+    * Represents instance configs in cluster {clusterName} with name {instanceName}. This is a new endpoint in v2.0.
+    * **GET** - return configs for the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/configs 
+    {
+      "id":"host_1234"
+      "configs" : {
+        "HELIX_ENABLED" : "true",
+        "HELIX_HOST" : "host"
+        "HELIX_PORT" : "1234",
+        "HELIX_DISABLED_PARTITION" : [ ]
+    }
+    ```
+
+    * **PUT** - update the instance config. PLEASE NOTE THAT THIS PUT FULLY OVERRIDES THE INSTANCE CONFIG
+
+    ```
+    $curl -X PUT -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/configs -d '
+    {
+      "id" : "host_1234",
+      "configs" : {
+        "HELIX_ENABLED" : "true",
+        "HELIX_HOST" : "host",
+        "HELIX_PORT" : "1234",
+        "HELIX_DISABLED_PARTITION" : [ ]
+      }
+    }'
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/errors"**
+    * List all the mappings of sessionId to partitions of resources. This is a new endpoint in v2.0.
+    * **GET** - get mapping
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/errors
+    {
+       "id":"host_1234"
+       "errors":{
+            "35sfgewngwese":{
+                "resource1":["p1","p2","p5"],
+                "resource2":["p2","p7"]
+             }
+        }
+    }
+    ```
+
+    * **DELETE** - clean up all error information from Helix.
+
+* **"/clusters/{clusterName}/instances/{instanceName}/errors/{sessionId}/{resourceName}/{partitionName}"**
+    * Represents error information for the partition {partitionName} of the resource {resourceName} under session {sessionId} on the instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get all error information.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/errors/35sfgewngwese/resource1/p1
+    {
+      "id":"35sfgewngwese_resource1"
+      ,"simpleFields":{
+      }
+      ,"listFields":{
+      }
+      ,"mapFields":{
+        "HELIX_ERROR     20170521-070822.000561 STATE_TRANSITION b819a34d-41b5-4b42-b497-1577501eeecb":{
+          "AdditionalInfo":"Exception while executing a state transition task ..."
+          ,"MSG_ID":"4af79e51-5f83-4892-a271-cfadacb0906f"
+          ,"Message state":"READ"
+        }
+      }
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/history"**
+    * Represents the session change history for the instance with {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get the instance change history.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/history
+    {
+      "id": "host_1234",
+      "LAST_OFFLINE_TIME": "183948792",
+      "HISTORY": [
+        "{DATE=2017-03-02T19:25:18:915, SESSION=459014c82ef3f5b, TIME=1488482718915}",
+        "{DATE=2017-03-10T22:24:53:246, SESSION=15982390e5d5c91, TIME=1489184693246}",
+        "{DATE=2017-03-11T02:03:52:776, SESSION=15982390e5d5d85, TIME=1489197832776}",
+        "{DATE=2017-03-13T18:15:00:778, SESSION=15982390e5d678d, TIME=1489428900778}",
+        "{DATE=2017-03-21T02:47:57:281, SESSION=459014c82effa82, TIME=1490064477281}",
+        "{DATE=2017-03-27T14:51:06:802, SESSION=459014c82f01a07, TIME=1490626266802}",
+        "{DATE=2017-03-30T00:05:08:321, SESSION=5590151804e2c78, TIME=1490832308321}",
+        "{DATE=2017-03-30T01:17:34:339, SESSION=2591d53b0421864, TIME=1490836654339}",
+        "{DATE=2017-03-30T17:31:09:880, SESSION=2591d53b0421b2a, TIME=1490895069880}",
+        "{DATE=2017-03-30T18:05:38:220, SESSION=359619c2d7f109b, TIME=1490897138220}"
+      ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/messages"**
+    * Represents all uncompleted messages currently received by the instance. This is a new endpoint in v2.0.
+    * **GET** - list all uncompleted messages received by the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/messages
+    {
+      "id": "host_1234",
+      "new_messages": ["0b8df4f2-776c-4325-96e7-8fad07bd9048", "13a8c0af-b77e-4f5c-81a9-24fedb62cf58"],
+      "read_messages": ["19887b07-e9b8-4fa6-8369-64146226c454"]
+      "total_message_count" : 100,
+      "read_message_count" : 50
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/messages/{messageId}**
+    * Represents the messages currently received by by the instance with message given message id. This is new endpoint in v2.0.
+    * **GET** - get the message content with {messageId} received by the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/messages/0b8df4f2-776c-4325-96e7-8fad07bd9048
+    {
+      "id": "0b8df4f2-776c-4325-96e7-8fad07bd9048",
+      "CREATE_TIMESTAMP":"1489997469400",
+      "ClusterEventName":"messageChange",
+      "FROM_STATE":"OFFLINE",
+      "MSG_ID":"0b8df4f2-776c-4325-96e7-8fad07bd9048",
+      "MSG_STATE":"new",
+      "MSG_TYPE":"STATE_TRANSITION",
+      "PARTITION_NAME":"Resource1_243",
+      "RESOURCE_NAME":"Resource1",
+      "SRC_NAME":"controller_1234",
+      "SRC_SESSION_ID":"15982390e5d5a76",
+      "STATE_MODEL_DEF":"LeaderStandby",
+      "STATE_MODEL_FACTORY_NAME":"myFactory",
+      "TGT_NAME":"host_1234",
+      "TGT_SESSION_ID":"459014c82efed9b",
+      "TO_STATE":"DROPPED"
+    }
+    ```
+
+    * **DELETE** - delete the message with {messageId}. Example: $curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/messages/0b8df4f2-776c-4325-96e7-8fad07bd9048
+
+* **"/clusters/{clusterName}/instances/{instanceName}/healthreports"**
+    * Represents all health reports on instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return the names of the health reports collected from the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/healthreports
+    {
+      "id" : "host_1234",
+      "healthreports" : [ "report1", "report2", "report3" ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/healthreports/{reportName}"**
+    * Represents the health report named {reportName} on instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return the content of the health report collected from the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/healthreports/ClusterStateStats
+    {
+      "id":"ClusterStateStats"
+      ,"simpleFields":{
+        "CREATE_TIMESTAMP":"1466753504476"
+        ,"TimeStamp":"1466753504476"
+      }
+      ,"listFields":{
+      }
+      ,"mapFields":{
+        "UserDefinedData":{
+          "Data1":"0"
+          ,"Data2":"0.0"
+        }
+      }
+    }
+    ```
+
+
+#### Helix Workflow and its sub-resources
+
+* **"/clusters/{clusterName}/workflows"**
+    * Represents all workflows in cluster {clusterName}
+    * **GET** - list all workflows in this cluster. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows
+
+    ```
+    {
+      "Workflows" : [ "Workflow1", "Workflow2" ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/workflows/{workflowName}"**
+    * Represents workflow with name {workflowName} in cluster {clusterName}
+    * **GET** - return workflow information. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1
+
+    ```
+    {
+       "id" : "Workflow1",
+       "WorkflowConfig" : {
+           "Expiry" : "43200000",
+           "FailureThreshold" : "0",
+           "IsJobQueue" : "true",
+           "LAST_PURGE_TIME" : "1490820801831",
+           "LAST_SCHEDULED_WORKFLOW" : "Workflow1_20170329T000000",
+           "ParallelJobs" : "1",
+           "RecurrenceInterval" : "1",
+           "RecurrenceUnit" : "DAYS",
+           "START_TIME" : "1482176880535",
+           "STATE" : "STOPPED",
+           "StartTime" : "12-19-2016 00:00:00",
+           "TargetState" : "START",
+           "Terminable" : "false",
+           "capacity" : "500"
+        },
+       "WorkflowContext" : {
+           "JOB_STATES": {
+             "Job1": "COMPLETED",
+             "Job2": "COMPLETED"
+           },
+           "StartTime": {
+             "Job1": "1490741582339",
+             "Job2": "1490741580204"
+           },
+           "FINISH_TIME": "1490741659135",
+           "START_TIME": "1490741580196",
+           "STATE": "COMPLETED"
+       },
+       "Jobs" : ["Job1","Job2","Job3"],
+       "ParentJobs" : {
+            "Job1":["Job2", "Job3"],
+            "Job2":["Job3"]
+       }
+    }
+    ```
+
+    * **PUT** - create a workflow with {workflowName}. Example : curl -X PUT -H "Content-Type: application/json" -d [WorkflowExample.json](./WorkflowExample.json) http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1
+    * **DELETE** - delete the workflow. Example : curl -X DELETE http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1
+    * **start** - start the workflow. Example : curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1?command=start
+    * **stop** - pause the workflow. Example : curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1?command=stop
+    * **resume** - resume the workflow. Example : curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1?command=resume
+    * **cleanup** - clean up all expired jobs in the workflow; this operation is only allowed if the workflow is a JobQueue. Example : curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1?command=clean
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/configs"**
+    * Represents the config of workflow {workflowName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return workflow configs. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/configs
+
+    ```
+    {
+        "id": "Workflow1",
+        "Expiry" : "43200000",
+        "FailureThreshold" : "0",
+        "IsJobQueue" : "true",
+        "START_TIME" : "1482176880535",
+        "StartTime" : "12-19-2016 00:00:00",
+        "TargetState" : "START",
+        "Terminable" : "false",
+        "capacity" : "500"
+    }
+    ```
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/context"**
+    * Represents the runtime information of workflow {workflowName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return workflow runtime information. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/context
+
+    ```
+    {
+        "id": "WorkflowContext",
+        "JOB_STATES": {
+             "Job1": "COMPLETED",
+             "Job2": "COMPLETED"
+         },
+         "StartTime": {
+             "Job1": "1490741582339",
+             "Job2": "1490741580204"
+         },
+         "FINISH_TIME": "1490741659135",
+         "START_TIME": "1490741580196",
+         "STATE": "COMPLETED"
+    }
+    ```
+
+
+#### Helix Job and its sub-resources
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/jobs"**
+    * Represents all jobs in workflow {workflowName} in cluster {clusterName}
+    * **GET** - return all job names in this workflow. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs
+
+    ```
+    {
+        "id":"Jobs",
+        "Jobs":["Job1","Job2","Job3"]
+    }
+    ```
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/jobs/{jobName}"**
+    * Represents job with {jobName} within {workflowName} in cluster {clusterName}
+    * **GET** - return job information. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1
+
+    ```
+    {
+        "id":"Job1",
+        "JobConfig":{
+            "WorkflowID":"Workflow1",
+            "IgnoreDependentJobFailure":"false",
+            "MaxForcedReassignmentsPerTask":"3"
+        },
+        "JobContext":{
+            "START_TIME":"1491005863291",
+            "FINISH_TIME":"1491005902612",
+            "Tasks":[
+                 {
+                     "id":"0",
+                     "ASSIGNED_PARTICIPANT":"P1",
+                     "FINISH_TIME":"1491005898905",
+                     "INFO":"",
+                     "NUM_ATTEMPTS":"1",
+                     "START_TIME":"1491005863307",
+                     "STATE":"COMPLETED",
+                     "TARGET":"DB_0"
+                 },
+                 {
+                     "id":"1",
+                     "ASSIGNED_PARTICIPANT":"P5",
+                     "FINISH_TIME":"1491005895443",
+                     "INFO":"",
+                     "NUM_ATTEMPTS":"1",
+                     "START_TIME":"1491005863307",
+                     "STATE":"COMPLETED",
+                     "TARGET":"DB_1"
+                 }
+             ]
+         }
+    }
+    ```
+
+    * **PUT** - insert a job with {jobName} into the workflow; this operation is only allowed if the workflow is a JobQueue.  
+      Example : curl -X PUT -H "Content-Type: application/json" -d [JobExample.json](./JobExample.json) http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1
+    * **DELETE** - delete the job from the workflow; this operation is only allowed if the workflow is a JobQueue.  
+      Example : curl -X DELETE http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/jobs/{jobName}/configs"**
+    * Represents the job config of {jobName} within workflow {workflowName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return job config. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1/configs
+
+    ```
+    {
+      "id":"JobConfig",
+      "WorkflowID":"Workflow1",
+      "IgnoreDependentJobFailure":"false",
+      "MaxForcedReassignmentsPerTask":"3"
+    }
+    ```
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/jobs/{jobName}/context"**
+    * Represents the runtime information of job {jobName} in workflow {workflowName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return job runtime information. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1/context
+
+    ```
+    {
+       "id":"JobContext",
+       "START_TIME":"1491005863291",
+       "FINISH_TIME":"1491005902612",
+       "Tasks":[
+                 {
+                     "id":"0",
+                     "ASSIGNED_PARTICIPANT":"P1",
+                     "FINISH_TIME":"1491005898905",
+                     "INFO":"",
+                     "NUM_ATTEMPTS":"1",
+                     "START_TIME":"1491005863307",
+                     "STATE":"COMPLETED",
+                     "TARGET":"DB_0"
+                 },
+                 {
+                     "id":"1",
+                     "ASSIGNED_PARTICIPANT":"P5",
+                     "FINISH_TIME":"1491005895443",
+                     "INFO":"",
+                     "NUM_ATTEMPTS":"1",
+                     "START_TIME":"1491005863307",
+                     "STATE":"COMPLETED",
+                     "TARGET":"DB_1"
+                 }
+       ]
+    }
+    ```

http://git-wip-us.apache.org/repos/asf/helix/blob/4a121ef2/website/0.8.2/src/site/markdown/tutorial_spectator.md
----------------------------------------------------------------------
diff --git a/website/0.8.2/src/site/markdown/tutorial_spectator.md b/website/0.8.2/src/site/markdown/tutorial_spectator.md
new file mode 100644
index 0000000..e43cd6b
--- /dev/null
+++ b/website/0.8.2/src/site/markdown/tutorial_spectator.md
@@ -0,0 +1,75 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Spectator</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Spectator
+
+Next, we\'ll learn how to implement a __spectator__.  Typically, a spectator needs to react to changes within the distributed system.  Examples: a client that needs to know where to send a request, or a topic consumer in a consumer group.  The spectator is automatically informed of changes in the _external state_ of the cluster, without having to write any code to keep track of the other components in the system.
+
+### Start a Connection
+
+As with a participant, the Helix manager is the common component that connects each system component with the cluster.
+
+It requires the following parameters:
+
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port
+* instanceType: Type of the process. This can be one of the following types, in this case, use SPECTATOR:
+    * CONTROLLER: Process that controls the cluster, any number of controllers can be started but only one will be active at any given time
+    * PARTICIPANT: Process that performs the actual task in the distributed system
+    * SPECTATOR: Process that observes the changes in the cluster
+    * ADMIN: To carry out system admin actions
+* zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3
+
+After the Helix manager instance is created, the only thing that needs to be registered is the listener.  When the ExternalView changes, the listener is notified.
+
+A spectator observes the cluster and is notified when the state of the system changes. Helix consolidates the state of the entire cluster into a single ZNode called the ExternalView.
+Helix provides a default implementation, RoutingTableProvider, that caches the cluster state and updates it when there is a change in the cluster.
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.SPECTATOR,
+                                                zkConnectString);
+manager.connect();
+RoutingTableProvider routingTableProvider = new RoutingTableProvider();
+manager.addExternalViewChangeListener(routingTableProvider);
+```
+
+### Spectator Code
+
+In the following code snippet, the application sends the request to a valid instance by interrogating the external view.  Suppose the desired resource for this request is in the partition myDB_1.
+
+```
+// instances = routingTableProvider.getInstances("RESOURCE_NAME", "PARTITION_NAME", "PARTITION_STATE");
+instances = routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
+
+////////////////////////////////////////////////////////////////////////////////////////////////
+// Application-specific code to send a request to one of the instances                        //
+////////////////////////////////////////////////////////////////////////////////////////////////
+
+theInstance = instances.get(0);  // should choose an instance and throw an exception if none are available
+result = theInstance.sendRequest(yourApplicationRequest, responseObject);
+
+```
+
+When the external view changes, the application needs to react by sending requests to a different instance.
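+
+If the RoutingTableProvider does not fit your needs, you can also register your own listener. Below is a minimal sketch, assuming the listener interface from the org.apache.helix.api.listeners package; refreshRouting() is a hypothetical application-specific helper, not part of Helix.
+
+```
+import java.util.List;
+
+import org.apache.helix.NotificationContext;
+import org.apache.helix.api.listeners.ExternalViewChangeListener;
+import org.apache.helix.model.ExternalView;
+
+public class MyRoutingListener implements ExternalViewChangeListener {
+  @Override
+  public void onExternalViewChange(List<ExternalView> externalViewList,
+                                   NotificationContext changeContext) {
+    // Recompute which instance should receive requests for each partition
+    refreshRouting(externalViewList);
+  }
+
+  // Hypothetical application-specific routine that rebuilds the routing table
+  private void refreshRouting(List<ExternalView> externalViews) {
+    // Application-specific code to update request routing
+  }
+}
+```
+
+The listener is registered the same way as the RoutingTableProvider: manager.addExternalViewChangeListener(new MyRoutingListener());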

http://git-wip-us.apache.org/repos/asf/helix/blob/4a121ef2/website/0.8.2/src/site/markdown/tutorial_state.md
----------------------------------------------------------------------
diff --git a/website/0.8.2/src/site/markdown/tutorial_state.md b/website/0.8.2/src/site/markdown/tutorial_state.md
new file mode 100644
index 0000000..856b8b3
--- /dev/null
+++ b/website/0.8.2/src/site/markdown/tutorial_state.md
@@ -0,0 +1,131 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - State Machine Configuration</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): State Machine Configuration
+
+In this chapter, we\'ll learn about the state models provided by Helix, and how to create your own custom state model.
+
+### State Models
+
+Helix comes with 3 default state models that are commonly used.  It is possible to have multiple state models in a cluster.
+Every resource that is added should be configured to use a state model that governs its _ideal state_.
+
+#### MASTER-SLAVE
+
+* 3 states: OFFLINE, SLAVE, MASTER
+* Maximum number of masters: 1
+* The number of slaves is based on the replication factor, which can be specified while adding the resource.
+
+
+#### ONLINE-OFFLINE
+
+* Has 2 states: OFFLINE and ONLINE.  This simple state model is a good starting point for most applications.
+
+#### LEADER-STANDBY
+
+* 1 Leader and multiple stand-bys.  The idea is that exactly one leader accomplishes a designated task; the stand-bys are ready to take over if the leader fails.
+
+### Constraints
+
+In addition to the state machine configuration, one can specify the constraints of states and transitions.
+
+For example, one can say:
+
+* MASTER:1
+<br/>Maximum number of replicas in MASTER state at any time is 1
+
+* OFFLINE-SLAVE:5
+<br/>Maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5 in this example (see the sketch below).
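+
+Below is a hedged sketch of how such a transition constraint could be registered through HelixAdmin; the builder and attribute names follow Helix\'s cluster-constraint API, while the admin handle, cluster name, and the constraint id "constraint1" are placeholders for illustration.
+
+```
+// Sketch: limit concurrent OFFLINE-SLAVE state-transition messages to 5 across the cluster.
+// Uses org.apache.helix.model.builder.ConstraintItemBuilder and the ConstraintAttribute /
+// ConstraintType enums nested in org.apache.helix.model.ClusterConstraints.
+ConstraintItemBuilder builder = new ConstraintItemBuilder();
+builder.addConstraintAttribute(ConstraintAttribute.MESSAGE_TYPE.toString(), "STATE_TRANSITION")
+       .addConstraintAttribute(ConstraintAttribute.TRANSITION.toString(), "OFFLINE-SLAVE")
+       .addConstraintAttribute(ConstraintAttribute.CONSTRAINT_VALUE.toString(), "5");
+
+admin.setConstraint(clusterName, ConstraintType.MESSAGE_CONSTRAINT, "constraint1", builder.build());
+```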
+
+#### Dynamic State Constraints
+
+We also support two dynamic upper bounds for the number of replicas in each state:
+
+* N: The number of replicas in the state is at most the number of live participants in the cluster
+* R: The number of replicas in the state is at most the specified replica count for the partition
+
+#### State Priority
+
+Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 MASTER and 2 SLAVES, but only 1 node is active, Helix must promote it to MASTER. This behavior is achieved by providing the state priority list as \[MASTER, SLAVE\].
+
+#### State Transition Priority
+
+Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. You can control this by overriding the priority order.
+
+### Special States
+
+There are a few Helix-defined states that are important to be aware of.
+
+#### DROPPED
+
+The DROPPED state is used to signify a replica that was served by a given participant, but is no longer served. This allows Helix and its participants to effectively clean up. There are two requirements that every new state model should follow with respect to the DROPPED state:
+
+* The DROPPED state must be defined
+* There must be a path to DROPPED for every state in the model
+
+#### ERROR
+
+The ERROR state is used whenever the participant serving a partition encounters an error and cannot continue to serve the partition. HelixAdmin has \"reset\" functionality to allow participants to recover from the ERROR state, as sketched below.
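+
+A replica stuck in the ERROR state can be reset through HelixAdmin. This is only a sketch: it assumes a ZooKeeper at localhost:2199, and the cluster, instance, resource, and partition names are placeholders.
+
+```
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+
+// Reset partition MyResource_0 of resource MyResource on participant localhost_12913
+admin.resetPartition("MYCLUSTER", "localhost_12913", "MyResource",
+    Arrays.asList("MyResource_0"));
+```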
+
+### Annotated Example
+
+Below is a complete definition of a Master-Slave state model. Notice the fields marked REQUIRED; these are essential for any state model definition.
+
+```
+StateModelDefinition stateModel = new StateModelDefinition.Builder("MasterSlave")
+  // OFFLINE is the state that the system starts in (initial state is REQUIRED)
+  .initialState("OFFLINE")
+
+  // Lowest number here indicates highest priority, no value indicates lowest priority
+  .addState("MASTER", 1)
+  .addState("SLAVE", 2)
+  .addState("OFFLINE")
+
+  // Note the special inclusion of the DROPPED state (REQUIRED)
+  .addState(HelixDefinedState.DROPPED.toString())
+
+  // No more than one master allowed
+  .upperBound("MASTER", 1)
+
+  // R indicates an upper bound of number of replicas for each partition
+  .dynamicUpperBound("SLAVE", "R")
+
+  // Add some high-priority transitions
+  .addTransition("SLAVE", "MASTER", 1)
+  .addTransition("OFFLINE", "SLAVE", 2)
+
+  // Using the same priority value indicates that these transitions can fire in any order
+  .addTransition("MASTER", "SLAVE", 3)
+  .addTransition("SLAVE", "OFFLINE", 3)
+
+  // Not specifying a value defaults to lowest priority
+  // Notice the inclusion of the OFFLINE to DROPPED transition
+  // Since every state has a path to OFFLINE, they each now have a path to DROPPED (REQUIRED)
+  .addTransition("OFFLINE", HelixDefinedState.DROPPED.toString())
+
+  // Create the StateModelDefinition instance
+  .build();
+
+  // Use the isValid() function to make sure the StateModelDefinition will work without issues
+  Assert.assertTrue(stateModel.isValid());
+```