Posted to commits@helix.apache.org by jx...@apache.org on 2018/03/31 00:36:27 UTC

[6/8] helix git commit: Release note for Helix 0.8.1

http://git-wip-us.apache.org/repos/asf/helix/blob/8bdfc912/website/0.8.1/src/site/markdown/tutorial_rest_service.md
----------------------------------------------------------------------
diff --git a/website/0.8.1/src/site/markdown/tutorial_rest_service.md b/website/0.8.1/src/site/markdown/tutorial_rest_service.md
new file mode 100644
index 0000000..ca2a02e
--- /dev/null
+++ b/website/0.8.1/src/site/markdown/tutorial_rest_service.md
@@ -0,0 +1,951 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - REST Service 2.0</title>
+</head>
+
+
+
+## [Helix Tutorial](./Tutorial.html): REST Service 2.0
+
+New Helix REST service supported features:
+
+* Expose all admin operations via a RESTful API.
+    * All Helix admin operations, including those defined in HelixAdmin.java, ConfigAccessor.java, etc., are exposed via the REST API.
+* Support all task framework APIs via REST.
+    * All current task framework operations are supported through the REST API as well.
+* More standard RESTful API
+    * Use the standard HTTP methods (GET, POST, PUT, DELETE) where possible, instead of customized commands as today.
+    * Customized commands are used only when there is no corresponding HTTP method, for example, rebalancing a resource or disabling an instance.
+* Make the Helix REST service a separately deployable service.
+* Enable access/audit logging for all write accesses.
+
+### Installation
+The command line tool comes with the helix-core package:
+
+Get the command line tool:
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/helix.git
+cd helix
+git checkout tags/helix-0.8.1
+./build
+cd helix-rest/target/helix-rest-pkg/bin
+chmod +x *.sh
+```
+
+Get help:
+
+```
+./run-rest-admin.sh --help
+```
+
+Start the REST server:
+
+```
+./run-rest-admin.sh --port 1234 --zkSvr localhost:2121
+```
+
+### Helix REST 2.0 Endpoint
+
+Helix REST 2.0 endpoints start with the /admin/v2 prefix, and the rest of the URL mostly follows the current URL convention.  This allows us to serve the v2.0 endpoints alongside the current Helix web interface. Some sample v2.0 endpoints look like the following:
+
+```
+curl -X GET http://localhost:12345/admin/v2/clusters
+curl -X POST http://localhost:12345/admin/v2/clusters/myCluster
+curl -X POST "http://localhost:12345/admin/v2/clusters/myCluster?command=activate&superCluster=controller_cluster"
+curl http://localhost:12345/admin/v2/clusters/myCluster/resources/myResource/IdealState
+```
+
+### REST Endpoints and Supported Operations
+#### Operations on Helix Cluster
+
+* **"/clusters"**
+    *  Represents all Helix-managed clusters connected to the given ZooKeeper
+    *  **GET** -- List all Helix managed clusters. Example: curl http://localhost:1234/admin/v2/clusters
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters
+    {
+      "clusters" : [ "cluster1", "cluster2", "cluster3"]
+    }
+    ```
+
+
+* **"/clusters/{clusterName}"**
+    * Represents a Helix cluster with name {clusterName}
+    * **GET** -- return the cluster info. Example: curl http://localhost:1234/admin/v2/clusters/myCluster
+
+        ```
+        $curl http://localhost:1234/admin/v2/clusters/myCluster
+        {
+          "id" : "myCluster",
+          "paused" : true,
+          "disabled" : true,
+          "controller" : "helix.apache.org:1234",
+          "instances" : [ "aaa.helix.apache.org:1234", "bbb.helix.apache.org:1234" ],
+          "liveInstances" : ["aaa.helix.apache.org:1234"],
+          "resources" : [ "resource1", "resource2", "resource3" ],
+          "stateModelDefs" : [ "MasterSlave", "LeaderStandby", "OnlineOffline" ]
+        }
+        ```
+
+    * **PUT** – create a new cluster with {clusterName}; it returns 200 if the cluster already exists. Example: curl -X PUT http://localhost:1234/admin/v2/clusters/myCluster
+    * **DELETE** – delete this cluster.
+      Example: curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster
+    * **activate** -- Link this cluster to a Helix super (controller) cluster, i.e., add the cluster as a resource to the super cluster.
+      Example: curl -X POST "http://localhost:1234/admin/v2/clusters/myCluster?command=activate&superCluster=myCluster"
+    * **expand** -- When a set of new nodes is added to the cluster, use this command to rebalance resources from the existing instances onto the newly added instances.
+      Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster?command=expand
+    * **enable** – enable/resume the cluster.
+      Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster?command=enable
+    * **disable** – disable/pause the cluster.
+      Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster?command=disable
+
+* **"/clusters/{clusterName}/configs"**
+    * Represents cluster-level configs for the cluster {clusterName}
+    * **GET**: get all configs.
+    
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/configs
+    {
+      "id" : "myCluster",
+      "simpleFields" : {
+        "PERSIST_BEST_POSSIBLE_ASSIGNMENT" : "true"
+      },
+      "listFields" : {
+      },
+      "mapFields" : {
+      }
+    }
+    ```
+
+    * **POST**: update or delete one or more config entries.  
+    update -- Update the entries included in the input.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/configs?command=update -d '
+    {
+     "id" : "myCluster",
+      "simpleFields" : {
+        "PERSIST_BEST_POSSIBLE_ASSIGNMENT" : "true"
+      },
+      "listFields" : {
+        "disabledPartition" : ["p1", "p2", "p3"]
+      },
+      "mapFields" : {
+      }
+    }'
+    ```
+  
+      delete -- Remove the entries included in the input from the current config.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/configs?command=delete -d '
+    {
+      "id" : "myCluster",
+      "simpleFields" : {
+      },
+      "listFields" : {
+        "disabledPartition" : ["p1", "p3"]
+      },
+      "mapFields" : {
+      }
+    }'
+    ```
+
+* **"/clusters/{clusterName}/controller"**
+    * Represents the controller for cluster {clusterName}.
+    * **GET** – return controller information
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/controller
+    {
+      "id" : "myCluster",
+      "controller" : "test.helix.apache.org:1234",
+      "HELIX_VERSION":"0.8.1",
+      "LIVE_INSTANCE":"16261@test.helix.apache.org:1234",
+      "SESSION_ID":"35ab496aba54c99"
+    }
+    ```
+
+* **"/clusters/{clusterName}/controller/errors"**
+    * Represents error information for the controller of cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** – get all error information.
+    * **DELETE** – clean up all error logs.
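+
+    For example (the host, port, and cluster name follow the conventions used above):
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/controller/errors
+    $curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/controller/errors
+    ```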
+
+
+* **"/clusters/{clusterName}/controller/history"**
+    * Represents the change history of the leader controller of cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** – get the leader controller history.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/controller/history
+    {
+      "id" : "myCluster",
+      "history" [
+          "{DATE=2017-03-21-16:57:14, CONTROLLER=test1.helix.apache.org:1234, TIME=1490115434248}",
+          "{DATE=2017-03-27-22:35:16, CONTROLLER=test3.helix.apache.org:1234, TIME=1490654116484}",
+          "{DATE=2017-03-27-22:35:24, CONTROLLER=test2.helix.apache.org:1234, TIME=1490654124926}"
+      ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/controller/messages"**
+    * Represents all uncompleted messages currently received by the controller of cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** – list all uncompleted messages received by the controller.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/controller/messages
+    {
+      "id" : "myCluster",
+      "count" : 5,
+      "messages" [
+          "0b8df4f2-776c-4325-96e7-8fad07bd9048",
+          "13a8c0af-b77e-4f5c-81a9-24fedb62cf58"
+      ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/controller/messages/{messageId}"**
+    * Represents the message with id {messageId} received by the controller of cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get the message with {messageId} received by the controller.
+    * **DELETE** - delete the message with {messageId}
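+
+    For example (reusing the hypothetical message id from the listing above):
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/controller/messages/0b8df4f2-776c-4325-96e7-8fad07bd9048
+    $curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/controller/messages/0b8df4f2-776c-4325-96e7-8fad07bd9048
+    ```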
+
+
+* **"/clusters/{clusterName}/statemodeldefs/"**
+    * Represents all the state model definitions defined in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get all the state model definitions in the cluster.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/statemodeldefs
+    {
+      "id" : "myCluster",
+      "stateModelDefs" : [ "MasterSlave", "LeaderStandby", "OnlineOffline" ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/statemodeldefs/{statemodeldef}"**
+    * Represents the state model definition {statemodeldef} defined in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get the state model definition
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/statemodeldefs/MasterSlave
+    {
+      "id" : "MasterSlave",
+      "simpleFields" : {
+        "INITIAL_STATE" : "OFFLINE"
+      },
+      "mapFields" : {
+        "DROPPED.meta" : {
+          "count" : "-1"
+        },
+        "ERROR.meta" : {
+          "count" : "-1"
+        },
+        "ERROR.next" : {
+          "DROPPED" : "DROPPED",
+          "OFFLINE" : "OFFLINE"
+        },
+        "MASTER.meta" : {
+          "count" : "1"
+        },
+        "MASTER.next" : {
+          "SLAVE" : "SLAVE",
+          "DROPPED" : "SLAVE",
+          "OFFLINE" : "SLAVE"
+        },
+        "OFFLINE.meta" : {
+          "count" : "-1"
+        },
+        "OFFLINE.next" : {
+          "SLAVE" : "SLAVE",
+          "MASTER" : "SLAVE",
+          "DROPPED" : "DROPPED"
+        },
+        "SLAVE.meta" : {
+          "count" : "R"
+        },
+        "SLAVE.next" : {
+          "MASTER" : "MASTER",
+          "DROPPED" : "OFFLINE",
+          "OFFLINE" : "OFFLINE"
+        }
+      },
+      "listFields" : {
+        "STATE_PRIORITY_LIST" : [ "MASTER", "SLAVE", "OFFLINE", "DROPPED", "ERROR" ],
+        "STATE_TRANSITION_PRIORITYLIST" : [ "MASTER-SLAVE", "SLAVE-MASTER", "OFFLINE-SLAVE", "SLAVE-OFFLINE", "OFFLINE-DROPPED" ]
+      }
+    }
+    ```
+
+    * **POST** - add a new state model definition with {statemodeldef}
+    * **DELETE** - delete the state model definition
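+
+    For example (MyStateModel is a hypothetical name; the POST body is a state model definition in the ZNRecord format shown above):
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/statemodeldefs/MyStateModel -d '{...}'
+    $curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/statemodeldefs/MyStateModel
+    ```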
+
+
+#### Helix "Resource" and its sub-resources
+
+* **"/clusters/{clusterName}/resources"**
+    * Represents all resources in a cluster.
+    * **GET** - list all resources with their IdealStates and ExternalViews.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources
+    {
+      "id" : "myCluster",
+      "idealstates" : [ "idealstate1", "idealstate2", "idealstate3" ],
+      "externalviews" : [ "idealstate1", "idealstate3" ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/resources/{resourceName}"**
+    * Represents a resource in cluster {clusterName} with name {resourceName}
+    * **GET** - get resource info
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources/resource1
+    {
+      "id" : "resource1",
+      "resourceConfig" : {},
+      "idealState" : {},
+      "externalView" : {}
+    }
+    ```
+
+    * **PUT** - add a resource with {resourceName}
+
+    ```
+    $curl -X PUT -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource -d '
+    {
+      "id":"myResource",
+      "simpleFields":{
+        "STATE_MODEL_FACTORY_NAME":"DEFAULT"
+        ,"EXTERNAL_VIEW_DISABLED":"true"
+        ,"NUM_PARTITIONS":"1"
+        ,"REBALANCE_MODE":"TASK"
+        ,"REPLICAS":"1"
+        ,"IDEAL_STATE_MODE":"AUTO"
+        ,"STATE_MODEL_DEF_REF":"Task"
+        ,"REBALANCER_CLASS_NAME":"org.apache.helix.task.WorkflowRebalancer"
+      }
+    }'
+    ```
+
+    * **DELETE** - delete a resource. Example: curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource
+    * **enable** - enable the resource. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource?command=enable
+    * **disable** - disable the resource. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource?command=disable
+    * **rebalance** - rebalance the resource. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource?command=rebalance
+
+* **"/clusters/{clusterName}/resources/{resourceName}/idealState"**
+    * Represents the ideal state of a resource with name {resourceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get idealstate.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource/idealState
+    {
+      "id":"myResource"
+      ,"simpleFields":{
+        "IDEAL_STATE_MODE":"AUTO"
+        ,"NUM_PARTITIONS":"2"
+        ,"REBALANCE_MODE":"SEMI_AUTO"
+        ,"REPLICAS":"2"
+        ,"STATE_MODEL_DEF_REF":"MasterSlave"
+      }
+      ,"listFields":{
+        "myResource_0":["host1", "host2"]
+        ,"myResource_1":["host2", "host1"]
+      }
+      ,"mapFields":{
+        "myResource_0":{
+          "host1":"MASTER"
+          ,"host2":"SLAVE"
+        }
+        ,"myResource_1":{
+          "host1":"SLAVE"
+          ,"host2":"MASTER"
+        }
+      }
+    }
+    ```
+
+* **"/clusters/{clusterName}/resources/{resourceName}/externalView"**
+    * Represents the external view of a resource with name {resourceName} in cluster {clusterName}
+    * **GET** - get the external view
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource/externalView
+    {
+      "id":"myResource"
+      ,"simpleFields":{
+        "IDEAL_STATE_MODE":"AUTO"
+        ,"NUM_PARTITIONS":"2"
+        ,"REBALANCE_MODE":"SEMI_AUTO"
+        ,"REPLICAS":"2"
+        ,"STATE_MODEL_DEF_REF":"MasterSlave"
+      }
+      ,"listFields":{
+        "myResource_0":["host1", "host2"]
+        ,"myResource_1":["host2", "host1"]
+      }
+      ,"mapFields":{
+        "myResource_0":{
+          "host1":"MASTER"
+          ,"host2":"OFFLINE"
+        }
+        ,"myResource_1":{
+          "host1":"SLAVE"
+          ,"host2":"MASTER"
+        }
+      }
+    }
+    ```
+
+* **"/clusters/{clusterName}/resources/{resourceName}/configs"**
+    * Represents resource-level configs for the resource with name {resourceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get resource configs.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/resources/myResource/configs
+    {
+      "id":"myDB"
+      "UserDefinedProperty" : "property"
+    }
+    ```
+
+#### Helix Instance and its sub-resources
+
+* **"/clusters/{clusterName}/instances"**
+    * Represents all instances in a cluster {clusterName}
+    * **GET** - list all instances in this cluster.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances
+    {
+      "id" : "myCluster",
+      "instances" : [ "host1", "host2", "host3", "host4"],
+      "online" : ["host1", "host4"],
+      "disabled" : ["host2"]
+    }
+    ```
+
+    * **POST** - enable/disable instances.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances?command=enable -d '
+    {
+      "instances" : [ "host1", "host3" ]
+    }'
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances?command=disable -d '
+    {
+      "instances" : [ "host2", "host4" ]
+    }'
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}"**
+    * Represents an instance in cluster {clusterName} with name {instanceName}
+    * **GET** - get instance information.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234
+    {
+      "id" : "host_1234",
+      "configs" : {
+        "HELIX_ENABLED" : "true",
+        "HELIX_HOST" : "host",
+        "HELIX_PORT" : "1234",
+        "HELIX_DISABLED_PARTITION" : [ ]
+      },
+      "liveInstance" : {
+        "HELIX_VERSION":"0.6.6.3",
+        "LIVE_INSTANCE":"4526@host",
+        "SESSION_ID":"359619c2d7efc14"
+      }
+    }
+    ```
+
+    * **PUT** - add a new instance with {instanceName}
+
+    ```
+    $curl -X PUT -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234 -d '
+    {
+      "id" : "host_1234",
+      "simpleFields" : {
+        "HELIX_ENABLED" : "true",
+        "HELIX_HOST" : "host",
+        "HELIX_PORT" : "1234",
+      }
+    }'
+    ```
+  
+    There's one important restriction for this operation: {instanceName} must exactly match HELIX_HOST + "_" + HELIX_PORT. For example, if the host is localhost and the port is 1234, the instance name must be localhost_1234. Otherwise, the response will not contain any error, but the configuration will not be filled in.
+
+    * **DELETE** - delete the instance. Example: curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234
+    * **enable** - enable the instance. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234?command=enable
+    * **disable** - disable the instance. Example: curl -X POST http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234?command=disable
+
+    * **addInstanceTag** -  add tags to this instance.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234?command=addInstanceTag -d '
+    {
+      "id" : "host_1234",
+      "instanceTags" : [ "tag_1", "tag_2, "tag_3" ]
+    }'
+    ```
+
+    * **removeInstanceTag** - remove tags from this instance.
+
+    ```
+    $curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234?command=removeInstanceTag -d '
+    {
+      "id" : "host_1234",
+      "instanceTags" : [ "tag_1", "tag_2, "tag_3" ]
+    }'
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/resources"**
+    * Represents all resources and their partitions located on the instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return all resources that have partitions in the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/resources
+    {
+      "id" : "host_1234",
+      "resources" [ "myResource1", "myResource2", "myResource3"]
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/resources/{resource}"**
+    * Represents all partitions of the {resource} located on the instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return all partitions of the resource in the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/localhost_1234/resources/myResource1
+    {
+      "id":"myResource1"
+      ,"simpleFields":{
+        "STATE_MODEL_DEF":"MasterSlave"
+        ,"STATE_MODEL_FACTORY_NAME":"DEFAULT"
+        ,"BUCKET_SIZE":"0"
+        ,"SESSION_ID":"359619c2d7f109b"
+      }
+      ,"listFields":{
+      }
+      ,"mapFields":{
+        "myResource1_2":{
+          "CURRENT_STATE":"SLAVE"
+          ,"INFO":""
+        }
+        ,"myResource1_3":{
+          "CURRENT_STATE":"MASTER"
+          ,"INFO":""
+        }
+        ,"myResource1_0":{
+          "CURRENT_STATE":"MASTER"
+          ,"INFO":""
+        }
+        ,"myResource1_1":{
+          "CURRENT_STATE":"SLAVE"
+          ,"INFO":""
+        }
+      }
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/configs"**
+    * Represents the instance configs of the instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return configs for the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/configs 
+    {
+      "id":"host_1234",
+      "configs" : {
+        "HELIX_ENABLED" : "true",
+        "HELIX_HOST" : "host",
+        "HELIX_PORT" : "1234",
+        "HELIX_DISABLED_PARTITION" : [ ]
+      }
+    }
+    ```
+
+    * **PUT** - update the instance configs. PLEASE NOTE THAT THIS PUT FULLY OVERRIDES THE INSTANCE CONFIGS
+
+    ```
+    $curl -X PUT -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/configs -d '
+    {
+      "id":"host_1234",
+      "configs" : {
+        "HELIX_ENABLED" : "true",
+        "HELIX_HOST" : "host",
+        "HELIX_PORT" : "1234",
+        "HELIX_DISABLED_PARTITION" : [ ]
+      }
+    }'
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/errors"**
+    * Lists all the mappings of sessionId to partitions of resources with errors. This is a new endpoint in v2.0.
+    * **GET** - get mapping
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/errors
+    {
+       "id":"host_1234"
+       "errors":{
+            "35sfgewngwese":{
+                "resource1":["p1","p2","p5"],
+                "resource2":["p2","p7"]
+             }
+        }
+    }
+    ```
+
+    * **DELETE** - clean up all error information from Helix.
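+      Example: curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/errors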
+
+* **"/clusters/{clusterName}/instances/{instanceName}/errors/{sessionId}/{resourceName}/{partitionName}"**
+    * Represents error information for the partition {partitionName} of the resource {resourceName} under session {sessionId} on the instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get all error information.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/errors/35sfgewngwese/resource1/p1
+    {
+      "id":"35sfgewngwese_resource1"
+      ,"simpleFields":{
+      }
+      ,"listFields":{
+      }
+      ,"mapFields":{
+        "HELIX_ERROR     20170521-070822.000561 STATE_TRANSITION b819a34d-41b5-4b42-b497-1577501eeecb":{
+          "AdditionalInfo":"Exception while executing a state transition task ..."
+          ,"MSG_ID":"4af79e51-5f83-4892-a271-cfadacb0906f"
+          ,"Message state":"READ"
+        }
+      }
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/history"**
+    * Represents the session change history of the instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - get the instance change history.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/history
+    {
+      "id": "host_1234",
+      "LAST_OFFLINE_TIME": "183948792",
+      "HISTORY": [
+        "{DATE=2017-03-02T19:25:18:915, SESSION=459014c82ef3f5b, TIME=1488482718915}",
+        "{DATE=2017-03-10T22:24:53:246, SESSION=15982390e5d5c91, TIME=1489184693246}",
+        "{DATE=2017-03-11T02:03:52:776, SESSION=15982390e5d5d85, TIME=1489197832776}",
+        "{DATE=2017-03-13T18:15:00:778, SESSION=15982390e5d678d, TIME=1489428900778}",
+        "{DATE=2017-03-21T02:47:57:281, SESSION=459014c82effa82, TIME=1490064477281}",
+        "{DATE=2017-03-27T14:51:06:802, SESSION=459014c82f01a07, TIME=1490626266802}",
+        "{DATE=2017-03-30T00:05:08:321, SESSION=5590151804e2c78, TIME=1490832308321}",
+        "{DATE=2017-03-30T01:17:34:339, SESSION=2591d53b0421864, TIME=1490836654339}",
+        "{DATE=2017-03-30T17:31:09:880, SESSION=2591d53b0421b2a, TIME=1490895069880}",
+        "{DATE=2017-03-30T18:05:38:220, SESSION=359619c2d7f109b, TIME=1490897138220}"
+      ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/messages"**
+    * Represents all uncompleted messages currently received by the instance. This is a new endpoint in v2.0.
+    * **GET** - list all uncompleted messages received by the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/messages
+    {
+      "id": "host_1234",
+      "new_messages": ["0b8df4f2-776c-4325-96e7-8fad07bd9048", "13a8c0af-b77e-4f5c-81a9-24fedb62cf58"],
+      "read_messages": ["19887b07-e9b8-4fa6-8369-64146226c454"]
+      "total_message_count" : 100,
+      "read_message_count" : 50
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/messages/{messageId}**
+    * Represents the messages currently received by by the instance with message given message id. This is new endpoint in v2.0.
+    * **GET** - get the message content with {messageId} received by the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/localhost_1234/messages/0b8df4f2-776c-4325-96e7-8fad07bd9048
+    {
+      "id": "0b8df4f2-776c-4325-96e7-8fad07bd9048",
+      "CREATE_TIMESTAMP":"1489997469400",
+      "ClusterEventName":"messageChange",
+      "FROM_STATE":"OFFLINE",
+      "MSG_ID":"0b8df4f2-776c-4325-96e7-8fad07bd9048",
+      "MSG_STATE":"new",
+      "MSG_TYPE":"STATE_TRANSITION",
+      "PARTITION_NAME":"Resource1_243",
+      "RESOURCE_NAME":"Resource1",
+      "SRC_NAME":"controller_1234",
+      "SRC_SESSION_ID":"15982390e5d5a76",
+      "STATE_MODEL_DEF":"LeaderStandby",
+      "STATE_MODEL_FACTORY_NAME":"myFactory",
+      "TGT_NAME":"host_1234",
+      "TGT_SESSION_ID":"459014c82efed9b",
+      "TO_STATE":"DROPPED"
+    }
+    ```
+
+    * **DELETE** - delete the message with {messageId}. Example: $curl -X DELETE http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/messages/0b8df4f2-776c-4325-96e7-8fad07bd9048
+
+* **"/clusters/{clusterName}/instances/{instanceName}/healthreports"**
+    * Represents all health reports on the instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return the names of the health reports collected from the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/healthreports
+    {
+      "id" : "host_1234",
+      "healthreports" [ "report1", "report2", "report3" ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/instances/{instanceName}/healthreports/{reportName}"**
+    * Represents the health report {reportName} on the instance {instanceName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return the content of the health report collected from the instance.
+
+    ```
+    $curl http://localhost:1234/admin/v2/clusters/myCluster/instances/host_1234/healthreports/ClusterStateStats
+    {
+      "id":"ClusterStateStats"
+      ,"simpleFields":{
+        "CREATE_TIMESTAMP":"1466753504476"
+        ,"TimeStamp":"1466753504476"
+      }
+      ,"listFields":{
+      }
+      ,"mapFields":{
+        "UserDefinedData":{
+          "Data1":"0"
+          ,"Data2":"0.0"
+        }
+      }
+    }
+    ```
+
+
+#### Helix Workflow and its sub-resources
+
+* **"/clusters/{clusterName}/workflows"**
+    * Represents all workflows in cluster {clusterName}
+    * **GET** - list all workflows in this cluster. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows
+
+    ```
+    {
+      "Workflows" : [ "Workflow1", "Workflow2" ]
+    }
+    ```
+
+* **"/clusters/{clusterName}/workflows/{workflowName}"**
+    * Represents workflow with name {workflowName} in cluster {clusterName}
+    * **GET** - return workflow information. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1
+
+    ```
+    {
+       "id" : "Workflow1",
+       "WorkflowConfig" : {
+           "Expiry" : "43200000",
+           "FailureThreshold" : "0",
+           "IsJobQueue" : "true",
+           "LAST_PURGE_TIME" : "1490820801831",
+           "LAST_SCHEDULED_WORKFLOW" : "Workflow1_20170329T000000",
+           "ParallelJobs" : "1",
+           "RecurrenceInterval" : "1",
+           "RecurrenceUnit" : "DAYS",
+           "START_TIME" : "1482176880535",
+           "STATE" : "STOPPED",
+           "StartTime" : "12-19-2016 00:00:00",
+           "TargetState" : "START",
+           "Terminable" : "false",
+           "capacity" : "500"
+        },
+       "WorkflowContext" : {
+           "JOB_STATES": {
+             "Job1": "COMPLETED",
+             "Job2": "COMPLETED"
+           },
+           "StartTime": {
+             "Job1": "1490741582339",
+             "Job2": "1490741580204"
+           },
+           "FINISH_TIME": "1490741659135",
+           "START_TIME": "1490741580196",
+           "STATE": "COMPLETED"
+       },
+       "Jobs" : ["Job1","Job2","Job3"],
+       "ParentJobs" : {
+            "Job1":["Job2", "Job3"],
+            "Job2":["Job3"]
+       }
+    }
+    ```
+
+    * **PUT** - create a workflow with {workflowName}. Example : curl -X PUT -H "Content-Type: application/json" -d [WorkflowExample.json](./WorkflowExample.json) http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1
+    * **DELETE** - delete the workflow. Example : curl -X DELETE http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1
+    * **start** - start the workflow. Example : curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1?command=start
+    * **stop** - pause the workflow. Example : curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1?command=stop
+    * **resume** - resume the workflow. Example : curl -X POST -H "Content-Type: application/json" http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1?command=resume
+    * **cleanup** - clean up all expired jobs in the workflow; this operation is only allowed if the workflow is a JobQueue. Example : curl -X POST -H "Content-Type: application/json"  http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1?command=clean
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/configs"**
+    * Represents the workflow config of {workflowName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return workflow configs. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/configs
+
+    ```
+    {
+        "id": "Workflow1",
+        "Expiry" : "43200000",
+        "FailureThreshold" : "0",
+        "IsJobQueue" : "true",
+        "START_TIME" : "1482176880535",
+        "StartTime" : "12-19-2016 00:00:00",
+        "TargetState" : "START",
+        "Terminable" : "false",
+        "capacity" : "500"
+    }
+    ```
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/context"**
+    * Represents the runtime information of workflow {workflowName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return workflow runtime information. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/context
+
+    ```
+    {
+        "id": "WorkflowContext",
+        "JOB_STATES": {
+             "Job1": "COMPLETED",
+             "Job2": "COMPLETED"
+         },
+         "StartTime": {
+             "Job1": "1490741582339",
+             "Job2": "1490741580204"
+         },
+         "FINISH_TIME": "1490741659135",
+         "START_TIME": "1490741580196",
+         "STATE": "COMPLETED"
+    }
+    ```
+
+
+#### Helix Job and its sub-resources
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/jobs"**
+    * Represents all jobs in workflow {workflowName} in cluster {clusterName}
+    * **GET** - return all job names in this workflow. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs
+
+    ```
+    {
+        "id":"Jobs"
+        "Jobs":["Job1","Job2","Job3"]
+    }
+    ```
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/jobs/{jobName}"**
+    * Represents job with {jobName} within {workflowName} in cluster {clusterName}
+    * **GET** - return job information. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1
+
+    ```
+    {
+        "id":"Job1"
+        "JobConfig":{
+            "WorkflowID":"Workflow1",
+            "IgnoreDependentJobFailure":"false",
+            "MaxForcedReassignmentsPerTask":"3"
+        },
+        "JobContext":{
+    	"START_TIME":"1491005863291",
+            "FINISH_TIME":"1491005902612",
+            "Tasks":[
+                 {
+                     "id":"0",
+                     "ASSIGNED_PARTICIPANT":"P1",
+                     "FINISH_TIME":"1491005898905"
+                     "INFO":""
+                     "NUM_ATTEMPTS":"1"
+                     "START_TIME":"1491005863307"
+                     "STATE":"COMPLETED"
+                     "TARGET":"DB_0"
+                 },
+                 {
+                     "id":"1",
+                     "ASSIGNED_PARTICIPANT":"P5",
+                     "FINISH_TIME":"1491005895443"
+                     "INFO":""
+                     "NUM_ATTEMPTS":"1"
+                     "START_TIME":"1491005863307"
+                     "STATE":"COMPLETED"
+                     "TARGET":"DB_1"
+                 }
+             ]
+         }
+    }
+    ```
+
+    * **PUT** - insert a job with {jobName} into the workflow; this operation is only allowed if the workflow is a JobQueue.  
+      Example : curl -X PUT -H "Content-Type: application/json" -d [JobExample.json](./JobExample.json) http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1
+    * **DELETE** - delete the job from the workflow; this operation is only allowed if the workflow is a JobQueue.  
+      Example : curl -X DELETE http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/jobs/{jobName}/configs"**
+    * Represents the job config for {jobName} within workflow {workflowName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return job config. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1/configs
+
+    ```
+    {
+      "id":"JobConfig"
+      "WorkflowID":"Workflow1",
+      "IgnoreDependentJobFailure":"false",
+      "MaxForcedReassignmentsPerTask":"3"
+    }
+    ```
+
+* **"/clusters/{clusterName}/workflows/{workflowName}/jobs/{jobName}/context"**
+    * Represents the job runtime information for {jobName} in {workflowName} in cluster {clusterName}. This is a new endpoint in v2.0.
+    * **GET** - return job runtime information. Example : curl http://localhost:1234/admin/v2/clusters/TestCluster/workflows/Workflow1/jobs/Job1/context
+
+    ```
+    {
+       "id":"JobContext":
+       "START_TIME":"1491005863291",
+       "FINISH_TIME":"1491005902612",
+       "Tasks":[
+                 {
+                     "id":"0",
+                     "ASSIGNED_PARTICIPANT":"P1",
+                     "FINISH_TIME":"1491005898905"
+                     "INFO":""
+                     "NUM_ATTEMPTS":"1"
+                     "START_TIME":"1491005863307"
+                     "STATE":"COMPLETED"
+                     "TARGET":"DB_0"
+                 },
+                 {
+                     "id":"1",
+                     "ASSIGNED_PARTICIPANT":"P5",
+                     "FINISH_TIME":"1491005895443"
+                     "INFO":""
+                     "NUM_ATTEMPTS":"1"
+                     "START_TIME":"1491005863307"
+                     "STATE":"COMPLETED"
+                     "TARGET":"DB_1"
+                 }
+       ]
+    }
+    ```

http://git-wip-us.apache.org/repos/asf/helix/blob/8bdfc912/website/0.8.1/src/site/markdown/tutorial_spectator.md
----------------------------------------------------------------------
diff --git a/website/0.8.1/src/site/markdown/tutorial_spectator.md b/website/0.8.1/src/site/markdown/tutorial_spectator.md
new file mode 100644
index 0000000..e43cd6b
--- /dev/null
+++ b/website/0.8.1/src/site/markdown/tutorial_spectator.md
@@ -0,0 +1,75 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Spectator</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Spectator
+
+Next, we\'ll learn how to implement a __spectator__.  Typically, a spectator needs to react to changes within the distributed system.  Examples: a client that needs to know where to send a request, a topic consumer in a consumer group.  The spectator is automatically informed of changes in the _external state_ of the cluster, but it does not have to add any code to keep track of other components in the system.
+
+### Start a Connection
+
+As with a participant, the Helix manager is the common component that connects each system component to the cluster.
+
+It requires the following parameters:
+
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port
+* instanceType: Type of the process. This can be one of the following types; in this case, use SPECTATOR:
+    * CONTROLLER: Process that controls the cluster, any number of controllers can be started but only one will be active at any given time
+    * PARTICIPANT: Process that performs the actual task in the distributed system
+    * SPECTATOR: Process that observes the changes in the cluster
+    * ADMIN: To carry out system admin actions
+* zkConnectString: Connection string to ZooKeeper. This is of the form host1:port1,host2:port2,host3:port3
+
+After the Helix manager instance is created, the only thing that needs to be registered is the listener.  When the ExternalView changes, the listener is notified.
+
+A spectator observes the cluster and is notified when the state of the system changes. Helix consolidates the state of the entire cluster in one znode called ExternalView.
+Helix provides a default implementation, RoutingTableProvider, that caches the cluster state and updates it when there is a change in the cluster.
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.SPECTATOR,
+                                                zkConnectString);
+manager.connect();
+RoutingTableProvider routingTableProvider = new RoutingTableProvider();
+manager.addExternalViewChangeListener(routingTableProvider);
+```
+
+### Spectator Code
+
+In the following code snippet, the application sends the request to a valid instance by interrogating the external view.  Suppose the desired resource for this request is in the partition myDB_1.
+
+```
+// instances = routingTableProvider.getInstances("RESOURCE_NAME", "PARTITION_NAME", "PARTITION_STATE");
+instances = routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
+
+////////////////////////////////////////////////////////////////////////////////////////////////
+// Application-specific code to send a request to one of the instances                        //
+////////////////////////////////////////////////////////////////////////////////////////////////
+
+theInstance = instances.get(0);  // should choose an instance and throw an exception if none are available
+result = theInstance.sendRequest(yourApplicationRequest, responseObject);
+
+```
+
+When the external view changes, the application needs to react by sending requests to a different instance.
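+
+A minimal sketch of that reaction, reusing the RoutingTableProvider and names from the snippet above (the selection policy is an application-specific assumption):
+
+```
+// Query the routing table on every request; RoutingTableProvider answers from
+// its local cache, which Helix refreshes whenever the ExternalView changes.
+List<InstanceConfig> instances =
+    routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
+if (instances.isEmpty()) {
+  // Application-specific handling when no replica is currently ONLINE
+  throw new IllegalStateException("No ONLINE replica for partition myDB_1");
+}
+InstanceConfig chosen = instances.get(new Random().nextInt(instances.size()));
+```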

http://git-wip-us.apache.org/repos/asf/helix/blob/8bdfc912/website/0.8.1/src/site/markdown/tutorial_state.md
----------------------------------------------------------------------
diff --git a/website/0.8.1/src/site/markdown/tutorial_state.md b/website/0.8.1/src/site/markdown/tutorial_state.md
new file mode 100644
index 0000000..856b8b3
--- /dev/null
+++ b/website/0.8.1/src/site/markdown/tutorial_state.md
@@ -0,0 +1,131 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - State Machine Configuration</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): State Machine Configuration
+
+In this chapter, we\'ll learn about the state models provided by Helix, and how to create your own custom state model.
+
+### State Models
+
+Helix comes with 3 default state models that are commonly used.  It is possible to have multiple state models in a cluster.
+Every resource that is added should be configured to use a state model that governs its _ideal state_.
+
+#### MASTER-SLAVE
+
+* 3 states: OFFLINE, SLAVE, MASTER
+* Maximum number of masters: 1
+* Slaves are based on the replication factor. The replication factor can be specified while adding the resource.
+
+
+#### ONLINE-OFFLINE
+
+* Has 2 states: OFFLINE and ONLINE.  This simple state model is a good starting point for most applications.
+
+#### LEADER-STANDBY
+
+* 1 Leader and multiple stand-bys.  The idea is that exactly one leader accomplishes a designated task, and the stand-bys are ready to take over if the leader fails.
+
+### Constraints
+
+In addition to the state machine configuration, one can specify the constraints of states and transitions.
+
+For example, one can say:
+
+* MASTER:1
+<br/>Maximum number of replicas in MASTER state at any time is 1
+
+* OFFLINE-SLAVE:5
+<br/>Maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5 in this example.
+
+#### Dynamic State Constraints
+
+We also support two dynamic upper bounds for the number of replicas in each state:
+
+* N: The number of replicas in the state is at most the number of live participants in the cluster
+* R: The number of replicas in the state is at most the specified replica count for the partition
+
+#### State Priority
+
+Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 MASTER and 2 SLAVES, but only 1 node is active, Helix must promote it to MASTER. This behavior is achieved by providing the state priority list as \[MASTER, SLAVE\].
+
+#### State Transition Priority
+
+Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. You can control this by overriding the priority order.
+
+### Special States
+
+There are a few Helix-defined states that are important to be aware of.
+
+#### DROPPED
+
+The DROPPED state is used to signify a replica that was served by a given participant, but is no longer served. This allows Helix and its participants to effectively clean up. There are two requirements that every new state model should follow with respect to the DROPPED state:
+
+* The DROPPED state must be defined
+* There must be a path to DROPPED for every state in the model
+
+#### ERROR
+
+The ERROR state is used whenever the participant serving a partition encountered an error and cannot continue to serve the partition. HelixAdmin has \"reset\" functionality to allow for participants to recover from the ERROR state.
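+
+For example, a partition stuck in ERROR can be reset through HelixAdmin, as in the following sketch (the cluster, instance, resource, and partition names are hypothetical):
+
+```
+HelixAdmin admin = new ZKHelixAdmin("localhost:2181");
+// Reset the ERROR partition on this participant; Helix transitions it
+// back to the state model's initial state (e.g., OFFLINE)
+admin.resetPartition("myCluster", "host_1234", "myResource",
+    Arrays.asList("myResource_0"));
+```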
+
+### Annotated Example
+
+Below is a complete definition of a Master-Slave state model. Notice the fields marked REQUIRED; these are essential for any state model definition.
+
+```
+StateModelDefinition stateModel = new StateModelDefinition.Builder("MasterSlave")
+  // OFFLINE is the state that the system starts in (initial state is REQUIRED)
+  .initialState("OFFLINE")
+
+  // Lowest number here indicates highest priority, no value indicates lowest priority
+  .addState("MASTER", 1)
+  .addState("SLAVE", 2)
+  .addState("OFFLINE")
+
+  // Note the special inclusion of the DROPPED state (REQUIRED)
+  .addState(HelixDefinedState.DROPPED.toString())
+
+  // No more than one master allowed
+  .upperBound("MASTER", 1)
+
+  // R indicates an upper bound of number of replicas for each partition
+  .dynamicUpperBound("SLAVE", "R")
+
+  // Add some high-priority transitions
+  .addTransition("SLAVE", "MASTER", 1)
+  .addTransition("OFFLINE", "SLAVE", 2)
+
+  // Using the same priority value indicates that these transitions can fire in any order
+  .addTransition("MASTER", "SLAVE", 3)
+  .addTransition("SLAVE", "OFFLINE", 3)
+
+  // Not specifying a value defaults to lowest priority
+  // Notice the inclusion of the OFFLINE to DROPPED transition
+  // Since every state has a path to OFFLINE, they each now have a path to DROPPED (REQUIRED)
+  .addTransition("OFFLINE", HelixDefinedState.DROPPED.toString())
+
+  // Create the StateModelDefinition instance
+  .build();
+
+  // Use the isValid() function to make sure the StateModelDefinition will work without issues
+  Assert.assertTrue(stateModel.isValid());
+```

http://git-wip-us.apache.org/repos/asf/helix/blob/8bdfc912/website/0.8.1/src/site/markdown/tutorial_task_framework.md
----------------------------------------------------------------------
diff --git a/website/0.8.1/src/site/markdown/tutorial_task_framework.md b/website/0.8.1/src/site/markdown/tutorial_task_framework.md
new file mode 100644
index 0000000..9659ada
--- /dev/null
+++ b/website/0.8.1/src/site/markdown/tutorial_task_framework.md
@@ -0,0 +1,382 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Task Framework</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Task Framework
+
+The task framework in Helix provides executable task scheduling and workflow management. Helix offers three layers of task abstraction for defining dependency logic. The graph below shows the relationships between the three layers: a workflow can contain multiple jobs, one job can depend on another, and multiple tasks, whether the same task on different partitions or different tasks on different partitions, can be added to one job.
+The task framework not only abstracts the three layers of task logic but also handles task assignment and rebalancing. A user first creates a workflow (or a job queue), then adds jobs to it. Those jobs contain the executable tasks implemented by the user. Once the workflow is set up, Helix schedules the work based on the conditions the user provided.
+
+![Task Framework flow chart](./images/TaskFrameworkLayers.png)
+
+### Key Concepts
+* A Task is the basic unit in the Helix task framework. It represents a single piece of runnable logic that the user wants to execute for each partition (the distributed unit).
+* A Job defines a one-time operation across all the partitions. It contains multiple Tasks and their configuration, such as the number of tasks, the timeout per task, and so on.
+* A Workflow is a directed acyclic graph that represents the relationships and running order of Jobs. In addition, a workflow can provide customized configuration, for example, job dependencies.
+* A JobQueue is another type of Workflow. Unlike a normal workflow, a JobQueue does not terminate until the user kills it, and it keeps accepting newly added jobs.
+
+### Implement Your Task
+
+#### [Task Interface](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/Task.java)
+
+The Task interface contains two methods: run and cancel. Users implement their own logic in the run method, and cancel/rollback logic in the cancel method.
+
+```
+public class MyTask implements Task {
+  @Override
+  public TaskResult run() {
+    // Task logic goes here; report the final status when done
+    return new TaskResult(TaskResult.Status.COMPLETED, "");
+  }
+
+  @Override
+  public void cancel() {
+    // Cancel / rollback logic goes here
+  }
+}
+```
+
+#### [TaskConfig](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/TaskConfig.java)
+
+In Helix, a config object usually represents the abstraction of the corresponding object, e.g., TaskConfig, JobConfig, and WorkflowConfig. TaskConfig contains the configurable task conditions. TaskConfig does not require any input to create a new object:
+
+```
+TaskConfig taskConfig = new TaskConfig(null, null, null, null);
+```
+
+For these four fields:
+* Command: the task command; the job command is used if this is null
+* ID: the task's unique ID; a new ID is generated if this is null
+* TaskTargetPartition: the target partition of a target. Can be null
+* ConfigMap: a key-value map of task properties containing the other properties stated above, such as command and ID.
+
+#### Share Content Across Tasks and Jobs
+
+The task framework also lets users store key-value data per task, job, and workflow. Content stored at the workflow layer can be shared by the different jobs belonging to that workflow. Similarly, content persisted at the job layer can be shared by the different tasks nested in that job. Currently, users can extend the abstract class [UserContentStore](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/UserContentStore.java) and use its two methods, putUserContent and getUserContent. These are similar to a hash map's put and get methods, except that they take a Scope, which defines the layer at which the key-value pair is persisted.
+
+```
+public class MyTask extends UserContentStore implements Task {
+  @Override
+  TaskResult run() {
+    putUserContent("KEY", "WORKFLOWVALUE", SCOPE.WORKFLOW);
+    putUserContent("KEY", "JOBVALUE", SCOPE.JOB);
+    putUserContent("KEY", "TASKVALUE", SCOPE.TASK);
+    String taskValue = getUserContent("KEY", SCOPE.TASK);
+  }
+ ...
+}
+```
+
+#### Return [Task Results](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/TaskResult.java)
+
+Users can define a TaskResult for a task once it reaches its final stage (completed or failed). The TaskResult contains two fields: status and info. Status is the final task status, one of COMPLETED, CANCELLED, FAILED, and FATAL_FAILED. The difference between FAILED and FATAL_FAILED is that once a task is marked FATAL_FAILED, Helix will not retry it and aborts it. The other field is info, which is a String; users can pass any information in it, including an error message, a description, and so on.
+
+```
+TaskResult run() {
+    ....
+    return new TaskResult(TaskResult.Status.FAILED, "ERROR MESSAGE OR OTHER INFORMATION");
+}
+```
+
+#### Task Retry and Abort
+
+Helix provides retry logic. Users can specify how many task failures are tolerated under a job; the method for this is introduced in the Job section below. Alternatively, if a task is critical and should not be retried after a failure, the task can return the TaskResult described above with FATAL_FAILED status, and Helix will not retry that task.
+
+```
+return new TaskResult(TaskResult.Status.FATAL_FAILED, "DO NOT WANT TO RETRY, ERROR MESSAGE");
+```
+
+#### [TaskDriver](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/TaskDriver.java)
+
+All control operations for workflows and jobs are based on the TaskDriver object. TaskDriver offers several APIs to control, modify, and track tasks; those APIs are introduced in the sections where they are needed. A TaskDriver object can be created either with a [HelixManager](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/HelixManager.java) or with a [ZkClient](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/manager/zk/ZkClient.java) and the cluster name:
+
+```
+HelixManager manager = new ZKHelixManager(CLUSTER_NAME, INSTANCE_NAME, InstanceType.PARTICIPANT, ZK_ADDRESS);
+TaskDriver taskDriver1 = new TaskDriver(manager);
+ 
+TaskDriver taskDriver2 = new TaskDriver(zkclient, CLUSTER_NAME);
+```
+
+#### Propagate Task Error Message to Helix
+
+When a task encounters an error, the error can be returned via TaskResult. Unfortunately, users cannot get this TaskResult object directly, but Helix persists the error messages, so users can fetch them from Helix via the TaskDriver introduced above. The error messages are stored in the Info field per job, so users need to get the JobContext, which is the job's status-and-result object.
+
+```
+taskDriver.getJobContext("JOBNAME").getInfo();
+```
+
+### Creating a Workflow
+
+#### One-time Workflow
+
+In the common case, a one-time workflow is the default workflow that a user creates. The first step is to create a WorkflowConfig.Builder object with the workflow name. All configs can then be set in the WorkflowConfig.Builder. Once the configuration is done, the [WorkflowConfig](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/WorkflowConfig.java) object can be obtained from the WorkflowConfig.Builder object.
+There are two rules for validating the workflow configuration:
+* The expiry time should not be less than 0.
+* The schedule config should be valid: either one-time, or with a positive interval magnitude (for a recurrent workflow).
+Example:
+
+```
+Workflow.Builder myWorkflowBuilder = new Workflow.Builder("MyWorkflow");
+myWorkflowBuilder.setExpiry(5000L);
+Workflow myWorkflow = myWorkflowBuilder.build();
+```
+
+#### Recurrent Workflow
+
+A recurrent workflow is a workflow that is scheduled periodically. The only config that differs from a one-time workflow is setting a recurrent [ScheduleConfig](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/ScheduleConfig.java). Two methods in ScheduleConfig help you create a ScheduleConfig object: recurringFromNow and recurringFromDate. Both of them need a recurUnit (the time unit of the recurrence) and a recurInterval (the magnitude of the recurrence interval). Here's an example:
+
+```
+ScheduleConfig myConfig1 = ScheduleConfig.recurringFromNow(TimeUnit.MINUTES, 5L);
+ScheduleConfig myConfig2 = ScheduleConfig.recurringFromDate(Calendar.getInstance().getTime(), TimeUnit.HOURS, 10L);
+```
+
+Once the schedule config is created, it can be set in the workflow config:
+
+```
+Workflow.Builder myWorkflowBuilder = new Workflow.Builder("MyWorkflow");
+myWorkflowBuilder.setExpiry(2000L)
+                 .setScheduleConfig(ScheduleConfig.recurringFromNow(TimeUnit.DAYS, 5));
+Workflow myWorkflow = myWorkflowBuilder.build();
+```
+
+#### Start a Workflow
+
+Starting a workflow just requires the TaskDriver. Since start is an asynchronous call, the user can continue with other actions after starting the workflow:
+
+```
+taskDriver.start(myWorkflow);
+```
+
+#### Stop a Workflow
+
+Stopping a workflow is also done via TaskDriver:
+
+```
+taskDriver.stop(myWorkflow);
+```
+
+#### Resume a Workflow
+
+A stopped workflow is not gone; the user can resume a workflow that has been stopped, again using TaskDriver:
+
+```
+taskDriver.resume(myWorkflow);
+```
+
+#### Delete a Workflow
+
+Similar to start, stop, and resume, the delete operation is supported by TaskDriver:
+
+```
+taskDriver.delete(myWorkflow);
+```
+
+#### Add a Job
+
+WARNING: Jobs can only be added to the Workflow.Builder. Once the workflow is built, no more jobs can be added! For creating a job, please refer to the following section (Create a Job).
+
+```
+myWorkflowBuilder.addJob("JobName", jobConfigBuilder);
+```
+
+#### Add a Job dependency
+
+Jobs can have dependencies. If job2 depends on job1, job2 will not be scheduled until job1 finishes:
+
+```
+myWorkflowBuilder.addParentChildDependency("ParentJobName", "ChildJobName");
+```
+
+#### Schedule a workflow for executing in a future time
+
+An application can create a workflow with a ScheduleConfig to schedule its execution at a future time:
+
+```
+long inFiveSeconds = System.currentTimeMillis() + 5000L; // e.g. five seconds from now
+myWorkflowBuilder.setScheduleConfig(ScheduleConfig.oneTimeDelayedStart(new Date(inFiveSeconds)));
+```
+
+#### Additional Workflow Options
+
+| Additional Config Options | Detail |
+| ------------------------- | ------ |
+| _setJobDag(JobDag v)_ | If the user has already defined the job DAG, it can be set with this method. |
+| _setExpiry(long v, TimeUnit unit)_ | Set the expiration time for this workflow. |
+| _setFailureThreshold(int failureThreshold)_ | Set the failure threshold for this workflow; once the number of failed jobs reaches it, the workflow fails. |
+| _setWorkflowType(String workflowType)_ | Set the user-defined workflowType for this workflow. |
+| _setTerminable(boolean isTerminable)_ | Set whether this workflow is terminable or not. |
+| _setCapacity(int capacity)_ | Set the number of jobs the workflow can hold before rejecting further jobs. Only used when the workflow is not terminable. |
+| _setTargetState(TargetState v)_ | Set the final state of this workflow. |
+
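+As a quick illustration, several of these options can be chained on the builder (a sketch, assuming these setters are available on WorkflowConfig.Builder as listed above; the values are arbitrary):
+
+```
+WorkflowConfig.Builder myWorkflowCfgBuilder = new WorkflowConfig.Builder()
+    .setWorkflowType("MyType")
+    .setFailureThreshold(2)
+    .setCapacity(100);
+```
+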
+### Creating a Queue
+
+A [job queue](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/JobQueue.java) is another form of workflow. The differences between a job queue and a workflow are listed below:
+
+| Property | Workflow | Job Queue |
+| -------- | -------- | --------- |
+| Lifetime | A workflow is deleted after it is done. | A job queue stays until the user deletes it. |
+| Add jobs | Once a workflow is built, no more jobs can be added. | A job queue can keep accepting jobs. |
+| Parallel run | Jobs without dependencies can run in parallel | No parallel run allowed unless _ParallelJobs_ is set |
+
+To create a job queue, the user has to provide a queue name and a workflow config (please refer to Creating a Workflow above). As with other task objects, create a JobQueue.Builder first; the JobQueue is then validated and generated via the build function:
+
+```
+WorkflowConfig.Builder myWorkflowCfgBuilder = new WorkflowConfig.Builder().setWorkflowType("MyType");
+JobQueue jobQueue = new JobQueue.Builder("MyQueueName").setWorkflowConfig(myWorkflowCfgBuilder.build()).build();
+```
+
+#### Append Job to Queue
+
+WARNING: Unlike a normal workflow, jobs can be appended to a JobQueue at any time. Similar to adding a job to a workflow, a job can be appended via the enqueueJob function (shown here on the JobQueue.Builder; see the TaskDriver variant below):
+
+```
+jobQueueBuilder.enqueueJob("JobName", jobConfigBuilder);
+```
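+
+For a queue that has already been started, a job can likewise be appended at runtime through TaskDriver (a sketch, assuming a driver connected to the cluster; the names are placeholders):
+
+```
+taskDriver.enqueueJob("MyQueueName", "JobName", jobConfigBuilder);
+```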
+
+#### Delete Job from Queue
+
+Helix allows the user to delete a job from an existing queue via the delete API in TaskDriver. The queue has to be stopped before deleting a job from it; once the deletion succeeds, the user can resume the queue:
+
+```
+taskDriver.stop("QueueName");
+taskDriver.deleteJob("QueueName", "JobName");
+taskDriver.resume("QueueName");
+```
+
+#### Additional Option for JobQueue
+
+_setParallelJobs(int parallelJobs)_ : Set how many jobs can run in parallel, provided there are no dependencies between them.
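+
+For example, to let up to four independent jobs run at the same time (the value is illustrative):
+
+```
+myWorkflowCfgBuilder.setParallelJobs(4);
+```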
+
+### Create a Job
+
+To generate a [JobConfig](https://github.com/apache/helix/blob/helix-0.6.x/helix-core/src/main/java/org/apache/helix/task/JobConfig.java) object, the user again has to use JobConfig.Builder to build the JobConfig:
+
+```
+JobConfig.Builder myJobCfgBuilder = new JobConfig.Builder();
+JobConfig myJobCfg = myJobCfgBuilder.build();
+```
+
+Helix applies a couple of rules to validate a job (a minimal configuration that satisfies them is sketched after this list):
+* Each job must have at least one task to execute. For adding tasks and the task rules, please refer to the Add Tasks section below.
+* The task timeout must not be less than zero.
+* The number of concurrent tasks per instance must not be less than one.
+* The maximum attempts per task must not be less than one.
+* There must be a workflow name.
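+
+A minimal sketch of a configuration that passes these checks (all names and values are illustrative; the setters are those listed in the Additional Job Options table below):
+
+```
+JobConfig.Builder myJobCfgBuilder = new JobConfig.Builder()
+    .setWorkflow("MyWorkflow")             // a workflow name is required
+    .setCommand("MyCommand")               // together with setNumberOfTasks, defines the tasks
+    .setNumberOfTasks(5)
+    .setTimeoutPerTask(10000L)             // must not be negative
+    .setNumConcurrentTasksPerInstance(2)   // must be at least one
+    .setMaxAttemptsPerTask(3);             // must be at least one
+JobConfig myJobCfg = myJobCfgBuilder.build();
+```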
+
+#### Add Tasks
+
+There are two ways of adding tasks:
+* Add by TaskConfig. Tasks can be added by providing TaskConfigs: either a List of TaskConfigs, or a TaskConfigMap, which maps task ids to TaskConfigs.
+
+```
+TaskConfig taskCfg = new TaskConfig(null, null, null, null);
+List<TaskConfig> taskCfgs = new ArrayList<TaskConfig>();
+taskCfgs.add(taskCfg);
+myJobCfgBuilder.addTaskConfigs(taskCfgs);
+
+Map<String, TaskConfig> taskCfgMap = new HashMap<String, TaskConfig>();
+taskCfgMap.put(taskCfg.getId(), taskCfg);
+myJobCfgBuilder.addTaskConfigMap(taskCfgMap);
+```
+
+* Add by job command. If the user does not want to specify each TaskConfig, identical tasks can be created from a job command plus the number of tasks:
+
+```
+myJobCfgBuilder.setCommand("JobCommand").setNumberOfTasks(10);
+```
+WARNING: The user must provide either TaskConfigs / a TaskConfigMap, or both a job command and a number of tasks (except for a targeted job; refer to the following section). Otherwise, validation will fail.
+
+#### Generic Job
+
+A generic job is the default kind of job. It has no target resource, so the job can be assigned to any of the eligible instances.
+
+#### Targeted Job
+
+A targeted job has a target resource set. For this kind of job, the job command is required, but the number of tasks is not: the number of tasks is derived from the number of partitions of the targeted resource. To set the target resource, just pass the target resource name to JobConfig.Builder:
+
+```
+myJobCfgBuilder.setTargetResource("TargetResourceName");
+```
+
+In addition, the user can specify the target partition states. For example, to run the task only where the partition is in the "Master" state, the setTargetPartitionStates method restricts the assignment to the given states:
+
+```
+myJobCfgBuilder.setTargetPartitionStates(new HashSet<String>(Arrays.asList("Master", "Slave")));
+```
+
+#### Instance Group
+
+Grouping jobs onto a targeted group of instances is supported. The user first has to define an instance group tag for the instances, i.e. label some instances with a specific tag. A job can then be given that tag so that it is only assigned to those instances. For example, if customer data is only available on instances 1, 2 and 3, these three instances can be tagged "CUSTOMER", and customer-related jobs can set the instance group tag "CUSTOMER"; such jobs will then only be assigned to instances 1, 2 and 3.
+To add the instance group tag, just set it in JobConfig.Builder:
+
+```
+jobCfg.setInstanceGroupTag("INSTANCEGROUPTAG");
+```
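+
+Tagging the instances themselves is done separately, for example through HelixAdmin (a sketch; the cluster and instance names are placeholders):
+
+```
+helixAdmin.addInstanceTag("MyCluster", "instance1", "CUSTOMER");
+helixAdmin.addInstanceTag("MyCluster", "instance2", "CUSTOMER");
+helixAdmin.addInstanceTag("MyCluster", "instance3", "CUSTOMER");
+```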
+
+#### Delayed scheduling job
+
+Set up a schedule plan for the job using an execution delay and/or an absolute start time.
+If both items are set, Helix will calculate both candidate start times and use the later one:
+
+```
+myJobCfgBuilder.setExecutionDelay(delayMs);
+myJobCfgBuilder.setExecutionStart(startTimeMs);
+```
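+
+As a worked example of the "later one wins" rule (the timestamps are illustrative, and the delay is assumed to count from the time the job becomes runnable):
+
+```
+// Suppose the job becomes runnable at t = 1,000,000 ms (epoch millis).
+myJobCfgBuilder.setExecutionDelay(30000L);   // candidate start: 1,000,000 + 30,000 = 1,030,000 ms
+myJobCfgBuilder.setExecutionStart(1020000L); // candidate start: 1,020,000 ms
+// Both are set, so Helix uses the later candidate: 1,030,000 ms.
+```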
+
+Note that the scheduled job needs to be runnable first; only then will Helix start checking its configuration for scheduling.
+If any parent job is not finished, the job won't be scheduled even if the scheduled timestamp has already passed.
+
+#### Additional Job Options
+
+| Operation | Detail |
+| --------- | ------ |
+| _setWorkflow(String workflowName)_ | Set the workflow that this job belongs to |
+| _setTargetPartitions(List\<String\> targetPartitionNames)_ | Set the list of target partition names |
+| _setTargetPartitionStates(Set\<String\>)_ | Set the target partition states |
+| _setCommand(String command)_ | Set the job command |
+| _setJobCommandConfigMap(Map\<String, String\> v)_ | Set the job command config map |
+| _setTimeoutPerTask(long v)_ | Set the timeout for each task |
+| _setNumConcurrentTasksPerInstance(int v)_ | Set the number of tasks that can run concurrently on the same instance |
+| _setMaxAttemptsPerTask(int v)_ | Set the maximum number of attempts for a task |
+| _setFailureThreshold(int v)_ | Set the task failure tolerance for this job |
+| _setTaskRetryDelay(long v)_ | Set the delay before a task is retried |
+| _setIgnoreDependentJobFailure(boolean ignoreDependentJobFailure)_ | Set whether to ignore a failure of this job's parent job |
+| _setJobType(String jobType)_ | Set the job type of this job |
+| _setExecutionDelay(long delay)_ | Set the delay before scheduling job execution |
+| _setExecutionStart(long start)_ | Set the start time for scheduling job execution |
+
+### Monitor the status of your job
+In addition to the TaskDriver utilities introduced in the Workflow section, Helix provides a way to synchronously wait until a job or workflow reaches certain states: the pollForJobState and pollForWorkflowState APIs. pollForJobState accepts the following arguments:
+* Workflow name, required
+* Job name, required
+* Timeout, optional; defaults to three minutes when the variant without a timeout argument is used. The time unit is milliseconds.
+* TaskStates, at least one state. Multiple TaskStates may be passed; the function returns as soon as one of them is reached.
+
+For example:
+
+```
+taskDriver.pollForJobState("MyWorkflowName", "MyJobName", 180000L, TaskState.FAILED, TaskState.FATAL_FAILED);
+taskDriver.pollForJobState("MyWorkflowName", "MyJobName", TaskState.COMPLETED);
+```
+
+pollForWorkflowState accepts similar arguments, except for the job name. For example:
+
+```
+taskDriver.pollForWorkflowState("MyWorkflowName", 180000L, TaskState.FAILED, TaskState.FATAL_FAILED);
+taskDriver.pollForWorkflowState("MyWorkflowName", TaskState.COMPLETED);
+```

http://git-wip-us.apache.org/repos/asf/helix/blob/8bdfc912/website/0.8.1/src/site/markdown/tutorial_task_throttling.md
----------------------------------------------------------------------
diff --git a/website/0.8.1/src/site/markdown/tutorial_task_throttling.md b/website/0.8.1/src/site/markdown/tutorial_task_throttling.md
new file mode 100644
index 0000000..e9029d9
--- /dev/null
+++ b/website/0.8.1/src/site/markdown/tutorial_task_throttling.md
@@ -0,0 +1,41 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Task Throttling</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Task Throttling
+
+In this chapter, we'll learn how to control the parallel execution of tasks in the task framework.
+
+### Task Throttling Configuration
+
+Helix can control the number of tasks that are executed in parallel according to multiple thresholds.
+Applications can set these thresholds in the following configuration items:
+
+* JobConfig.ConcurrentTasksPerInstance: the number of concurrent tasks of this job that are allowed to run on an instance.
+* InstanceConfig.MAX_CONCURRENT_TASK: the total number of concurrent tasks that are allowed to run on an instance.
+
+Also see [WorkflowConfig.ParallelJobs](./tutorial_task_framework.html).
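+
+A minimal sketch of setting both thresholds (jobConfigBuilder and instanceConfig are assumed to be existing JobConfig.Builder and InstanceConfig objects; the values are illustrative):
+
+```
+// Per job: at most 2 tasks of this job may run concurrently on one instance.
+jobConfigBuilder.setNumConcurrentTasksPerInstance(2);
+
+// Per instance: at most 40 tasks in total may run concurrently on this instance.
+instanceConfig.setMaxConcurrentTask(40);
+```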
+
+### Job Priority for Task Throttling
+
+Whenever there are too many tasks to be scheduled according to the threshold, Helix will prioritize the older jobs.
+The age of a job is calculated based on the job start time.

http://git-wip-us.apache.org/repos/asf/helix/blob/8bdfc912/website/0.8.1/src/site/markdown/tutorial_throttling.md
----------------------------------------------------------------------
diff --git a/website/0.8.1/src/site/markdown/tutorial_throttling.md b/website/0.8.1/src/site/markdown/tutorial_throttling.md
new file mode 100644
index 0000000..16a6f81
--- /dev/null
+++ b/website/0.8.1/src/site/markdown/tutorial_throttling.md
@@ -0,0 +1,39 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Throttling</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Throttling
+
+In this chapter, we'll learn how to control the parallel execution of cluster tasks.  Only a centralized cluster manager with global knowledge (i.e. Helix) is capable of coordinating this decision.
+
+### Throttling
+
+Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some of the transitions may be lightweight, but some might involve moving data, which is quite expensive from a network and IOPS perspective.
+
+Helix allows applications to set a threshold on transitions. The threshold can be set at multiple scopes:
+
+* MessageType, e.g. STATE_TRANSITION
+* TransitionType, e.g. SLAVE-MASTER
+* Resource, e.g. database
+* Node, i.e. per-node maximum transitions in parallel
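+
+As a sketch of how such a constraint can be registered through HelixAdmin and ConstraintItemBuilder (the constraint id, attribute values, and cluster name are illustrative):
+
+```
+ConstraintItemBuilder builder = new ConstraintItemBuilder();
+builder.addConstraintAttribute("MESSAGE_TYPE", "STATE_TRANSITION")
+       .addConstraintAttribute("INSTANCE", ".*")
+       .addConstraintAttribute("CONSTRAINT_VALUE", "10");
+// Allow at most 10 state transitions in parallel on any single instance
+helixAdmin.setConstraint("MyCluster", ClusterConstraints.ConstraintType.MESSAGE_CONSTRAINT,
+    "MaxTransitionsPerInstance", builder.build());
+```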
+
+

http://git-wip-us.apache.org/repos/asf/helix/blob/8bdfc912/website/0.8.1/src/site/markdown/tutorial_ui.md
----------------------------------------------------------------------
diff --git a/website/0.8.1/src/site/markdown/tutorial_ui.md b/website/0.8.1/src/site/markdown/tutorial_ui.md
new file mode 100644
index 0000000..ba63a8f
--- /dev/null
+++ b/website/0.8.1/src/site/markdown/tutorial_ui.md
@@ -0,0 +1,118 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Helix UI Setup</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): Helix UI Setup
+
+Helix now provides a modern web user interface for users to manage Helix clusters in a more convenient way (aka Helix UI). Currently the following features are supported via Helix UI:
+
+* View all Helix clusters exposed by Helix REST service
+* View detailed cluster information
+* View resources / instances in a Helix cluster
+* View partition placement and health status in a resource
+* Create new Helix clusters
+* Enable / Disable a cluster / resource / instance
+* Add an instance into a Helix cluster
+
+### Prerequisites
+
+Since Helix UI talks to the Helix REST service to manage Helix clusters, a properly deployed Helix REST service is required. Please refer to this tutorial to set up a functional Helix REST service: [Helix REST Service 2.0](./tutorial_rest_service.html).
+
+### Installation
+
+To get and run Helix UI locally, simply use the following command lines:
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/helix.git
+cd helix/helix-front
+git checkout tags/helix-0.8.1
+../build
+cd target/helix-front-pkg/bin
+chmod +x *.sh
+```
+
+### Configuration
+
+Helix UI does not need any configuration if you have started the Helix REST service without specifying a port (the Helix REST service will then serve through http://localhost:8100/admin/v2). If you have specified a custom port, or you need to wire in additional REST services, please navigate to `../dist/server/config.js` and edit the following section accordingly:
+
+```
+...
+exports.HELIX_ENDPOINTS = {
+  <service nickname>: [
+    {
+      <nickname of REST endpoint>: '<REST endpoint url>'
+    }
+  ]
+};
+...
+```
+
+For example, suppose you have multiple Helix REST services deployed (all listening on port 12345) and want to divide them into two services, each containing two groups (e.g. staging and production), with two fabrics per group. You may configure the above section like this:
+
+```
+...
+exports.HELIX_ENDPOINTS = {
+  service1: [
+    {
+        staging1: 'http://staging1.service1.com:12345/admin/v2',
+        staging2: 'http://staging2.service1.com:12345/admin/v2'
+    },
+    {
+        production1: 'http://production1.service1.com:12345/admin/v2',
+        production2: 'http://production2.service1.com:12345/admin/v2'
+    }
+  ],
+  service2: [
+    {
+        staging1: 'http://staging1.service2.com:12345/admin/v2',
+        staging2: 'http://staging2.service2.com:12345/admin/v2'
+    },
+    {
+        production1: 'http://production1.service2.com:12345/admin/v2',
+        production2: 'http://production2.service2.com:12345/admin/v2'
+    }
+  ]
+};
+...
+
+```
+
+
+### Launch Helix UI
+
+```
+./start-helix-ui.sh
+```
+
+Helix UI listens on port `3000` by default. Just use any browser to navigate to http://localhost:3000 to get started.
+
+### Introduction
+
+The primary UI will look like this:
+
+![UI Screenshot](./images/UIScreenshot.png)
+
+The left side is the cluster list; the right side is the detailed cluster view when you click a cluster on the left. There you will find the resource list, workflow list, and instance list of the cluster, as well as the cluster configurations.
+
+When navigating into a single resource, Helix UI shows the partition placement, comparing the idealState with the externalView, like this:
+
+![UI Screenshot](./images/UIScreenshot2.png)

http://git-wip-us.apache.org/repos/asf/helix/blob/8bdfc912/website/0.8.1/src/site/markdown/tutorial_user_content_store.md
----------------------------------------------------------------------
diff --git a/website/0.8.1/src/site/markdown/tutorial_user_content_store.md b/website/0.8.1/src/site/markdown/tutorial_user_content_store.md
new file mode 100644
index 0000000..81c502b
--- /dev/null
+++ b/website/0.8.1/src/site/markdown/tutorial_user_content_store.md
@@ -0,0 +1,67 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - User Defined Content Store for Tasks</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): User Defined Content Store for Tasks
+
+The purpose of the user defined content store is to provide an easy-to-use, task-scoped temporary metadata store.
+In this chapter, we'll learn how to implement and use a content store in user defined tasks.
+
+### Content Store Implementation
+
+Extend the abstract class UserContentStore:
+    
+    private static class ContentStoreTask extends UserContentStore implements Task {
+      @Override public TaskResult run() {
+        ...
+      }
+      @Override public void cancel() {
+        ...
+      }
+    }
+    
+The default methods support 3 types of scopes:
+1. WORKFLOW: Define the content store at the workflow level
+2. JOB: Define the content store at the job level
+3. TASK: Define the content store at the task level
+
+### Content Store Usage
+
+Access the content store in the Task.run() method:
+
+      private static class ContentStoreTask extends UserContentStore implements Task {
+        @Override public TaskResult run() {
+          // put values into the store
+          putUserContent("ContentTest", "Value1", Scope.JOB);
+          putUserContent("ContentTest", "Value2", Scope.WORKFLOW);
+          putUserContent("ContentTest", "Value3", Scope.TASK);
+          
+          // get the values with the same key in the different scopes
+          if (!getUserContent("ContentTest", Scope.JOB).equals("Value1") ||
+              !getUserContent("ContentTest", Scope.WORKFLOW).equals("Value2") ||
+              !getUserContent("ContentTest", Scope.TASK).equals("Value3")) {
+            return new TaskResult(TaskResult.Status.FAILED, null);
+          }
+          
+          return new TaskResult(TaskResult.Status.COMPLETED, null);
+        }
+      }

http://git-wip-us.apache.org/repos/asf/helix/blob/8bdfc912/website/0.8.1/src/site/markdown/tutorial_user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/website/0.8.1/src/site/markdown/tutorial_user_def_rebalancer.md b/website/0.8.1/src/site/markdown/tutorial_user_def_rebalancer.md
new file mode 100644
index 0000000..2149739
--- /dev/null
+++ b/website/0.8.1/src/site/markdown/tutorial_user_def_rebalancer.md
@@ -0,0 +1,172 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - User-Defined Rebalancing</title>
+</head>
+
+## [Helix Tutorial](./Tutorial.html): User-Defined Rebalancing
+
+Even though Helix can compute both the location and the state of replicas internally using a default fully-automatic rebalancer, specific applications may require rebalancing strategies that optimize for different requirements. Thus, Helix allows applications to plug in arbitrary rebalancer algorithms that implement a provided interface. One of the main design goals of Helix is to provide maximum flexibility to any distributed application. Thus, it allows applications to fully implement the rebalancer, which is the core constraint solver in the system, if the application developer so chooses.
+
+Whenever the state of the cluster changes, as is the case when participants join or leave the cluster, Helix automatically calls the rebalancer to compute a new mapping of all the replicas in the resource. When using a pluggable rebalancer, the only required step is to register it with Helix. Subsequently, no additional bootstrapping steps are necessary. Helix uses reflection to look up and load the class dynamically at runtime. As a result, it is also technically possible to change the rebalancing strategy used at any time.
+
+The Rebalancer interface is as follows:
+
+```
+void init(HelixManager manager);
+
+IdealState computeNewIdealState(String resourceName, IdealState currentIdealState,
+    final CurrentStateOutput currentStateOutput, final ClusterDataCache clusterData);
+```
+The first parameter is the resource to rebalance, the second is pre-existing ideal mappings, the third is a snapshot of the actual placements and state assignments, and the fourth is a full cache of all of the cluster data available to Helix. Internally, Helix implements the same interface for its own rebalancing routines, so a user-defined rebalancer will be cognizant of the same information about the cluster as an internal implementation. Helix strives to provide applications the ability to implement algorithms that may require a large portion of the entire state of the cluster to make the best placement and state assignment decisions possible.
+
+An IdealState is a full representation of the location of each replica of each partition of a given resource. This is a simple representation of the placement that the algorithm believes is the best possible. If the placement meets all defined constraints, this is what will become the actual state of the distributed system.
+
+### Specifying a Rebalancer
+For implementations that set up the cluster through existing code, the following HelixAdmin calls will update the Rebalancer class:
+
+```
+IdealState idealState = helixAdmin.getResourceIdealState(clusterName, resourceName);
+idealState.setRebalanceMode(RebalanceMode.USER_DEFINED);
+idealState.setRebalancerClassName(className);
+helixAdmin.setResourceIdealState(clusterName, resourceName, idealState);
+```
+
+There are two key fields to set to specify that a pluggable rebalancer should be used. First, the rebalance mode should be set to USER_DEFINED, and second the rebalancer class name should be set to a class that implements Rebalancer and is within the scope of the project. The class name is a fully-qualified class name consisting of its package and its name. Without specification of the USER_DEFINED mode, the user-defined rebalancer class will not be used even if specified. Furthermore, Helix will not attempt to rebalance the resources through its standard routines if its mode is USER_DEFINED, regardless of whether or not a rebalancer class is registered.
+
+### Example
+
+In the next release (0.7.0), we will provide a full recipe of a user-defined rebalancer in action.
+
+Consider the case where partitions are locks in a lock manager and 6 locks are to be distributed evenly to a set of participants, and only one participant can hold each lock. We can define a rebalancing algorithm that simply takes the modulus of the lock number and the number of participants to evenly distribute the locks across participants. Helix allows capping the number of partitions a participant can accept, but since locks are lightweight, we do not need to define a restriction in this case. The following is a succinct implementation of this algorithm.
+
+```
+@Override
+IdealState computeNewIdealState(String resourceName, IdealState currentIdealState,
+    final CurrentStateOutput currentStateOutput, final ClusterDataCache clusterData) {
+  // Get the list of live participants in the cluster
+  List<String> liveParticipants = new ArrayList<String>(clusterData.getLiveInstances().keySet());
+
+  // Count the number of participants allowed to lock each lock (in this example, this is 1)
+  int lockHolders = Integer.parseInt(currentIdealState.getReplicas());
+
+  // Fairly assign the lock state to the participants using a simple mod-based sequential
+  // assignment. For instance, if each lock can be held by 3 participants, lock 0 would be held
+  // by participants (0, 1, 2), lock 1 would be held by (1, 2, 3), and so on, wrapping around the
+  // number of participants as necessary.
+  int i = 0;
+  for (String partition : currentIdealState.getPartitionSet()) {
+    List<String> preferenceList = new ArrayList<String>();
+    for (int j = i; j < i + lockHolders; j++) {
+      int participantIndex = j % liveParticipants.size();
+      String participant = liveParticipants.get(participantIndex);
+      // enforce that a participant can only have one instance of a given lock
+      if (!preferenceList.contains(participant)) {
+        preferenceList.add(participant);
+      }
+    }
+    currentIdealState.setPreferenceList(partition, preferenceList);
+    i++;
+  }
+  return currentIdealState;
+}
+```
+
+Here are the IdealState preference lists emitted by the user-defined rebalancer for a 3-participant system whenever there is a change to the set of participants.
+
+* Participant_A joins
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_A"],
+  "lock_2": ["Participant_A"],
+  "lock_3": ["Participant_A"],
+  "lock_4": ["Participant_A"],
+  "lock_5": ["Participant_A"],
+}
+```
+
+For each resource, a preference list maps each partition to the participants serving its replicas. The state model is a simple LOCKED/RELEASED model, so participant A holds all lock partitions in the LOCKED state.
+
+* Participant_B joins
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_B"],
+  "lock_2": ["Participant_A"],
+  "lock_3": ["Participant_B"],
+  "lock_4": ["Participant_A"],
+  "lock_5": ["Participant_B"],
+}
+```
+
+Now that there are two participants, the simple mod-based function assigns every other lock to the second participant. On any system change, the rebalancer is invoked so that the application can define how to redistribute its resources.
+
+* Participant_C joins (steady state)
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_B"],
+  "lock_2": ["Participant_C"],
+  "lock_3": ["Participant_A"],
+  "lock_4": ["Participant_B"],
+  "lock_5": ["Participant_C"],
+}
+```
+
+This is the steady state of the system. Notice that four of the six locks now have a different owner. That is because of the naïve modulus-based assignment approach used by the user-defined rebalancer. However, the interface is flexible enough to allow you to employ consistent hashing or any other scheme if minimal movement is a system requirement.
+
+* Participant_B fails
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_C"],
+  "lock_2": ["Participant_A"],
+  "lock_3": ["Participant_C"],
+  "lock_4": ["Participant_A"],
+  "lock_5": ["Participant_C"],
+}
+```
+
+On any node failure, as in the case of node addition, the rebalancer is invoked automatically so that it can generate a new mapping as a response to the change. Helix ensures that the Rebalancer has the opportunity to reassign locks as required by the application.
+
+* Participant_B (or the replacement for the original Participant_B) rejoins
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_B"],
+  "lock_2": ["Participant_C"],
+  "lock_3": ["Participant_A"],
+  "lock_4": ["Participant_B"],
+  "lock_5": ["Participant_C"],
+}
+```
+
+The rebalancer was invoked once again and the resulting IdealState preference lists reflect the steady state.
+
+### Caveats
+- The rebalancer class must be available at runtime, or else Helix will not attempt to rebalance at all
+- The Helix controller will only take into account the preference lists in the new IdealState for this release. In 0.7.0, Helix rebalancers will be able to compute the full resource assignment, including the states.
+- Helix does not currently persist the new IdealState computed by the user-defined rebalancer. However, the Helix property store is available for saving any computed state. In 0.7.0, Helix will persist the result of running the rebalancer.