Posted to commits@helix.apache.org by ka...@apache.org on 2013/11/16 03:31:58 UTC

[01/16] [HELIX-270] Include documentation for previous version on the website

Updated Branches:
  refs/heads/master 170425255 -> 80af2dedd


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_participant.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_participant.md b/src/site/markdown/tutorial_participant.md
deleted file mode 100644
index d2812da..0000000
--- a/src/site/markdown/tutorial_participant.md
+++ /dev/null
@@ -1,105 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Participant</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Participant
-
-In this chapter, we\'ll learn how to implement a Participant, which is a primary functional component of a distributed system.
-
-
-### Start the Helix agent
-
-The Helix agent is a common component that connects each system component with the controller.
-
-It requires the following parameters:
- 
-* clusterName: A logical name to represent the group of nodes
-* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
-* instanceType: Type of the process. This can be one of the following types; in this case, use PARTICIPANT:
-    * CONTROLLER: Process that controls the cluster; any number of controllers can be started, but only one will be active at any given time.
-    * PARTICIPANT: Process that performs the actual task in the distributed system.
-    * SPECTATOR: Process that observes the changes in the cluster.
-    * ADMIN: Used to carry out system admin actions.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
-
-After the Helix manager instance is created, the only thing that needs to be registered is the state model factory.
-The methods of the State Model will be called when the controller sends transitions to the Participant.  In this example, we'll use the OnlineOffline factory.  Other options include:
-
-* MasterSlaveStateModelFactory
-* LeaderStandbyStateModelFactory
-* BootstrapHandler
-* _An application defined state model factory_
-
-
-```
-manager = HelixManagerFactory.getZKHelixManager(clusterName,
-                                                instanceName,
-                                                InstanceType.PARTICIPANT,
-                                                zkConnectString);
-StateMachineEngine stateMach = manager.getStateMachineEngine();
-
-// Create a state model factory that returns a state model object for each partition
-stateModelFactory = new OnlineOfflineStateModelFactory();
-stateMach.registerStateModelFactory(stateModelType, stateModelFactory);
-manager.connect();
-```
-
-Helix doesn\'t know what it means to change from OFFLINE\-\-\>ONLINE or ONLINE\-\-\>OFFLINE.  The following code snippet shows where you insert your system logic for these two state transitions.
-
-```
-public class OnlineOfflineStateModelFactory extends
-        StateModelFactory<StateModel> {
-    @Override
-    public StateModel createNewStateModel(String stateUnitKey) {
-        OnlineOfflineStateModel stateModel = new OnlineOfflineStateModel();
-        return stateModel;
-    }
-    @StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFLINE")
-    public static class OnlineOfflineStateModel extends StateModel {
-
-        @Transition(from = "OFFLINE", to = "ONLINE")
-        public void onBecomeOnlineFromOffline(Message message,
-                NotificationContext context) {
-
-            System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
-
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-            // Application logic to handle transition                                                     //
-            // For example, you might start a service, run initialization, etc                            //
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-        }
-
-        @Transition(from = "ONLINE", to = "OFFLINE")
-        public void onBecomeOfflineFromOnline(Message message,
-                NotificationContext context) {
-
-            System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
-
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-            // Application logic to handle transition                                                     //
-            // For example, you might shutdown a service, log this event, or change monitoring settings   //
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-        }
-    }
-}
-```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_propstore.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_propstore.md b/src/site/markdown/tutorial_propstore.md
deleted file mode 100644
index 377967f..0000000
--- a/src/site/markdown/tutorial_propstore.md
+++ /dev/null
@@ -1,34 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Application Property Store</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Application Property Store
-
-In this chapter, we\'ll learn how to use the application property store.
-
-### Property Store
-
-It is common for an application to need support for distributed, shared data structures.  Helix uses Zookeeper to store the application data, and can therefore provide notifications when the data changes.
-
-While you could use Zookeeper directly, Helix supports caching the data with a write-through cache. This is far more efficient than reading from Zookeeper on every access.
-
-See [HelixManager.getHelixPropertyStore](./apidocs/reference/org/apache/helix/store/package-summary.html) for details.
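-
-For example, here is a minimal sketch of writing and reading through the property store, assuming a connected manager instance; the path, record id, and field names here are hypothetical:
-
-```
-HelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
-
-// Write a record; subsequent reads are served from the write-through cache
-ZNRecord record = new ZNRecord("myAppConfig");
-record.setSimpleField("threshold", "100");
-store.set("/myAppConfig", record, AccessOption.PERSISTENT);
-
-// Read it back
-ZNRecord fetched = store.get("/myAppConfig", null, AccessOption.PERSISTENT);
-```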

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_rebalance.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_rebalance.md b/src/site/markdown/tutorial_rebalance.md
deleted file mode 100644
index 8f42a5a..0000000
--- a/src/site/markdown/tutorial_rebalance.md
+++ /dev/null
@@ -1,181 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Rebalancing Algorithms</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Rebalancing Algorithms
-
-The placement of partitions in a distributed system is essential for the reliability and scalability of the system.  For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can satisfy this guarantee.  Helix provides a variant of consistent hashing based on the RUSH algorithm, among others.
-
-This means that, given the number of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
-
-* Each node has the same number of partitions
-* Replicas of the same partition do not stay on the same node
-* When a node fails, the partitions will be equally distributed among the remaining nodes
-* When new nodes are added, the number of partitions moved will be minimized along with satisfying the above criteria
-
-Helix employs a rebalancing algorithm to compute the _ideal state_ of the system.  When the _current state_ differs from the _ideal state_, Helix uses the _ideal state_ as the target and computes the appropriate transitions needed to bring the system to it.
-
-Helix makes it easy to perform this operation, while giving you control over the algorithm.  In this section, we\'ll see how to implement the desired behavior.
-
-Helix has four options for rebalancing, in increasing order of customization by the system builder:
-
-* FULL_AUTO
-* SEMI_AUTO
-* CUSTOMIZED
-* USER_DEFINED
-
-```
-            | FULL_AUTO | SEMI_AUTO | CUSTOMIZED | USER_DEFINED |
-   --------------------------------------------------------------
-   LOCATION | HELIX     | APP       | APP        | APP          |
-   --------------------------------------------------------------
-      STATE | HELIX     | HELIX     | APP        | APP          |
-   --------------------------------------------------------------
-```
-
-
-### FULL_AUTO
-
-When the rebalance mode is set to FULL_AUTO, Helix controls both the location of the replica along with the state. This option is useful for applications where creation of a replica is not expensive. 
-
-For example, consider this system that uses a MasterSlave state model, with 3 partitions and 2 replicas in the ideal state.
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "REBALANCE_MODE" : "FULL_AUTO",
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  }
-  "listFields" : {
-    "MyResource_0" : [],
-    "MyResource_1" : [],
-    "MyResource_2" : []
-  },
-  "mapFields" : {
-  }
-}
-```
-
-If there are 3 nodes in the cluster, then Helix will balance the masters and slaves equally.  The ideal state is therefore:
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  },
-  "mapFields" : {
-    "MyResource_0" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE",
-    },
-    "MyResource_1" : {
-      "N2" : "MASTER",
-      "N3" : "SLAVE",
-    },
-    "MyResource_2" : {
-      "N3" : "MASTER",
-      "N1" : "SLAVE",
-    }
-  }
-}
-```
-
-Another typical example is evenly distributing a group of tasks among the currently healthy processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node. 
-When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes, resulting in a balanced 20 tasks per node. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node, resulting in a balanced distribution of 12 tasks per node.
-
-### SEMI_AUTO
-
-When the application needs to control the placement of the replicas, use the SEMI_AUTO rebalance mode.
-
-Example: In the ideal state below, the partition \'MyResource_0\' is constrained to be placed only on node1 or node2.  The choice of _state_ is still controlled by Helix.  That means MyResource_0.MASTER could be on node1 and MyResource_0.SLAVE on node2, or vice versa, but neither replica would be placed on node3.
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "REBALANCE_MODE" : "SEMI_AUTO",
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  }
-  "listFields" : {
-    "MyResource_0" : [node1, node2],
-    "MyResource_1" : [node2, node3],
-    "MyResource_2" : [node3, node1]
-  },
-  "mapFields" : {
-  }
-}
-```
-
-The MasterSlave state model requires that a partition has exactly one MASTER at all times, and the other replicas should be SLAVEs.  In this simple example with 2 replicas per partition, there would be one MASTER and one SLAVE.  Upon failover, a SLAVE has to assume mastership, and a new SLAVE will be generated.
-
-In this mode, when node1 fails, unlike in FULL_AUTO mode, the partition is _not_ moved from node1 to node3. Instead, Helix will decide to change the state of MyResource_0 on node2 from SLAVE to MASTER, based on the system constraints.
-
-### CUSTOMIZED
-
-Helix offers a third mode called CUSTOMIZED, in which the application controls the placement _and_ state of each replica. The application needs to implement a callback interface that Helix invokes when the cluster state changes.
-Within this callback, the application can recompute the ideal state. Helix will then issue the appropriate transitions such that the _ideal state_ and the _current state_ converge.
-
-Here\'s an example, again with 3 partitions, 2 replicas per partition, and the MasterSlave state model:
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "REBALANCE_MODE" : "CUSTOMIZED",
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  },
-  "mapFields" : {
-    "MyResource_0" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE",
-    },
-    "MyResource_1" : {
-      "N2" : "MASTER",
-      "N3" : "SLAVE",
-    },
-    "MyResource_2" : {
-      "N3" : "MASTER",
-      "N1" : "SLAVE",
-    }
-  }
-}
-```
-
-Suppose the current state of the system is 'MyResource_0' -> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE,N2:MASTER}. While the application decides which node is MASTER and which is SLAVE, Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters, which violates the MasterSlave constraint that there is exactly one MASTER at a time.  Helix will first issue MASTER-->SLAVE to N1 and after it is completed, it will issue SLAVE-->MASTER to N2. 
-
-### USER_DEFINED
-
-For maximum flexibility, Helix exposes an interface that allows applications to plug in custom rebalancing logic. By providing the name of a class that implements the Rebalancer interface, Helix will automatically invoke that class whenever there is a change to the live participants in the cluster. For more, see [User-Defined Rebalancer](./tutorial_user_def_rebalancer.html).
-
-### Backwards Compatibility
-
-In previous versions, FULL_AUTO was called AUTO_REBALANCE and SEMI_AUTO was called AUTO. Furthermore, they were presented as the IDEAL_STATE_MODE. Helix supports both IDEAL_STATE_MODE and REBALANCE_MODE, but IDEAL_STATE_MODE is now deprecated and may be phased out in future versions.
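-
-For example, switching an existing resource to the newer REBALANCE_MODE field might look like the following sketch, assuming a HelixAdmin instance and existing cluster and resource names:
-
-```
-// Sketch: set the rebalance mode on an existing resource
-IdealState idealState = helixAdmin.getResourceIdealState(clusterName, resourceName);
-idealState.setRebalanceMode(RebalanceMode.FULL_AUTO);
-helixAdmin.setResourceIdealState(clusterName, resourceName, idealState);
-```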

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_spectator.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_spectator.md b/src/site/markdown/tutorial_spectator.md
deleted file mode 100644
index 24c1cf4..0000000
--- a/src/site/markdown/tutorial_spectator.md
+++ /dev/null
@@ -1,76 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Spectator</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Spectator
-
-Next, we\'ll learn how to implement a Spectator.  Typically, a spectator needs to react to changes within the distributed system.  Examples: a client that needs to know where to send a request, or a topic consumer in a consumer group.  The spectator is automatically informed of changes in the _external state_ of the cluster, without having to add any code to keep track of other components in the system.
-
-### Start the Helix agent
-
-As for a Participant, the Helix agent is the common component that connects each system component with the controller.
-
-It requires the following parameters:
-
-* clusterName: A logical name to represent the group of nodes
-* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
-* instanceType: Type of the process. This can be one of the following types; in this case, use SPECTATOR:
-    * CONTROLLER: Process that controls the cluster; any number of controllers can be started, but only one will be active at any given time.
-    * PARTICIPANT: Process that performs the actual task in the distributed system.
-    * SPECTATOR: Process that observes the changes in the cluster.
-    * ADMIN: Used to carry out system admin actions.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3.
-
-After the Helix manager instance is created, the only thing that needs to be registered is the listener.  When the ExternalView changes, the listener is notified.
-
-### Spectator Code
-
-A spectator observes the cluster and is notified when the state of the system changes. Helix consolidates the state of the entire cluster in one znode called ExternalView.
-Helix provides a default implementation, RoutingTableProvider, that caches the cluster state and updates it when there is a change in the cluster.
-
-```
-manager = HelixManagerFactory.getZKHelixManager(clusterName,
-                                                instanceName,
-                                                InstanceType.SPECTATOR,
-                                                zkConnectString);
-manager.connect();
-RoutingTableProvider routingTableProvider = new RoutingTableProvider();
-manager.addExternalViewChangeListener(routingTableProvider);
-```
-
-In the following code snippet, the application sends the request to a valid instance by interrogating the external view.  Suppose the desired resource for this request is in the partition myDB_1.
-
-```
-// instances = routingTableProvider.getInstances("RESOURCE_NAME", "PARTITION_NAME", "PARTITION_STATE");
-instances = routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
-
-////////////////////////////////////////////////////////////////////////////////////////////////
-// Application-specific code to send a request to one of the instances                        //
-////////////////////////////////////////////////////////////////////////////////////////////////
-
-theInstance = instances.get(0);  // should choose an instance and throw an exception if none are available
-result = theInstance.sendRequest(yourApplicationRequest, responseObject);
-
-```
-
-When the external view changes, the application needs to react by sending requests to a different instance.  
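-
-If the cached routing table is not sufficient, an application can also register its own listener and re-route requests when notified. A minimal sketch follows; the class name and the routing logic inside it are hypothetical, not part of the original tutorial:
-
-```
-public class MyRoutingLogic implements ExternalViewChangeListener {
-  @Override
-  public void onExternalViewChange(List<ExternalView> externalViewList,
-      NotificationContext changeContext) {
-    // Hypothetical application logic: recompute where requests should be
-    // routed based on the updated external views
-  }
-}
-
-// manager.addExternalViewChangeListener(new MyRoutingLogic());
-```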
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_state.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_state.md b/src/site/markdown/tutorial_state.md
deleted file mode 100644
index 4f7b1b5..0000000
--- a/src/site/markdown/tutorial_state.md
+++ /dev/null
@@ -1,131 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - State Machine Configuration</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): State Machine Configuration
-
-In this chapter, we\'ll learn about the state models provided by Helix, and how to create your own custom state model.
-
-## State Models
-
-Helix comes with 3 default state models that are commonly used.  It is possible to have multiple state models in a cluster.
-Every resource that is added should be configured to use a state model that governs its _ideal state_.
-
-### MASTER-SLAVE
-
-* 3 states: OFFLINE, SLAVE, MASTER
-* Maximum number of masters: 1
-* The number of slaves is based on the replication factor, which can be specified while adding the resource.
-
-
-### ONLINE-OFFLINE
-
-* Has 2 states: OFFLINE and ONLINE.  This simple state model is a good starting point for most applications.
-
-### LEADER-STANDBY
-
-* 1 Leader and multiple stand-bys.  The idea is that exactly one leader accomplishes a designated task, while the stand-bys are ready to take over if the leader fails.
-
-## Constraints
-
-In addition to the state machine configuration, one can specify the constraints of states and transitions.
-
-For example, one can say:
-
-* MASTER:1
-<br/>Maximum number of replicas in MASTER state at any time is 1
-
-* OFFLINE-SLAVE:5
-<br/>The maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5 in this example.
-
-### Dynamic State Constraints
-
-We also support two dynamic upper bounds for the number of replicas in each state:
-
-* N: The number of replicas in the state is at most the number of live participants in the cluster
-* R: The number of replicas in the state is at most the specified replica count for the partition
-
-### State Priority
-
-Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 MASTER and 2 SLAVES, but only 1 node is active, Helix must promote it to MASTER. This behavior is achieved by providing the state priority list as \[MASTER, SLAVE\].
-
-### State Transition Priority
-
-Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. You can control this by overriding the priority order.
-
-## Special States
-
-### DROPPED
-
-The DROPPED state is used to signify a replica that was served by a given participant, but is no longer served. This allows Helix and its participants to effectively clean up. There are two requirements that every new state model should follow with respect to the DROPPED state:
-
-* The DROPPED state must be defined
-* There must be a path to DROPPED for every state in the model
-
-### ERROR
-
-The ERROR state is used whenever the participant serving a partition encounters an error and cannot continue to serve the partition. HelixAdmin has \"reset\" functionality to allow participants to recover from the ERROR state.
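-
-For example, resetting a partition that landed in the ERROR state might look like the following sketch; the cluster, instance, resource, and partition names here are hypothetical:
-
-```
-// Sketch: reset a partition in the ERROR state on a given participant
-helixAdmin.resetPartition("MyCluster", "localhost_12001", "MyResource",
-    Arrays.asList("MyResource_0"));
-```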
-
-## Annotated Example
-
-Below is a complete definition of a Master-Slave state model. Notice the fields marked REQUIRED; these are essential for any state model definition.
-
-```
-StateModelDefinition stateModel = new StateModelDefinition.Builder("MasterSlave")
-  // OFFLINE is the state that the system starts in (initial state is REQUIRED)
-  .initialState("OFFLINE")
-
-  // Lowest number here indicates highest priority, no value indicates lowest priority
-  .addState("MASTER", 1)
-  .addState("SLAVE", 2)
-  .addState("OFFLINE")
-
-  // Note the special inclusion of the DROPPED state (REQUIRED)
-  .addState(HelixDefinedState.DROPPED.toString())
-
-  // No more than one master allowed
-  .upperBound("MASTER", 1)
-
-  // R indicates an upper bound of number of replicas for each partition
-  .dynamicUpperBound("SLAVE", "R")
-
-  // Add some high-priority transitions
-  .addTransition("SLAVE", "MASTER", 1)
-  .addTransition("OFFLINE", "SLAVE", 2)
-
-  // Using the same priority value indicates that these transitions can fire in any order
-  .addTransition("MASTER", "SLAVE", 3)
-  .addTransition("SLAVE", "OFFLINE", 3)
-
-  // Not specifying a value defaults to lowest priority
-  // Notice the inclusion of the OFFLINE to DROPPED transition
-  // Since every state has a path to OFFLINE, they each now have a path to DROPPED (REQUIRED)
-  .addTransition("OFFLINE", HelixDefinedState.DROPPED.toString())
-
-  // Create the StateModelDefinition instance
-  .build();
-
-  // Use the isValid() function to make sure the StateModelDefinition will work without issues
-  Assert.assertTrue(stateModel.isValid());
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_throttling.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_throttling.md b/src/site/markdown/tutorial_throttling.md
deleted file mode 100644
index 2317cf1..0000000
--- a/src/site/markdown/tutorial_throttling.md
+++ /dev/null
@@ -1,38 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Throttling</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Throttling
-
-In this chapter, we\'ll learn how to control the parallel execution of cluster tasks.  Only a centralized cluster manager with global knowledge is capable of coordinating this decision.
-
-### Throttling
-
-Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some transitions may be lightweight, but others might involve moving data, which is quite expensive from a network and IOPS perspective.
-
-Helix allows applications to set a threshold on transitions. The threshold can be set at multiple scopes (see the sketch after this list):
-
-* MessageType, e.g. STATE_TRANSITION
-* TransitionType, e.g. SLAVE-MASTER
-* Resource, e.g. database
-* Node, i.e. per-node maximum transitions in parallel
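-
-A sketch of setting such a constraint through HelixAdmin follows; the instance pattern, constraint id, and limit of 10 are hypothetical:
-
-```
-// Sketch: limit concurrent STATE_TRANSITION messages per instance to 10
-ConstraintItemBuilder builder = new ConstraintItemBuilder();
-builder.addConstraintAttribute(ConstraintAttribute.MESSAGE_TYPE.toString(), "STATE_TRANSITION")
-    .addConstraintAttribute(ConstraintAttribute.INSTANCE.toString(), ".*")
-    .addConstraintAttribute(ConstraintAttribute.CONSTRAINT_VALUE.toString(), "10");
-helixAdmin.setConstraint(clusterName, ConstraintType.MESSAGE_CONSTRAINT,
-    "constraint1", builder.build());
-```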
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_user_def_rebalancer.md b/src/site/markdown/tutorial_user_def_rebalancer.md
deleted file mode 100644
index 44b202a..0000000
--- a/src/site/markdown/tutorial_user_def_rebalancer.md
+++ /dev/null
@@ -1,201 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - User-Defined Rebalancing</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): User-Defined Rebalancing
-
-Even though Helix can compute both the location and the state of replicas internally using a default fully-automatic rebalancer, specific applications may require rebalancing strategies that optimize for different requirements. Helix therefore allows applications to plug in arbitrary rebalancer algorithms that implement a provided interface. One of the main design goals of Helix is to provide maximum flexibility to any distributed application, so it allows applications to fully implement the rebalancer, which is the core constraint solver in the system, if the application developer so chooses.
-
-Whenever the state of the cluster changes, as is the case when participants join or leave the cluster, Helix automatically calls the rebalancer to compute a new mapping of all the replicas in the resource. When using a pluggable rebalancer, the only required step is to register it with Helix. Subsequently, no additional bootstrapping steps are necessary. Helix uses reflection to look up and load the class dynamically at runtime. As a result, it is also technically possible to change the rebalancing strategy used at any time.
-
-The Rebalancer interface is as follows:
-
-```
-ResourceMapping computeResourceMapping(final Resource resource,
-      final IdealState currentIdealState, final CurrentStateOutput currentStateOutput,
-      final ClusterDataCache clusterData);
-```
-The first parameter is the resource to rebalance, the second is pre-existing ideal mappings, the third is a snapshot of the actual placements and state assignments, and the fourth is a full cache of all of the cluster data available to Helix. Internally, Helix implements the same interface for its own rebalancing routines, so a user-defined rebalancer will be cognizant of the same information about the cluster as an internal implementation. Helix strives to provide applications the ability to implement algorithms that may require a large portion of the entire state of the cluster to make the best placement and state assignment decisions possible.
-
-A ResourceMapping is a full representation of the location and the state of each replica of each partition of a given resource. This is a simple representation of the placement that the algorithm believes is the best possible. If the placement meets all defined constraints, this is what will become the actual state of the distributed system.
-
-### Specifying a Rebalancer
-For implementations that set up the cluster through existing code, the following HelixAdmin calls will update the Rebalancer class:
-
-```
-IdealState idealState = helixAdmin.getResourceIdealState(clusterName, resourceName);
-idealState.setRebalanceMode(RebalanceMode.USER_DEFINED);
-idealState.setRebalancerClassName(className);
-helixAdmin.setResourceIdealState(clusterName, resourceName, idealState);
-```
-There are two key fields to set to specify that a pluggable rebalancer should be used. First, the rebalance mode should be set to USER_DEFINED, and second, the rebalancer class name should be set to a class that implements Rebalancer and is within the scope of the project. The class name is a fully-qualified class name consisting of its package and its name. Unless the mode is set to USER_DEFINED, the rebalancer class will not be used even if it is specified. Furthermore, Helix will not attempt to rebalance the resources through its standard routines if the mode is USER_DEFINED, regardless of whether or not a rebalancer class is registered.
-
-Alternatively, the rebalancer class name can be specified in a YAML file representing the cluster configuration. The requirements are the same, but the representation is more compact. Below are the first few lines of an example YAML file. To see a full YAML specification, see the [YAML tutorial](./tutorial_yaml.html).
-
-```
-clusterName: lock-manager-custom-rebalancer # unique name for the cluster
-resources:
-  - name: lock-group # unique resource name
-    rebalancer: # we will provide our own rebalancer
-      mode: USER_DEFINED
-      class: domain.project.helix.rebalancer.UserDefinedRebalancerClass
-...
-```
-
-### Example
-We demonstrate plugging in a simple user-defined rebalancer as part of a revisit of the [distributed lock manager](./recipes/user_def_rebalancer.html) example. It includes a functional Rebalancer implementation, as well as the entire YAML file used to define the cluster.
-
-Consider the case where partitions are locks in a lock manager and 6 locks are to be distributed evenly to a set of participants, and only one participant can hold each lock. We can define a rebalancing algorithm that simply takes the modulus of the lock number and the number of participants to evenly distribute the locks across participants. Helix allows capping the number of partitions a participant can accept, but since locks are lightweight, we do not need to define a restriction in this case. The following is a succinct implementation of this algorithm.
-
-```
-@Override
-public ResourceAssignment computeResourceMapping(Resource resource, IdealState currentIdealState,
-    CurrentStateOutput currentStateOutput, ClusterDataCache clusterData) {
-  // Initialize an empty mapping of locks to participants
-  ResourceAssignment assignment = new ResourceAssignment(resource.getResourceName());
-
-  // Get the list of live participants in the cluster
-  List<String> liveParticipants = new ArrayList<String>(clusterData.getLiveInstances().keySet());
-
-  // Get the state model (should be a simple lock/unlock model) and the highest-priority state
-  String stateModelName = currentIdealState.getStateModelDefRef();
-  StateModelDefinition stateModelDef = clusterData.getStateModelDef(stateModelName);
-  if (stateModelDef.getStatesPriorityList().size() < 1) {
-    LOG.error("Invalid state model definition. There should be at least one state.");
-    return assignment;
-  }
-  String lockState = stateModelDef.getStatesPriorityList().get(0);
-
-  // Count the number of participants allowed to lock each lock
-  String stateCount = stateModelDef.getNumInstancesPerState(lockState);
-  int lockHolders = 0;
-  try {
-    // a numeric value is a custom-specified number of participants allowed to lock the lock
-    lockHolders = Integer.parseInt(stateCount);
-  } catch (NumberFormatException e) {
-    LOG.error("Invalid state model definition. The lock state does not have a valid count");
-    return assignment;
-  }
-
-  // Fairly assign the lock state to the participants using a simple mod-based sequential
-  // assignment. For instance, if each lock can be held by 3 participants, lock 0 would be held
-  // by participants (0, 1, 2), lock 1 would be held by (1, 2, 3), and so on, wrapping around the
-  // number of participants as necessary.
-  // This assumes a simple lock-unlock model where the only state of interest is which nodes have
-  // acquired each lock.
-  int i = 0;
-  for (Partition partition : resource.getPartitions()) {
-    Map<String, String> replicaMap = new HashMap<String, String>();
-    for (int j = i; j < i + lockHolders; j++) {
-      int participantIndex = j % liveParticipants.size();
-      String participant = liveParticipants.get(participantIndex);
-      // enforce that a participant can only have one instance of a given lock
-      if (!replicaMap.containsKey(participant)) {
-        replicaMap.put(participant, lockState);
-      }
-    }
-    assignment.addReplicaMap(partition, replicaMap);
-    i++;
-  }
-  return assignment;
-}
-```
-
-Here is the ResourceMapping emitted by the user-defined rebalancer for a 3-participant system whenever there is a change to the set of participants.
-
-* Participant_A joins
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_A": "LOCKED"},
-  "lock_2": { "Participant_A": "LOCKED"},
-  "lock_3": { "Participant_A": "LOCKED"},
-  "lock_4": { "Participant_A": "LOCKED"},
-  "lock_5": { "Participant_A": "LOCKED"},
-}
-```
-
-A ResourceMapping maps, for each resource, each partition to the participants serving its replicas and the state of each replica. The state model is a simple LOCKED/RELEASED model, so participant A holds all lock partitions in the LOCKED state.
-
-* Participant_B joins
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_B": "LOCKED"},
-  "lock_2": { "Participant_A": "LOCKED"},
-  "lock_3": { "Participant_B": "LOCKED"},
-  "lock_4": { "Participant_A": "LOCKED"},
-  "lock_5": { "Participant_B": "LOCKED"},
-}
-```
-
-Now that there are two participants, the simple mod-based function assigns every other lock to the second participant. On any system change, the rebalancer is invoked so that the application can define how to redistribute its resources.
-
-* Participant_C joins (steady state)
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_B": "LOCKED"},
-  "lock_2": { "Participant_C": "LOCKED"},
-  "lock_3": { "Participant_A": "LOCKED"},
-  "lock_4": { "Participant_B": "LOCKED"},
-  "lock_5": { "Participant_C": "LOCKED"},
-}
-```
-
-This is the steady state of the system. Notice that four of the six locks now have a different owner. That is because of the naïve modulus-based assignment approach used by the user-defined rebalancer. However, the interface is flexible enough to allow you to employ consistent hashing or any other scheme if minimal movement is a system requirement.
-
-* Participant_B fails
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_C": "LOCKED"},
-  "lock_2": { "Participant_A": "LOCKED"},
-  "lock_3": { "Participant_C": "LOCKED"},
-  "lock_4": { "Participant_A": "LOCKED"},
-  "lock_5": { "Participant_C": "LOCKED"},
-}
-```
-
-On any node failure, as in the case of node addition, the rebalancer is invoked automatically so that it can generate a new mapping as a response to the change. Helix ensures that the Rebalancer has the opportunity to reassign locks as required by the application.
-
-* Participant_B (or the replacement for the original Participant_B) rejoins
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_B": "LOCKED"},
-  "lock_2": { "Participant_C": "LOCKED"},
-  "lock_3": { "Participant_A": "LOCKED"},
-  "lock_4": { "Participant_B": "LOCKED"},
-  "lock_5": { "Participant_C": "LOCKED"},
-}
-```
-
-The rebalancer was invoked once again and the resulting ResourceMapping reflects the steady state.
-
-### Caveats
-- The rebalancer class must be available at runtime, or else Helix will not attempt to rebalance at all
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_yaml.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_yaml.md b/src/site/markdown/tutorial_yaml.md
deleted file mode 100644
index 0f8e0cc..0000000
--- a/src/site/markdown/tutorial_yaml.md
+++ /dev/null
@@ -1,102 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - YAML Cluster Setup</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): YAML Cluster Setup
-
-As an alternative to using Helix Admin to set up the cluster, its resources, constraints, and the state model, Helix supports bootstrapping a cluster configuration based on a YAML file. Below is an annotated example of such a file for a simple distributed lock manager where a lock can only be LOCKED or RELEASED, and each lock only allows a single participant to hold it in the LOCKED state.
-
-```
-clusterName: lock-manager-custom-rebalancer # unique name for the cluster (required)
-resources:
-  - name: lock-group # unique resource name (required)
-    rebalancer: # required
-      mode: USER_DEFINED # required - USER_DEFINED means we will provide our own rebalancer
-      class: org.apache.helix.userdefinedrebalancer.LockManagerRebalancer # required for USER_DEFINED
-    partitions:
-      count: 12 # number of partitions for the resource (default is 1)
-      replicas: 1 # number of replicas per partition (default is 1)
-    stateModel:
-      name: lock-unlock # model name (required)
-      states: [LOCKED, RELEASED, DROPPED] # the list of possible states (required if model not built-in)
-      transitions: # the list of possible transitions (required if model not built-in)
-        - name: Unlock
-          from: LOCKED
-          to: RELEASED
-        - name: Lock
-          from: RELEASED
-          to: LOCKED
-        - name: DropLock
-          from: LOCKED
-          to: DROPPED
-        - name: DropUnlock
-          from: RELEASED
-          to: DROPPED
-        - name: Undrop
-          from: DROPPED
-          to: RELEASED
-      initialState: RELEASED # (required if model not built-in)
-    constraints:
-      state:
-        counts: # maximum number of replicas of a partition that can be in each state (required if model not built-in)
-          - name: LOCKED
-            count: "1"
-          - name: RELEASED
-            count: "-1"
-          - name: DROPPED
-            count: "-1"
-        priorityList: [LOCKED, RELEASED, DROPPED] # states in order of priority (all priorities equal if not specified)
-      transition: # transitions priority to enforce order that transitions occur
-        priorityList: [Unlock, Lock, Undrop, DropUnlock, DropLock] # all priorities equal if not specified
-participants: # list of nodes that can serve replicas (optional if dynamic joining is active, required otherwise)
-  - name: localhost_12001
-    host: localhost
-    port: 12001
-  - name: localhost_12002
-    host: localhost
-    port: 12002
-  - name: localhost_12003
-    host: localhost
-    port: 12003
-```
-
-Using a file like the one above, the cluster can be set up either with the command line:
-
-```
-incubator-helix/helix-core/target/helix-core/pkg/bin/YAMLClusterSetup.sh localhost:2199 lock-manager-config.yaml
-```
-
-or with code:
-
-```
-YAMLClusterSetup setup = new YAMLClusterSetup(zkAddress);
-InputStream input =
-    Thread.currentThread().getContextClassLoader()
-        .getResourceAsStream("lock-manager-config.yaml");
-YAMLClusterSetup.YAMLClusterConfig config = setup.setupCluster(input);
-```
-
-Some notes:
-
-- A rebalancer class is only required for the USER_DEFINED mode. It is ignored otherwise.
-
-- Built-in state models, like OnlineOffline, LeaderStandby, and MasterSlave, or state models that have already been added, only require a name for stateModel. If partition and/or replica counts are not provided, a value of 1 is assumed.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/resources/images/PFS-Generic.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/PFS-Generic.png b/src/site/resources/images/PFS-Generic.png
deleted file mode 100644
index 7eea3a0..0000000
Binary files a/src/site/resources/images/PFS-Generic.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/resources/images/RSYNC_BASED_PFS.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/RSYNC_BASED_PFS.png b/src/site/resources/images/RSYNC_BASED_PFS.png
deleted file mode 100644
index 0cc55ae..0000000
Binary files a/src/site/resources/images/RSYNC_BASED_PFS.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/site.xml
----------------------------------------------------------------------
diff --git a/src/site/site.xml b/src/site/site.xml
index 2f3ee77..2b4a64f 100644
--- a/src/site/site.xml
+++ b/src/site/site.xml
@@ -25,7 +25,8 @@
     <href>http://incubator.apache.org/</href>
   </bannerRight>
 
-  <publishDate position="right"/>
+  <publishDate position="none"/>
+  <version position="none"/>
 
   <skin>
     <groupId>org.apache.maven.skins</groupId>
@@ -56,27 +57,28 @@
       <item name="Apache Helix" href="http://helix.incubator.apache.org/"/>
     </breadcrumbs>
 
-    <menu name="Helix">
+    <menu name="Apache Helix">
       <item name="Introduction" href="./index.html"/>
       <item name="Core concepts" href="./Concepts.html"/>
       <item name="Architecture" href="./Architecture.html"/>
-      <item name="Quick Start" href="./Quickstart.html"/>
-      <item name="Tutorial" href="./Tutorial.html"/>
-      <item name="release ${currentRelease}" href="releasenotes/release-${currentRelease}.html"/>
-      <item name="Download" href="./download.html"/>
-      <item name="IRC" href="./IRC.html"/>
+      <item name="Publications" href="./Publications.html"/>
+    </menu>
+
+    <menu name="Helix 0.7.0-incubating">
+      <item name="Quick Start" href="./site-releases/0.7.0-incubating-site/Quickstart.html"/>
+      <item name="Tutorial" href="./site-releases/0.7.0-incubating-site/Tutorial.html"/>
+      <item name="Download" href="./site-releases/0.7.0-incubating-site/download.html"/>
     </menu>
 
-    <menu name="Recipes">
-      <item name="Distributed lock manager" href="./recipes/lock_manager.html"/>
-      <item name="Rabbit MQ consumer group" href="./recipes/rabbitmq_consumer_group.html"/>
-      <item name="Rsync replicated file store" href="./recipes/rsync_replicated_file_store.html"/>
-      <item name="Service Discovery" href="./recipes/service_discovery.html"/>
-      <item name="Distributed task DAG Execution" href="./recipes/task_dag_execution.html"/>
-      <item name="User-Defined Rebalancer Example" href="./recipes/user_def_rebalancer.html"/>
+    <menu name="Releases">
+      <item name="0.7.0-incubating" href="./site-releases/0.7.0-incubating-site/index.html"/>
+      <item name="0.6.2-incubating" href="./site-releases/0.6.2-incubating-site/index.html"/>
+      <item name="0.6.1-incubating" href="./site-releases/0.6.1-incubating-site/index.html"/>
+      <item name="trunk" href="./site-releases/trunk-site/index.html"/>
     </menu>
 
     <menu name="Get Involved">
+      <item name="IRC" href="./IRC.html"/>
       <item name="Mailing Lists" href="mail-lists.html"/>
       <item name="Issues" href="issue-tracking.html"/>
       <item name="Team" href="team-list.html"/>
@@ -109,14 +111,14 @@
   <custom>
     <fluidoSkin>
       <topBarEnabled>true</topBarEnabled>
-      <!-- twitter link work only with sidebar disabled -->
-      <sideBarEnabled>false</sideBarEnabled>
       <googleSearch></googleSearch>
       <twitter>
         <user>ApacheHelix</user>
         <showUser>true</showUser>
         <showFollowers>false</showFollowers>
       </twitter>
+      <!-- twitter link work only with sidebar disabled -->
+      <sideBarEnabled>true</sideBarEnabled>
     </fluidoSkin>
   </custom>
 


[14/16] git commit: [maven-release-plugin] prepare release helix-0.7.0-incubating

Posted by ka...@apache.org.
[maven-release-plugin] prepare release helix-0.7.0-incubating


Project: http://git-wip-us.apache.org/repos/asf/incubator-helix/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-helix/commit/2c29549a
Tree: http://git-wip-us.apache.org/repos/asf/incubator-helix/tree/2c29549a
Diff: http://git-wip-us.apache.org/repos/asf/incubator-helix/diff/2c29549a

Branch: refs/heads/master
Commit: 2c29549a13dabcf8f854ff9ea763e86e9969e82e
Parents: 150ce69
Author: zzhang <zz...@apache.org>
Authored: Thu Nov 14 15:22:40 2013 -0800
Committer: Kanak Biscuitwala <ka...@apache.org>
Committed: Fri Nov 15 14:40:15 2013 -0800

----------------------------------------------------------------------
 helix-admin-webapp/pom.xml                   | 2 +-
 helix-agent/pom.xml                          | 2 +-
 helix-core/pom.xml                           | 2 +-
 helix-examples/pom.xml                       | 2 +-
 pom.xml                                      | 4 ++--
 recipes/distributed-lock-manager/pom.xml     | 2 +-
 recipes/pom.xml                              | 2 +-
 recipes/rabbitmq-consumer-group/pom.xml      | 2 +-
 recipes/rsync-replicated-file-system/pom.xml | 2 +-
 recipes/service-discovery/pom.xml            | 2 +-
 recipes/task-execution/pom.xml               | 4 ++--
 recipes/user-defined-rebalancer/pom.xml      | 2 +-
 site-releases/0.6.1-incubating/pom.xml       | 2 +-
 site-releases/pom.xml                        | 2 +-
 14 files changed, 16 insertions(+), 16 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/helix-admin-webapp/pom.xml
----------------------------------------------------------------------
diff --git a/helix-admin-webapp/pom.xml b/helix-admin-webapp/pom.xml
index b4f38b5..4bd7bef 100644
--- a/helix-admin-webapp/pom.xml
+++ b/helix-admin-webapp/pom.xml
@@ -21,7 +21,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/helix-agent/pom.xml
----------------------------------------------------------------------
diff --git a/helix-agent/pom.xml b/helix-agent/pom.xml
index 7d2a0ce..e57f401 100644
--- a/helix-agent/pom.xml
+++ b/helix-agent/pom.xml
@@ -22,7 +22,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
   <artifactId>helix-agent</artifactId>
   <packaging>bundle</packaging>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/helix-core/pom.xml
----------------------------------------------------------------------
diff --git a/helix-core/pom.xml b/helix-core/pom.xml
index 6f2aeb9..07b42c7 100644
--- a/helix-core/pom.xml
+++ b/helix-core/pom.xml
@@ -21,7 +21,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/helix-examples/pom.xml
----------------------------------------------------------------------
diff --git a/helix-examples/pom.xml b/helix-examples/pom.xml
index c3b319c..f1ac3c6 100644
--- a/helix-examples/pom.xml
+++ b/helix-examples/pom.xml
@@ -21,7 +21,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 010023c..577ff10 100644
--- a/pom.xml
+++ b/pom.xml
@@ -29,7 +29,7 @@ under the License.
 
   <groupId>org.apache.helix</groupId>
   <artifactId>helix</artifactId>
-  <version>0.7.1-incubating-SNAPSHOT</version>
+  <version>0.7.0-incubating</version>
   <packaging>pom</packaging>
   <name>Apache Helix</name>
 
@@ -276,7 +276,7 @@ under the License.
     <connection>scm:git:https://git-wip-us.apache.org/repos/asf/incubator-helix.git</connection>
     <developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/incubator-helix.git</developerConnection>
     <url>https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=summary</url>
-    <tag>HEAD</tag>
+    <tag>helix-0.7.0-incubating</tag>
   </scm>
   <issueManagement>
     <system>jira</system>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/recipes/distributed-lock-manager/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/distributed-lock-manager/pom.xml b/recipes/distributed-lock-manager/pom.xml
index f9f6385..e676ac7 100644
--- a/recipes/distributed-lock-manager/pom.xml
+++ b/recipes/distributed-lock-manager/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
 
   <artifactId>distributed-lock-manager</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/recipes/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/pom.xml b/recipes/pom.xml
index 70dd2bd..ac98a08 100644
--- a/recipes/pom.xml
+++ b/recipes/pom.xml
@@ -22,7 +22,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
   <groupId>org.apache.helix.recipes</groupId>
   <artifactId>recipes</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/recipes/rabbitmq-consumer-group/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/rabbitmq-consumer-group/pom.xml b/recipes/rabbitmq-consumer-group/pom.xml
index a70947d..ded3b67 100644
--- a/recipes/rabbitmq-consumer-group/pom.xml
+++ b/recipes/rabbitmq-consumer-group/pom.xml
@@ -24,7 +24,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
 
   <artifactId>rabbitmq-consumer-group</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/recipes/rsync-replicated-file-system/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/rsync-replicated-file-system/pom.xml b/recipes/rsync-replicated-file-system/pom.xml
index 7d27f1f..4926489 100644
--- a/recipes/rsync-replicated-file-system/pom.xml
+++ b/recipes/rsync-replicated-file-system/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
 
   <artifactId>rsync-replicated-file-system</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/recipes/service-discovery/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/service-discovery/pom.xml b/recipes/service-discovery/pom.xml
index a876614..ccdfb0e 100644
--- a/recipes/service-discovery/pom.xml
+++ b/recipes/service-discovery/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
 
   <artifactId>service-discovery</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/recipes/task-execution/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/task-execution/pom.xml b/recipes/task-execution/pom.xml
index 27464c9..cace962 100644
--- a/recipes/task-execution/pom.xml
+++ b/recipes/task-execution/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
 
   <artifactId>task-execution</artifactId>
@@ -39,7 +39,7 @@ under the License.
     <dependency>
       <groupId>org.apache.helix</groupId>
       <artifactId>helix-core</artifactId>
-      <version>0.7.1-incubating-SNAPSHOT</version>
+      <version>0.7.0-incubating</version>
     </dependency>
     <dependency>
       <groupId>log4j</groupId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/recipes/user-defined-rebalancer/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/user-defined-rebalancer/pom.xml b/recipes/user-defined-rebalancer/pom.xml
index 8eba035..aeb6b82 100644
--- a/recipes/user-defined-rebalancer/pom.xml
+++ b/recipes/user-defined-rebalancer/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
 
   <artifactId>user-defined-rebalancer</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/site-releases/0.6.1-incubating/pom.xml
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/pom.xml b/site-releases/0.6.1-incubating/pom.xml
index 7efc019..d515cab 100644
--- a/site-releases/0.6.1-incubating/pom.xml
+++ b/site-releases/0.6.1-incubating/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>site-releases</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
 
   <artifactId>0.6.1-incubating-site</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/2c29549a/site-releases/pom.xml
----------------------------------------------------------------------
diff --git a/site-releases/pom.xml b/site-releases/pom.xml
index bfdb1f4..fe3905b 100644
--- a/site-releases/pom.xml
+++ b/site-releases/pom.xml
@@ -21,7 +21,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.1-incubating-SNAPSHOT</version>
+    <version>0.7.0-incubating</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <packaging>pom</packaging>


[16/16] git commit: [HELIX-270] Include documentation for previous version on the website, fix htaccess

Posted by ka...@apache.org.
[HELIX-270] Include documentation for previous version on the website, fix htaccess


Project: http://git-wip-us.apache.org/repos/asf/incubator-helix/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-helix/commit/80af2ded
Tree: http://git-wip-us.apache.org/repos/asf/incubator-helix/tree/80af2ded
Diff: http://git-wip-us.apache.org/repos/asf/incubator-helix/diff/80af2ded

Branch: refs/heads/master
Commit: 80af2deddd26258c4ee3c76ad5bdeb3074c5443d
Parents: 9ec2a1f
Author: Kanak Biscuitwala <ka...@apache.org>
Authored: Fri Nov 15 18:31:17 2013 -0800
Committer: Kanak Biscuitwala <ka...@apache.org>
Committed: Fri Nov 15 18:31:41 2013 -0800

----------------------------------------------------------------------
 src/site/resources/.htaccess | 3 +++
 1 file changed, 3 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/80af2ded/src/site/resources/.htaccess
----------------------------------------------------------------------
diff --git a/src/site/resources/.htaccess b/src/site/resources/.htaccess
index d5c7bf3..8c995fb 100644
--- a/src/site/resources/.htaccess
+++ b/src/site/resources/.htaccess
@@ -18,3 +18,6 @@
 #
 
 Redirect /download.html /download.cgi
+Redirect /site-releases/0.6.1-incubating-site/download.html /site-releases/0.6.1-incubating-site/download.cgi
+Redirect /site-releases/0.6.2-incubating-site/download.html /site-releases/0.6.2-incubating-site/download.cgi
+Redirect /site-releases/0.7.0-incubating-site/download.html /site-releases/0.7.0-incubating-site/download.cgi


[15/16] git commit: Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/incubator-helix

Posted by ka...@apache.org.
Merge branch 'master' of https://git-wip-us.apache.org/repos/asf/incubator-helix


Project: http://git-wip-us.apache.org/repos/asf/incubator-helix/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-helix/commit/9ec2a1f2
Tree: http://git-wip-us.apache.org/repos/asf/incubator-helix/tree/9ec2a1f2
Diff: http://git-wip-us.apache.org/repos/asf/incubator-helix/diff/9ec2a1f2

Branch: refs/heads/master
Commit: 9ec2a1f2d5dfff6a872b36259995747b2a581e3f
Parents: 48a99a2 1704252
Author: Kanak Biscuitwala <ka...@apache.org>
Authored: Fri Nov 15 14:41:59 2013 -0800
Committer: Kanak Biscuitwala <ka...@apache.org>
Committed: Fri Nov 15 14:41:59 2013 -0800

----------------------------------------------------------------------

----------------------------------------------------------------------



[08/16] [HELIX-270] Include documentation for previous version on the website

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/Tutorial.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/Tutorial.md b/site-releases/0.7.0-incubating/src/site/markdown/Tutorial.md
new file mode 100644
index 0000000..ee5a393
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/Tutorial.md
@@ -0,0 +1,284 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial</title>
+</head>
+
+# Helix Tutorial
+
+In this tutorial, we will cover the roles of a Helix-managed cluster, and show the code you need to write to integrate with it.  In many cases, there is a simple default behavior that is often appropriate, but you can also customize the behavior.
+
+Convention: we first cover the _basic_ approach, which is the easiest to implement.  Then, we'll describe _advanced_ options, which give you more control over the system behavior, but require you to write more code.
+
+
+### Prerequisites
+
+1. Read [Concepts/Terminology](./Concepts.html) and [Architecture](./Architecture.html)
+2. Read the [Quickstart guide](./Quickstart.html) to learn how Helix models and manages a cluster
+3. Install Helix source.  See: [Quickstart](./Quickstart.html) for the steps.
+
+### Tutorial Outline
+
+1. [Participant](./tutorial_participant.html)
+2. [Spectator](./tutorial_spectator.html)
+3. [Controller](./tutorial_controller.html)
+4. [Rebalancing Algorithms](./tutorial_rebalance.html)
+5. [User-Defined Rebalancing](./tutorial_user_def_rebalancer.html)
+6. [State Machines](./tutorial_state.html)
+7. [Messaging](./tutorial_messaging.html)
+8. [Customized health check](./tutorial_health.html)
+9. [Throttling](./tutorial_throttling.html)
+10. [Application Property Store](./tutorial_propstore.html)
+11. [Logical Accessors](./tutorial_accessors.html)
+12. [Admin Interface](./tutorial_admin.html)
+13. [YAML Cluster Setup](./tutorial_yaml.html)
+
+### Preliminaries
+
+First, we need to set up the system. Let\'s walk through the steps in building a distributed system using Helix. We will show how to do this using both the Java admin interface and the [cluster accessor](./tutorial_accessors.html) interface. You can choose either interface depending on which most closely matches your needs.
+
+### Start Zookeeper
+
+This starts ZooKeeper in standalone mode. For production deployment, see [Apache Zookeeper](http://zookeeper.apache.org) for instructions.
+
+```
+    ./start-standalone-zookeeper.sh 2199 &
+```
+
+### Create a cluster
+
+Creating a cluster defines the cluster in the appropriate znodes on ZooKeeper.
+
+Using the Java accessor API:
+
+```
+// Note: ZK_ADDRESS is the host:port of Zookeeper
+String ZK_ADDRESS = "localhost:2199";
+HelixConnection connection = new ZKHelixConnection(ZK_ADDRESS);
+
+ClusterId clusterId = ClusterId.from("helix-demo");
+ClusterAccessor clusterAccessor = connection.createClusterAccessor(clusterId);
+ClusterConfig clusterConfig = new ClusterConfig.Builder(clusterId).build();
+clusterAccessor.createCluster(clusterConfig);
+```
+
+OR
+
+Using the HelixAdmin Java interface:
+
+```
+// Create setup tool instance
+// Note: ZK_ADDRESS is the host:port of Zookeeper
+String ZK_ADDRESS = "localhost:2199";
+HelixAdmin admin = new ZKHelixAdmin(ZK_ADDRESS);
+
+String CLUSTER_NAME = "helix-demo";
+//Create cluster namespace in zookeeper
+admin.addCluster(CLUSTER_NAME);
+```
+
+OR
+
+Using the command-line interface:
+
+```
+    ./helix-admin.sh --zkSvr localhost:2199 --addCluster helix-demo 
+```
+
+
+### Configure the nodes of the cluster
+
+First we\'ll add new nodes to the cluster, then configure the nodes in the cluster. Each node in the cluster must be uniquely identifiable. 
+The most commonly used convention is hostname_port.
+
+```
+int NUM_NODES = 2;
+String hosts[] = new String[]{"localhost","localhost"};
+int ports[] = new int[]{7000,7001};
+for (int i = 0; i < NUM_NODES; i++)
+{
+  ParticipantId participantId = ParticipantId.from(hosts[i] + "_" + ports[i]);
+
+  // set additional configuration for the participant; these can be accessed during node start up
+  UserConfig userConfig = new UserConfig(Scope.participant(participantId));
+  userConfig.setSimpleField("key", "value");
+
+  // configure and add the participant
+  ParticipantConfig participantConfig = new ParticipantConfig.Builder(participantId)
+      .hostName(hosts[i]).port(ports[i]).enabled(true).userConfig(userConfig).build();
+  clusterAccessor.addParticipantToCluster(participantConfig);
+}
+```
+
+OR
+
+Using the HelixAdmin Java interface:
+
+```
+String CLUSTER_NAME = "helix-demo";
+int NUM_NODES = 2;
+String hosts[] = new String[]{"localhost","localhost"};
+String ports[] = new String[]{"7000","7001"};
+for (int i = 0; i < NUM_NODES; i++)
+{
+  InstanceConfig instanceConfig = new InstanceConfig(hosts[i] + "_" + ports[i]);
+  instanceConfig.setHostName(hosts[i]);
+  instanceConfig.setPort(ports[i]);
+  instanceConfig.setInstanceEnabled(true);
+
+  //Add additional system specific configuration if needed. These can be accessed during the node start up.
+  instanceConfig.getRecord().setSimpleField("key", "value");
+  admin.addInstance(CLUSTER_NAME, instanceConfig);
+}
+```
+
+### Configure the resource
+
+A _resource_ represents the actual task performed by the nodes. It can be a database, index, topic, queue or any other processing entity.
+A _resource_ can be divided into many sub-parts known as _partitions_.
+
+
+#### Define the _state model_ and _constraints_
+
+For scalability and fault tolerance, each partition can have one or more replicas. 
+The _state model_ allows one to declare the system behavior by first enumerating the various STATES, and the TRANSITIONS between them.
+A simple model is ONLINE-OFFLINE where ONLINE means the task is active and OFFLINE means it\'s not active.
+You can also specify how many replicas must be in each state; these requirements are known as _constraints_.
+For example, in a search system, one might need more than one node serving the same index to handle the load.
+
+The allowed states: 
+
+* MASTER
+* SLAVE
+* OFFLINE
+
+The allowed transitions: 
+
+* OFFLINE to SLAVE
+* SLAVE to OFFLINE
+* SLAVE to MASTER
+* MASTER to SLAVE
+
+The constraints:
+
+* no more than 1 MASTER per partition
+* the rest of the replicas should be slaves
+
+The following snippet shows how to declare the _state model_ and _constraints_ for the MASTER-SLAVE model.
+
+```
+StateModelDefinition.Builder builder = new StateModelDefinition.Builder(STATE_MODEL_NAME);
+
+// Add states and their rank to indicate priority. A lower rank corresponds to a higher priority
+builder.addState(MASTER, 1);
+builder.addState(SLAVE, 2);
+builder.addState(OFFLINE);
+
+// Set the initial state when the node starts
+builder.initialState(OFFLINE);
+
+// Add transitions between the states.
+builder.addTransition(OFFLINE, SLAVE);
+builder.addTransition(SLAVE, OFFLINE);
+builder.addTransition(SLAVE, MASTER);
+builder.addTransition(MASTER, SLAVE);
+
+// set constraints on states.
+
+// static constraint: upper bound of 1 MASTER
+builder.upperBound(MASTER, 1);
+
+// dynamic constraint: R means it should be derived based on the replication factor for the cluster
+// this allows a different replication factor for each resource without 
+// having to define a new state model
+//
+builder.dynamicUpperBound(SLAVE, "R");
+StateModelDefinition stateModelDefinition = builder.build();
+```
+
+Then, add the state model definition:
+
+```
+clusterAccessor.addStateModelDefinitionToCluster(stateModelDefinition);
+```
+
+OR
+
+```
+admin.addStateModelDef(CLUSTER_NAME, STATE_MODEL_NAME, stateModelDefinition);
+```
+
+#### Assigning partitions to nodes
+
+The final goal of Helix is to ensure that the constraints on the state model are satisfied. 
+Helix does this by assigning a STATE to a partition (such as MASTER, SLAVE), and placing it on a particular node.
+
+There are 3 assignment modes in which Helix can operate:
+
+* FULL_AUTO: Helix decides the placement and state of a partition.
+* SEMI_AUTO: Application decides the placement but Helix decides the state of a partition.
+* CUSTOMIZED: Application controls the placement and state of a partition.
+
+For more info on the assignment modes, see the [Rebalancing Algorithms](./tutorial_rebalance.html) section of the tutorial.
+
+Here is an example of adding the resource in SEMI_AUTO mode (i.e. locations of partitions are specified a priori):
+
+```
+int NUM_PARTITIONS = 6;
+int NUM_REPLICAS = 2;
+ResourceId resourceId = ResourceId.from("MyDB");
+
+SemiAutoRebalancerContext context = new SemiAutoRebalancerContext.Builder(resourceId)
+  .replicaCount(NUM_REPLICAS).addPartitions(NUM_PARTITIONS)
+  .stateModelDefId(stateModelDefinition.getStateModelDefId())
+  .addPreferenceList(partition1Id, preferenceList) // preferred locations of each partition
+  // add other preference lists per partition
+  .build();
+
+// or add all preference lists at once if desired (map of PartitionId to List of ParticipantId)
+context.setPreferenceLists(preferenceLists);
+
+// or generate a default set of preference lists given the set of all participants
+context.generateDefaultConfiguration(stateModelDefinition, participantIdSet);
+```
+
+OR
+
+```
+String RESOURCE_NAME = "MyDB";
+int NUM_PARTITIONS = 6;
+String MODE = "SEMI_AUTO";
+int NUM_REPLICAS = 2;
+
+admin.addResource(CLUSTER_NAME, RESOURCE_NAME, NUM_PARTITIONS, STATE_MODEL_NAME, MODE);
+
+// specify the preference lists yourself
+IdealState idealState = admin.getResourceIdealState(CLUSTER_NAME, RESOURCE_NAME);
+idealState.setPreferenceList(partitionId, preferenceList); // preferred locations of each partition
+// add other preference lists per partition
+
+// or add all preference lists at once if desired
+idealState.getRecord().setListFields(preferenceLists);
+admin.setResourceIdealState(CLUSTER_NAME, RESOURCE_NAME, idealState);
+
+// or generate a default set of preference lists 
+admin.rebalance(CLUSTER_NAME, RESOURCE_NAME, NUM_REPLICAS);
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/UseCases.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/UseCases.md b/site-releases/0.7.0-incubating/src/site/markdown/UseCases.md
new file mode 100644
index 0000000..001b012
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/UseCases.md
@@ -0,0 +1,113 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Use Cases</title>
+</head>
+
+
+# Use cases at LinkedIn
+
+At LinkedIn, the Helix framework is used to manage 3 distributed data systems that are quite different from each other.
+
+* Espresso
+* Databus
+* Search As A Service
+
+## Espresso
+
+Espresso is a distributed, timeline-consistent, scalable document store that supports local secondary indexing and local transactions.
+Espresso databases are horizontally partitioned into a number of partitions, with each partition having a certain number of replicas 
+distributed across the storage nodes.
+Espresso designates one replica of each partition as master and the rest as slaves; only one master may exist for each partition at any time.
+Espresso enforces timeline consistency where only the master of a partition can accept writes to its records, and all slaves receive and 
+apply the same writes through a replication stream. 
+For load balancing, both master and slave partitions are assigned evenly across all storage nodes. 
+For fault tolerance, it adds the constraint that no two replicas of the same partition may be located on the same node.
+
+### State model
+Espresso follows a Master-Slave state model. A replica can be in the Offline, Slave, or Master state.
+The state machine table describes the next state given the current state (rows) and the final state (columns):
+
+```
+          OFFLINE  | SLAVE  |  MASTER  
+         _____________________________
+        |          |        |         |
+OFFLINE |   N/A    | SLAVE  | SLAVE   |
+        |__________|________|_________|
+        |          |        |         |
+SLAVE   |  OFFLINE |   N/A  | MASTER  |
+        |__________|________|_________|
+        |          |        |         |
+MASTER  | SLAVE    | SLAVE  |   N/A   |
+        |__________|________|_________|
+
+```
+
+### Constraints
+* Max number of replicas in Master state: 1
+* Execution mode AUTO, i.e. on node failure no new replicas will be created; only the state of the remaining replicas will be changed.
+* The number of mastered partitions on each node must be approximately the same.
+* The above constraints must be satisfied when a node fails or a new node is added.
+* When new nodes are added, the number of partitions moved must be minimized.
+* When new nodes are added, the max number of OFFLINE-SLAVE transitions that can happen concurrently on a new node is X.
+
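+The first constraint maps directly onto a Helix state model definition, while the placement and throttling constraints are enforced by the Helix controller. A rough sketch of how the state model could be declared, reusing the tutorial's `StateModelDefinition.Builder` API (state names are illustrative):
+
+```
+StateModelDefinition.Builder builder = new StateModelDefinition.Builder("MasterSlave");
+builder.addState("MASTER", 1);  // rank 1: highest priority
+builder.addState("SLAVE", 2);
+builder.addState("OFFLINE");
+builder.initialState("OFFLINE");
+builder.addTransition("OFFLINE", "SLAVE");
+builder.addTransition("SLAVE", "MASTER");
+builder.addTransition("MASTER", "SLAVE");
+builder.addTransition("SLAVE", "OFFLINE");
+builder.upperBound("MASTER", 1);          // max 1 replica in MASTER state
+builder.dynamicUpperBound("SLAVE", "R");  // remaining replicas are slaves
+StateModelDefinition masterSlave = builder.build();
+```
+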
+## Databus
+
+Databus is a change data capture (CDC) system that provides a common pipeline for transporting events 
+from LinkedIn primary databases to caches within various applications.
+Databus deploys a cluster of relays that pull the change log from multiple databases and 
+let consumers subscribe to the change log stream. Each Databus relay connects to one or more database servers and 
+hosts a certain subset of databases (and partitions) from those database servers. 
+
+For a large partitioned database (e.g. Espresso), the change log is consumed by a bank of consumers. 
+Each databus partition is assigned to a consumer such that partitions are evenly distributed across consumers and each partition is
+assigned to exactly one consumer at a time. The set of consumers may grow over time, and consumers may leave the group due to planned or unplanned 
+outages. In these cases, partitions must be reassigned, while maintaining balance and the single consumer-per-partition invariant.
+
+### State model
+Databus consumers follow a simple Offline-Online state model.
+The state machine table describes the next state given the current state (rows) and the final state (columns):
+
+```
+          OFFLINE  | ONLINE |
+         ___________________|
+        |          |        |
+OFFLINE |   N/A    | ONLINE |
+        |__________|________|
+        |          |        |
+ONLINE  |  OFFLINE |   N/A  |
+        |__________|________|
+```
+
+
+## Search As A Service
+
+LinkedIn's Search-as-a-service lets internal customers define custom indexes on a chosen dataset
+and then makes those indexes searchable via a service API. The index service runs on a cluster of machines. 
+The index is broken into partitions and each partition has a configured number of replicas.
+Each cluster server runs an instance of the Sensei system (an online index store) and hosts index partitions. 
+Each new indexing service gets assigned to a set of servers, and the partition replicas must be evenly distributed across those servers.
+
+### State model
+![Helix Design](images/bootstrap_statemodel.gif) 
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/index.md b/site-releases/0.7.0-incubating/src/site/markdown/index.md
new file mode 100644
index 0000000..f983273
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/index.md
@@ -0,0 +1,60 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Home</title>
+</head>
+
+Navigating the Documentation
+----------------------------
+
+### Conceptual Understanding
+
+[Concepts / Terminology](./Concepts.html)
+
+[Architecture](./Architecture.html)
+
+### Hands-on Helix
+
+[Getting Helix](./Building.html)
+
+[Quickstart](./Quickstart.html)
+
+[Tutorial](./Tutorial.html)
+
+[Javadocs](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/index.html)
+
+### Recipes
+
+[Distributed lock manager](./recipes/lock_manager.html)
+
+[Rabbit MQ consumer group](./recipes/rabbitmq_consumer_group.html)
+
+[Rsync replicated file store](./recipes/rsync_replicated_file_store.html)
+
+[Service discovery](./recipes/service_discovery.html)
+
+[Distributed Task DAG Execution](./recipes/task_dag_execution.html)
+
+[User-Defined Rebalancer Example](./recipes/user_def_rebalancer.html)
+
+### Download
+
+[0.7.0-incubating](./download.html)
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/recipes/lock_manager.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/recipes/lock_manager.md b/site-releases/0.7.0-incubating/src/site/markdown/recipes/lock_manager.md
new file mode 100644
index 0000000..252ace7
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/recipes/lock_manager.md
@@ -0,0 +1,253 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+Distributed lock manager
+------------------------
+Distributed locks are used to synchronize access to shared resources. Most applications use ZooKeeper to model distributed locks.
+
+The simplest way to model a lock using ZooKeeper (see the ZooKeeper leader election recipe for an exact and more advanced solution):
+
+* Each process tries to create an ephemeral node.
+* If it can successfully create the node, it acquires the lock.
+* Else, it watches the znode and tries to acquire the lock again if the current lock holder disappears.
+
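+As an illustrative (non-production) sketch of this pattern using the raw ZooKeeper API:
+
+```
+import org.apache.zookeeper.*;
+
+// Naive lock: whoever creates the ephemeral znode holds the lock. When the
+// holder's session ends, ZooKeeper deletes the node and the watch fires,
+// letting waiters retry. Error handling omitted for brevity.
+boolean tryLock(ZooKeeper zk, String lockPath, Watcher retryWatcher) throws Exception {
+  try {
+    zk.create(lockPath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+    return true; // we created the node, so we hold the lock
+  } catch (KeeperException.NodeExistsException e) {
+    // watch the node; if it is already gone, the lock was just released, so retry now
+    if (zk.exists(lockPath, retryWatcher) == null) {
+      return tryLock(zk, lockPath, retryWatcher);
+    }
+    return false;
+  }
+}
+```
+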
+This is good enough if there is only one lock. But in practice, an application will need many such locks, and distributing and managing the locks among different processes becomes challenging. Extending such a solution to many locks will result in:
+
+* Uneven distribution of locks among nodes; the node that starts first will acquire all the locks, and nodes that start later will be idle.
+* When a node fails, how the locks will be distributed among the remaining nodes is not predictable.
+* When new nodes are added, the current nodes don't relinquish any locks, so the new nodes acquire none.
+
+In other words, we want a system that satisfies the following requirements:
+
+* Distribute locks evenly among all nodes to get better hardware utilization
+* If a node fails, the locks that were acquired by that node should be evenly distributed among the other nodes
+* If nodes are added, locks must be evenly re-distributed among all nodes
+
+Helix provides a simple and elegant solution to this problem: simply specify the number of locks and Helix will ensure that the above constraints are satisfied.
+
+To quickly see this working, run the lock-manager-demo script, where 12 locks are evenly distributed among three nodes; when a node fails, the locks get re-distributed among the remaining two nodes. Note that Helix does not re-shuffle the locks completely; instead, it evenly distributes only the locks relinquished by the dead node among the 2 remaining nodes.
+
+----------------------------------------------------------------------------------------
+
+#### Short version
+This version starts multiple threads within the same process to simulate a multi-node deployment. Try the long version to get a better idea of how it works.
+ 
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+cd recipes/distributed-lock-manager/target/distributed-lock-manager-pkg/bin
+chmod +x *
+./lock-manager-demo
+```
+
+##### Output
+
+```
+./lock-manager-demo 
+STARTING localhost_12000
+STARTING localhost_12002
+STARTING localhost_12001
+STARTED localhost_12000
+STARTED localhost_12002
+STARTED localhost_12001
+localhost_12001 acquired lock:lock-group_3
+localhost_12000 acquired lock:lock-group_8
+localhost_12001 acquired lock:lock-group_2
+localhost_12001 acquired lock:lock-group_4
+localhost_12002 acquired lock:lock-group_1
+localhost_12002 acquired lock:lock-group_10
+localhost_12000 acquired lock:lock-group_7
+localhost_12001 acquired lock:lock-group_5
+localhost_12002 acquired lock:lock-group_11
+localhost_12000 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_0
+localhost_12000 acquired lock:lock-group_9
+lockName    acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10    localhost_12002
+lock-group_11    localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12000
+lock-group_7    localhost_12000
+lock-group_8    localhost_12000
+lock-group_9    localhost_12000
+Stopping localhost_12000
+localhost_12000 Interrupted
+localhost_12001 acquired lock:lock-group_9
+localhost_12001 acquired lock:lock-group_8
+localhost_12002 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_7
+lockName    acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10    localhost_12002
+lock-group_11    localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12002
+lock-group_7    localhost_12002
+lock-group_8    localhost_12001
+lock-group_9    localhost_12001
+
+```
+
+----------------------------------------------------------------------------------------
+
+#### Long version
+This provides more details on how to set up the cluster and where to plug in application code.
+
+##### Start ZooKeeper
+
+```
+./start-standalone-zookeeper 2199
+```
+
+##### Create a cluster
+
+```
+./helix-admin --zkSvr localhost:2199 --addCluster lock-manager-demo
+```
+
+##### Create a lock group
+
+Create a lock group and specify the number of locks in the lock group. 
+
+```
+./helix-admin --zkSvr localhost:2199  --addResource lock-manager-demo lock-group 6 OnlineOffline FULL_AUTO
+```
+
+##### Start the nodes
+
+Create a Lock class that handles the callbacks. 
+
+```
+
+public class Lock extends StateModel
+{
+  private String lockName;
+
+  public Lock(String lockName)
+  {
+    this.lockName = lockName;
+  }
+
+  // invoked when this node is asked to acquire the lock (OFFLINE -> ONLINE)
+  @Transition(to = "ONLINE", from = "OFFLINE")
+  public void lock(Message m, NotificationContext context)
+  {
+    System.out.println(" acquired lock:" + lockName);
+  }
+
+  // invoked when this node is asked to release the lock (ONLINE -> OFFLINE)
+  @Transition(to = "OFFLINE", from = "ONLINE")
+  public void release(Message m, NotificationContext context)
+  {
+    System.out.println(" releasing lock:" + lockName);
+  }
+}
+
+```
+
+A LockFactory that creates the lock:
+ 
+```
+public class LockFactory extends StateModelFactory<Lock>{
+    
+    /* Instantiates the lock handler, one per lockName*/
+    public Lock create(String lockName)
+    {
+        return new Lock(lockName);
+    }   
+}
+```
+
+At node start-up, simply join the cluster, and Helix will invoke the appropriate callbacks on the Lock instance. One can start any number of nodes; Helix detects that a new node has joined the cluster and re-distributes the locks automatically.
+
+```
+public class LockProcess
+{
+  public static void main(String[] args) throws Exception
+  {
+    String zkAddress = "localhost:2199";
+    String clusterName = "lock-manager-demo";
+    // Give a unique id to each process; the most commonly used format is hostname_port
+    String instanceName = "localhost_12000";
+    ZKHelixAdmin helixAdmin = new ZKHelixAdmin(zkAddress);
+    // configure the instance and provide some metadata
+    InstanceConfig config = new InstanceConfig(instanceName);
+    config.setHostName("localhost");
+    config.setPort("12000");
+    helixAdmin.addInstance(clusterName, config);
+    // join the cluster
+    HelixManager manager;
+    manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                    instanceName,
+                                                    InstanceType.PARTICIPANT,
+                                                    zkAddress);
+    manager.getStateMachineEngine().registerStateModelFactory("OnlineOffline", new LockFactory());
+    manager.connect();
+    Thread.currentThread().join();
+  }
+}
+```
+
+##### Start the controller
+
+The controller can be started either as a separate process or embedded within each node process.
+
+###### Separate process
+This is recommended when the number of nodes in the cluster is greater than 100. For fault tolerance, you can run multiple controllers on different boxes.
+
+```
+./run-helix-controller --zkSvr localhost:2199 --cluster lock-manager-demo 2>&1 > /tmp/controller.log &
+```
+
+###### Embedded within the node process
+This is recommended when the number of nodes in the cluster is less than 100. To start a controller from each process, simply add the following lines to LockProcess:
+
+```
+public class LockProcess
+{
+  public static void main(String[] args) throws Exception
+  {
+    String zkAddress = "localhost:2199";
+    String clusterName = "lock-manager-demo";
+    // ... set up and connect the participant as shown above ...
+    manager.connect();
+    HelixManager controller;
+    controller = HelixControllerMain.startHelixController(zkAddress,
+                                                          clusterName,
+                                                          "controller",
+                                                          HelixControllerMain.STANDALONE);
+    Thread.currentThread().join();
+  }
+}
+```
+
+----------------------------------------------------------------------------------------
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md b/site-releases/0.7.0-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
new file mode 100644
index 0000000..9edc2cb
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
@@ -0,0 +1,227 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+RabbitMQ Consumer Group
+=======================
+
+[RabbitMQ](http://www.rabbitmq.com/) is well-known open source software that provides robust messaging for applications.
+
+One of the commonly implemented recipes using this software is a work queue.  http://www.rabbitmq.com/tutorials/tutorial-four-java.html describes the use case where:
+
+* A producer sends a message with a routing key.
+* The message is routed to the queue whose binding key exactly matches the routing key of the message.
+* There are multiple consumers, and each consumer is interested in processing only a subset of the messages by binding to the keys of interest.
+
+The example provided [here](http://www.rabbitmq.com/tutorials/tutorial-four-java.html) describes how multiple consumers can be started to process all the messages.
+
+While this works, in production systems one needs the following:
+
+* Ability to handle failures: when a consumer fails, another consumer must be started, or the other consumers must start processing the messages that would have been processed by the failed consumer.
+* When the existing consumers cannot keep up with the task generation rate, new consumers will be added. The tasks must be redistributed among all the consumers.
+
+In this recipe, we demonstrate handling of consumer failures and new consumer additions using Helix.
+
+Mapping this use case to Helix is pretty easy, as the binding key/routing key is equivalent to a partition.
+
+Let's take an example. Let's say the queue has 6 partitions, and we have 2 consumers to process all the queues.
+What we want is all 6 queues to be evenly divided among the 2 consumers, so each consumer processes tasks from 3 queues.
+Eventually, when the system scales, we add a third consumer to keep up; each consumer then processes tasks from only 2 queues.
+Now let's say that a consumer fails, which reduces the number of active consumers back to 2. This means each remaining consumer must again process 3 queues.
+
+We showcase how such a dynamic application can be developed using Helix. Even though we use RabbitMQ as the pub/sub system, one can extend this solution to other pub/sub systems.
+
+Try it
+======
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+export HELIX_PKG_ROOT=`pwd`/helix-core/target/helix-core-pkg
+export HELIX_RABBITMQ_ROOT=`pwd`/recipes/rabbitmq-consumer-group
+chmod +x $HELIX_PKG_ROOT/bin/*
+chmod +x $HELIX_RABBITMQ_ROOT/bin/*
+```
+
+
+Install RabbitMQ
+----------------
+
+Setting up RabbitMQ on a local box is straightforward. You can find the instructions here:
+http://www.rabbitmq.com/download.html
+
+Start ZK
+--------
+Start ZooKeeper on port 2199:
+
+```
+$HELIX_PKG_ROOT/bin/start-standalone-zookeeper 2199
+```
+
+Setup the consumer group cluster
+--------------------------------
+This will setup the cluster by creating a "rabbitmq-consumer-group" cluster and adds a "topic" with "6" queues. 
+
+```
+$HELIX_RABBITMQ_ROOT/bin/setup-cluster.sh localhost:2199 
+```
+
+Add consumers
+-------------
+Start 2 consumers in 2 different terminals. Each consumer is given a unique id.
+
+```
+# start-consumer.sh zookeeperAddress (e.g. localhost:2181) consumerId rabbitmqServer (e.g. localhost)
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 0 localhost 
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 1 localhost 
+
+```
+
+Start the Helix controller
+--------------------------
+Now start a Helix controller that manages the "rabbitmq-consumer-group" cluster.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/start-cluster-manager.sh localhost:2199
+```
+
+Send messages to the Topic
+--------------------------
+
+Start sending messages to the topic. This script randomly selects a routing key (1-6) and sends the message to the topic.
+Based on the key, messages get routed to the appropriate queue.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 20
+```
+
+After running this, you should see all 20 messages being processed by 2 consumers. 
+
+Add another consumer
+--------------------
+Once a new consumer is started, Helix detects it. In order to balance the load among 3 consumers, it deallocates 1 partition from each of the existing consumers and allocates them to the new consumer. We see that
+each consumer is now processing only 2 queues.
+Helix makes sure that the old nodes are asked to stop consuming before the new consumer is asked to start consuming for a given partition. However, the transitions for each partition can happen in parallel.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 2 localhost
+```
+
+Send messages again to the topic.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 100
+```
+
+You should see that messages are now received by all 3 consumers.
+
+Stop a consumer
+---------------
+In any terminal, press CTRL^C and notice that Helix detects the consumer failure and distributes the 2 partitions that were processed by the failed consumer to the remaining 2 active consumers.
+
+
+How does it work
+================
+
+Find the entire code [here](https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=tree;f=recipes/rabbitmq-consumer-group/src/main/java/org/apache/helix/recipes/rabbitmq). 
+ 
+Cluster setup
+-------------
+This step creates a znode on ZooKeeper for the cluster and adds the state model. We use the OnlineOffline state model, since there is no need for other states: the consumer is either processing a queue or it is not.
+
+It creates a resource called "rabbitmq-consumer-group" with 6 partitions. The execution mode is set to FULL_AUTO. This means that Helix controls the assignment of partitions to consumers and automatically distributes the partitions evenly among the active consumers. When a consumer is added or removed, it ensures that a minimum number of partitions are shuffled.
+
+```
+      zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
+          ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
+      ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
+      
+      // add cluster
+      admin.addCluster(clusterName, true);
+
+      // add state model definition
+      StateModelConfigGenerator generator = new StateModelConfigGenerator();
+      admin.addStateModelDef(clusterName, "OnlineOffline",
+          new StateModelDefinition(generator.generateConfigForOnlineOffline()));
+
+      // add resource "topic" which has 6 partitions
+      String resourceName = "rabbitmq-consumer-group";
+      admin.addResource(clusterName, resourceName, 6, "OnlineOffline", "FULL_AUTO");
+```
+
+Starting the consumers
+----------------------
+The only things a consumer needs to know are the ZooKeeper address, the cluster name, and its consumer id. It does not need to know anything else.
+
+```
+   _manager =
+          HelixManagerFactory.getZKHelixManager(_clusterName,
+                                                _consumerId,
+                                                InstanceType.PARTICIPANT,
+                                                _zkAddr);
+
+      StateMachineEngine stateMach = _manager.getStateMachineEngine();
+      ConsumerStateModelFactory modelFactory =
+          new ConsumerStateModelFactory(_consumerId, _mqServer);
+      stateMach.registerStateModelFactory("OnlineOffline", modelFactory);
+
+      _manager.connect();
+
+```
+
+Once the consumer has registered the state model and the controller is started, the consumer starts getting callbacks (onBecomeOnlineFromOffline) for the partitions it needs to host. All it needs to do as part of the callback is to start consuming messages from the appropriate queue. Similarly, when the controller deallocates a partition from a consumer, it fires onBecomeOfflineFromOnline for the same partition.
+As a part of this transition, the consumer will stop consuming from that queue.
+
+```
+ @Transition(to = "ONLINE", from = "OFFLINE")
+  public void onBecomeOnlineFromOffline(Message message, NotificationContext context)
+  {
+    LOG.debug(_consumerId + " becomes ONLINE from OFFLINE for " + _partition);
+
+    if (_thread == null)
+    {
+      LOG.debug("Starting ConsumerThread for " + _partition + "...");
+      _thread = new ConsumerThread(_partition, _mqServer, _consumerId);
+      _thread.start();
+      LOG.debug("Starting ConsumerThread for " + _partition + " done");
+
+    }
+  }
+
+  @Transition(to = "OFFLINE", from = "ONLINE")
+  public void onBecomeOfflineFromOnline(Message message, NotificationContext context)
+      throws InterruptedException
+  {
+    LOG.debug(_consumerId + " becomes OFFLINE from ONLINE for " + _partition);
+
+    if (_thread != null)
+    {
+      LOG.debug("Stopping " + _consumerId + " for " + _partition + "...");
+
+      _thread.interrupt();
+      _thread.join(2000);
+      _thread = null;
+      LOG.debug("Stopping " +  _consumerId + " for " + _partition + " done");
+
+    }
+  }
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md b/site-releases/0.7.0-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
new file mode 100644
index 0000000..f8a74a0
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
@@ -0,0 +1,165 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Near real time rsync replicated file system
+===========================================
+
+Quickdemo
+---------
+
+* This demo starts 3 instances with ids ```localhost_12001, localhost_12002, localhost_12003```
+* Each instance stores its files under ```/tmp/<id>/filestore```
+* ```localhost_12001``` is designated as the master, and ```localhost_12002``` and ```localhost_12003``` are the slaves.
+* Files written to the master are replicated to the slaves automatically. In this demo, a.txt and b.txt are written to ```/tmp/localhost_12001/filestore``` and they get replicated to the other folders.
+* When the master is stopped, ```localhost_12002``` is promoted to master.
+* The other slave, ```localhost_12003```, stops replicating from ```localhost_12001``` and starts replicating from the new master, ```localhost_12002```
+* Files written to the new master ```localhost_12002``` are replicated to ```localhost_12003```
+* In the end state of this quick demo, ```localhost_12002``` is the master and ```localhost_12003``` is the slave. Manually create files under ```/tmp/localhost_12002/filestore``` and see that they appear in ```/tmp/localhost_12003/filestore```
+* Ignore the interrupted exceptions on the console :-).
+
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix/recipes/rsync-replicated-file-system/
+mvn clean install package -DskipTests
+cd target/rsync-replicated-file-system-pkg/bin
+chmod +x *
+./quickdemo
+
+```
+
+Overview
+--------
+
+There are many applications that require storage for a large number of relatively small data files. Examples include media stores for small videos, images, mail attachments, etc. Each of these objects is typically kilobytes in size, often no larger than a few megabytes. An additional distinguishing feature of these use cases is that files are typically only added or deleted, rarely updated. When there are updates, they are rare and do not have any concurrency requirements.
+
+These requirements are much simpler than what general-purpose distributed file systems have to satisfy, including concurrent access to files, random access for reads and updates, POSIX compliance, etc. To satisfy those requirements, general DFSs are also quite complex, and expensive to build and maintain.
+ 
+A different implementation of a distributed file system is HDFS, which is inspired by Google's GFS. HDFS is one of the most widely used distributed file systems and forms the main data storage platform for Hadoop. It is primarily aimed at processing very large data sets and distributes files across a cluster of commodity servers by splitting files into fixed-size chunks. HDFS is not particularly well suited for storing a very large number of relatively tiny files.
+
+### File Store
+
+It's possible to build a vastly simpler system for the class of applications that have the simpler requirements we have pointed out:
+
+* Large number of files but each file is relatively small.
+* Access is limited to create, delete and get entire files.
+* No updates to files that are already created (or it's feasible to delete the old file and create a new one).
+ 
+
+We call this system a Partitioned File Store (PFS) to distinguish it from other distributed file systems. This system needs to provide the following features:
+
+* CRD (create, read, delete) access to a large number of small files
+* Scalability: Files should be distributed across a large number of commodity servers based on the storage requirement.
+* Fault-tolerance: Each file should be replicated on multiple servers so that individual server failures do not reduce availability.
+* Elasticity: It should be possible to add capacity to the cluster easily.
+ 
+
+Apache Helix is a generic cluster management framework that makes it very easy to provide the scalability, fault-tolerance and elasticity features. 
+Rsync can be easily used as a replication channel between servers so that each file gets replicated on multiple servers.
+
+Design
+------
+
+At a high level:
+
+* Partition the file system based on the file name.
+* At any time, a single writer can write; we call this the master.
+* For redundancy, we need to have additional replicas, called slaves. Slaves can optionally serve reads.
+* A slave replicates data from the master.
+* When a master fails, a slave gets promoted to master.
+
+### Transaction log
+
+Every write on the master results in the creation/deletion of one or more files. In order to maintain timeline consistency, slaves need to apply the changes in the same order.
+To facilitate this, the master logs each transaction in a file, and each transaction is associated with a 64-bit id in which the 32 LSBs represent a sequence number and the 32 MSBs represent the generation number.
+The sequence number is incremented on every transaction, and the generation is incremented when a new master is elected.
+
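+A minimal sketch (illustrative only, not the recipe's actual code) of packing and unpacking such a transaction id:
+
+```
+// generation in the 32 MSBs, sequence in the 32 LSBs
+long makeTxnId(int generation, int sequence) {
+  return ((long) generation << 32) | (sequence & 0xFFFFFFFFL);
+}
+
+int generationOf(long txnId) { return (int) (txnId >>> 32); }
+int sequenceOf(long txnId) { return (int) txnId; }
+```
+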
+### Replication
+
+Replication is required for the slaves to keep up with the changes on the master. Every time the slave applies a change, it checkpoints the last applied transaction id.
+During restarts, this allows the slave to pull changes starting from the last checkpointed id. Similar to the master, the slave logs each transaction to its transaction log, but instead of generating a new transaction id, it uses the same id generated by the master.
+
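+A sketch of what such a checkpoint could look like (hypothetical helper, not the recipe's actual code):
+
+```
+import java.nio.charset.StandardCharsets;
+import java.nio.file.*;
+
+// Persist the last applied transaction id so the slave can resume from it
+// after a restart. The atomic rename avoids leaving a half-written
+// checkpoint if the process crashes mid-write.
+void saveCheckpoint(Path checkpointDir, long txnId) throws Exception {
+  Path tmp = checkpointDir.resolve("checkpoint.tmp");
+  Files.write(tmp, Long.toString(txnId).getBytes(StandardCharsets.UTF_8));
+  Files.move(tmp, checkpointDir.resolve("checkpoint"),
+      StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING);
+}
+
+long loadCheckpoint(Path checkpointDir) throws Exception {
+  byte[] bytes = Files.readAllBytes(checkpointDir.resolve("checkpoint"));
+  return Long.parseLong(new String(bytes, StandardCharsets.UTF_8).trim());
+}
+```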
+
+### Fail over
+
+When a master fails, a slave will be promoted to master. If the previous master node is reachable, then the new master will flush all the
+changes from the previous master before taking up mastership. The new master records the end transaction id of the current generation and then starts a new generation
+with the sequence starting from 1. After this, the master begins accepting writes.
+
+
+![Partitioned File Store](../images/PFS-Generic.png)
+
+
+
+Rsync based solution
+-------------------
+
+![Rsync based File Store](../images/RSYNC_BASED_PFS.png)
+
+
+This application demonstrates a file store that uses rsync as the replication mechanism. One can envision a similar system where, instead of using rsync,
+one implements a custom solution to notify the slave of the changes and provides an API to pull the changed files.
+
+#### Concepts
+* file_store_dir: Root directory for the actual data files
+* change_log_dir: The transaction logs are generated under this folder.
+* check_point_dir: The slave stores the checkpoints (last processed transaction) here.
+
+#### Master
+* File server: This component supports file uploads and downloads, and writes the files to ```file_store_dir```. This is not included in this application; the idea is that most applications have different ways of implementing this component, with some business logic associated with it. It is not hard to come up with such a component if needed.
+* File store watcher: This component watches the ```file_store_dir``` directory on the local file system for any changes and notifies the registered listeners of the changes.
+* Change log generator: This registers as a listener of the file store watcher and, on each notification, logs the changes into a file under ```change_log_dir```.
+
+#### Slave
+* File server: This component on the slave will only support reads.
+* Cluster state observer: The slave observes the cluster state and is able to know who the current master is.
+* Replicator: This has three subcomponents
+    - Periodic rsync of change log: This is a background process that periodically rsyncs the ```change_log_dir``` of the master to its local directory
+    - Change log watcher: This watches the ```change_log_dir``` for changes and notifies the registered listeners of the change
+    - On-demand rsync invoker: This is registered as a listener of the change log watcher and, on every change, invokes rsync to sync only the changed file.
+
+
+#### Coordination
+
+The coordination between nodes is done by Helix. Helix does the partition management and assigns the partitions to multiple nodes based on the replication factor. It elects one of the nodes as master and designates the others as slaves.
+It provides notifications to each node in the form of state transitions (Offline to Slave, Slave to Master). It also provides notifications when there is a change in cluster state.
+This allows the slave to stop replicating from the current master and start replicating from the new master.
+
+In this application, we have only one partition, but it's very easy to extend it to support multiple partitions. By partitioning the file store, one can add new nodes and Helix will automatically
+re-distribute partitions among the nodes. To summarize, Helix provides partition management, fault tolerance, and facilitates automated cluster expansion.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/recipes/service_discovery.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/recipes/service_discovery.md b/site-releases/0.7.0-incubating/src/site/markdown/recipes/service_discovery.md
new file mode 100644
index 0000000..8e06ead
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/recipes/service_discovery.md
@@ -0,0 +1,191 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+Service Discovery
+-----------------
+
+One of the common usages of zookeeper is to enable service discovery.
+The basic idea is that when a server starts up, it advertises its configuration/metadata, such as its host name and port, on zookeeper.
+This allows clients to dynamically discover the servers that are currently active. One can think of this as a service registry, with which a server registers when it starts and
+from which it is automatically deregistered when it shuts down or crashes. In many cases it serves as an alternative to VIPs.
+
+The core idea behind this is to use zookeeper ephemeral nodes. An ephemeral node is created when the server registers, and all its metadata is put into the znode.
+When the server shuts down, zookeeper automatically removes this znode.
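+
+A minimal sketch of the registration step using the raw ZooKeeper API (the znode path and payload format here are illustrative, not what Helix itself uses):
+
+```
+// Assumes the parent znodes (/services/myServiceName) already exist
+ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
+byte[] metadata = "{\"host\":\"host.x.y.z\",\"port\":12000}".getBytes();
+// EPHEMERAL: zookeeper deletes this znode automatically when the session ends
+zk.create("/services/myServiceName/host.x.y.z_12000", metadata,
+    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+```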
+
+There are two ways clients can dynamically discover the active servers:
+
+#### ZOOKEEPER WATCH
+
+Clients can set a child watch under a specific path on zookeeper.
+When a new service is registered or deregistered, zookeeper notifies the client via a watch event, and the client can read the list of services. Even though this looks trivial,
+there are a lot of things one needs to keep in mind, like ensuring that you first set the watch back on zookeeper before reading data from zookeeper.
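+
+A sketch of that pattern using the raw ZooKeeper API (the path and the `zk` handle are illustrative). `getChildren` re-arms the watch atomically with the read, so no update can slip in between reading the list and re-setting the watch:
+
+```
+public class ServiceWatcher implements Watcher {
+  private final ZooKeeper zk;
+
+  public ServiceWatcher(ZooKeeper zk) { this.zk = zk; }
+
+  public List<String> readServices() throws KeeperException, InterruptedException {
+    // Passing 'this' as the watcher sets the watch atomically with the read
+    return zk.getChildren("/services/myServiceName", this);
+  }
+
+  @Override
+  public void process(WatchedEvent event) {
+    if (event.getType() == Event.EventType.NodeChildrenChanged) {
+      try {
+        List<String> services = readServices(); // read and re-arm the watch
+        // ... notify registered listeners of the new list ...
+      } catch (Exception e) {
+        // handle session expiry / connection loss, e.g. by re-registering
+      }
+    }
+  }
+}
+```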
+
+
+#### POLL
+
+Another approach is for the client to periodically read the zookeeper path and get the list of services.
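+
+A corresponding sketch of the polling approach (the interval, path, and `zk` handle are illustrative):
+
+```
+ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
+poller.scheduleAtFixedRate(() -> {
+  try {
+    // No watcher: just read the current list of registered services
+    List<String> services = zk.getChildren("/services/myServiceName", false);
+    // ... diff against the previously seen list and notify listeners ...
+  } catch (Exception e) {
+    // connection loss, etc.; retry on the next tick
+  }
+}, 0, 30, TimeUnit.SECONDS);
+```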
+
+
+Both approaches have pros and cons. For example, setting a watch might trigger a herd effect if there is a large number of clients; this is at its worst when many servers are starting up at once.
+The good thing about setting a watch is that clients are notified of a change immediately, which is not true in the case of polling.
+In some cases, having both WATCH and POLL makes sense: WATCH allows one to get notifications as soon as possible, while POLL provides a safety net if a watch event is missed because of a code bug or because zookeeper fails to notify.
+
+##### Other important scenarios to take care of
+* What happens when the zookeeper session expires? All the watches/ephemeral nodes previously added/created by this server are lost.
+One needs to add the watches again, recreate the ephemeral nodes, etc.
+* Due to network issues or Java GC pauses, session expiry might happen again and again; this is also known as flapping. It is important for the server to detect this and deregister itself.
+
+##### Other operational things to consider
+* What if a node is behaving badly? One might kill the server, but that means losing the ability to debug it.
+It would be nice to have the ability to mark a server as disabled, so that clients know the node is disabled and will not contact it.
+ 
+#### Configuration ownership
+
+This is an important aspect that is often ignored in the initial stages of development. Commonly, the service discovery pattern means that servers start up with some configuration and then simply put that configuration/metadata in zookeeper. While this works well in the beginning,
+configuration management becomes very difficult, since the servers themselves are statically configured: any change in server configuration implies restarting the server. Ideally, it would be nice to have the ability to change configuration dynamically without having to restart a server.
+
+Ideally you want a hybrid solution: a node starts with minimal configuration and gets the rest of its configuration from zookeeper.
+
+### How to use Helix to achieve this
+
+Even though Helix provides higher-level abstractions in terms of state machines, constraints, and objectives,
+service discovery is one of the things that has existed since we started.
+The controller uses the exact mechanism described above to discover when new servers join the cluster.
+We create these znodes under /CLUSTERNAME/LIVEINSTANCES.
+Since at any time there is only one controller, we use a ZK watch to track the liveness of a server.
+
+This recipe simply demonstrates how one can reuse that part to implement service discovery. It demonstrates multiple modes of service discovery:
+
+* POLL: The client reads from zookeeper at regular intervals (30 seconds here). Use this if you have hundreds of clients.
+* WATCH: The client sets up a watcher and gets notified of changes. Use this if you have tens of clients.
+* NONE: This does neither of the above, but reads directly from zookeeper whenever needed.
+
+Helix provides these additional features compared to other implementations available elsewhere:
+
+* It has the concept of disabling a node, which means that a badly behaving node can be disabled using the Helix admin API.
+* It automatically detects if a node connects/disconnects from zookeeper repeatedly and disables the node.
+* Configuration management
+    * Allows one to set configuration via the admin API at various granularities: cluster, instance, resource, partition
+    * Configuration can be dynamically changed.
+    * Notifies the server when configuration changes.
+
+
+##### checkout and build
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+cd recipes/service-discovery/target/service-discovery-pkg/bin
+chmod +x *
+```
+
+##### start zookeeper
+
+```
+./start-standalone-zookeeper 2199
+```
+
+#### Run the demo
+
+```
+./service-discovery-demo.sh
+```
+
+#### Output
+
+```
+START:Service discovery demo mode:WATCH
+	Registering service
+		host.x.y.z_12000
+		host.x.y.z_12001
+		host.x.y.z_12002
+		host.x.y.z_12003
+		host.x.y.z_12004
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Deregistering service:
+		host.x.y.z_12002
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Registering service:host.x.y.z_12002
+END:Service discovery demo mode:WATCH
+=============================================
+START:Service discovery demo mode:POLL
+	Registering service
+		host.x.y.z_12000
+		host.x.y.z_12001
+		host.x.y.z_12002
+		host.x.y.z_12003
+		host.x.y.z_12004
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Deregistering service:
+		host.x.y.z_12002
+	Sleeping for poll interval:30000
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Registering service:host.x.y.z_12002
+END:Service discovery demo mode:POLL
+=============================================
+START:Service discovery demo mode:NONE
+	Registering service
+		host.x.y.z_12000
+		host.x.y.z_12001
+		host.x.y.z_12002
+		host.x.y.z_12003
+		host.x.y.z_12004
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Deregistering service:
+		host.x.y.z_12000
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Registering service:host.x.y.z_12000
+END:Service discovery demo mode:NONE
+=============================================
+
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/recipes/task_dag_execution.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/recipes/task_dag_execution.md b/site-releases/0.7.0-incubating/src/site/markdown/recipes/task_dag_execution.md
new file mode 100644
index 0000000..f0474e4
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/recipes/task_dag_execution.md
@@ -0,0 +1,204 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Distributed task execution
+
+
+This recipe is intended to demonstrate how task dependencies can be modeled using primitives provided by Helix. A given task can be run with the desired parallelism and will start only when its upstream dependencies are met. The demo executes the task DAG described below using 10 workers. Although the demo starts the workers as threads, there is no requirement that all the workers run in the same process. In reality, these workers run on many different boxes in a cluster. When a worker fails, Helix takes care of
+re-assigning the failed task partitions to a new worker.
+
+Redis is used as a result store. Any other suitable implementation for TaskResultStore can be plugged in.
+
+### Workflow 
+
+
+#### Input 
+
+10000 impression events and around 100 click events are pre-populated in the task result store (redis).
+
+* **ImpEvent**: format: id,isFraudulent,country,gender
+
+* **ClickEvent**: format: id,isFraudulent,impEventId
+
+#### Stages
+
++ **FilterImps**: Filters out impression events where isFraudulent=true.
+
++ **FilterClicks**: Filters out click events where isFraudulent=true.
+
++ **impCountsByGender**: Generates impression counts grouped by gender. It does this by incrementing the count for 'impression_gender_counts:<gender_value>' in the task result store (redis hash). Depends on: **FilterImps**
+
++ **impCountsByCountry**: Generates impression counts grouped by country. It does this by incrementing the count for 'impression_country_counts:<country_value>' in the task result store (redis hash). Depends on: **FilterImps**
+
++ **impClickJoin**: Joins clicks with the corresponding impression event using impEventId as the join key. The join is needed to pull in dimensions not present in the click event. Depends on: **FilterImps, FilterClicks**
+
++ **clickCountsByGender**: Generates click counts grouped by gender. It does this by incrementing the count for click_gender_counts:<gender_value> in the task result store (redis hash). Depends on: **impClickJoin**
+
++ **clickCountsByCountry**: Generates click counts grouped by country. It does this by incrementing the count for click_country_counts:<country_value> in the task result store (redis hash). Depends on: **impClickJoin**
+
++ **report**: Reads all the aggregates generated by the previous stages and prints them. Depends on: **impCountsByGender, impCountsByCountry, clickCountsByGender, clickCountsByCountry**
+
+
+### Creating DAG
+
+Each stage is represented as a Node along with its upstream dependencies and desired parallelism. Each stage is modeled as a resource in Helix using the OnlineOffline state model. As part of the Offline to Online transition, we watch the external view of the upstream resources and wait for them to transition to the online state. See Task.java for additional info.
+
+```
+Dag dag = new Dag();
+dag.addNode(new Node("filterImps", 10, ""));
+dag.addNode(new Node("filterClicks", 5, ""));
+dag.addNode(new Node("impClickJoin", 10, "filterImps,filterClicks"));
+dag.addNode(new Node("impCountsByGender", 10, "filterImps"));
+dag.addNode(new Node("impCountsByCountry", 10, "filterImps"));
+dag.addNode(new Node("clickCountsByGender", 5, "impClickJoin"));
+dag.addNode(new Node("clickCountsByCountry", 5, "impClickJoin"));
+dag.addNode(new Node("report", 1, "impCountsByGender,impCountsByCountry,clickCountsByGender,clickCountsByCountry"));
+```
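+
+The wait-for-upstream logic lives in the Offline to Online transition. A simplified sketch of what that transition might look like is below; `parentResources` and `executeTask` are illustrative names, and the actual Task.java in the recipe may differ:
+
+```
+@Transition(from = "OFFLINE", to = "ONLINE")
+public void onBecomeOnlineFromOffline(Message message, NotificationContext context)
+    throws InterruptedException {
+  HelixManager manager = context.getManager();
+  // Block until every upstream resource reports all of its partitions ONLINE
+  for (String upstream : parentResources) {
+    while (!allPartitionsOnline(manager, upstream)) {
+      Thread.sleep(1000); // poll the external view until the upstream stage completes
+    }
+  }
+  executeTask(); // now safe to run this stage's work for the partition
+}
+
+private boolean allPartitionsOnline(HelixManager manager, String resource) {
+  // (sic: getClusterManagmentTool is the actual method name in Helix)
+  ExternalView view = manager.getClusterManagmentTool()
+      .getResourceExternalView(manager.getClusterName(), resource);
+  if (view == null) {
+    return false;
+  }
+  for (String partition : view.getPartitionSet()) {
+    Map<String, String> stateMap = view.getStateMap(partition);
+    if (stateMap == null || !stateMap.containsValue("ONLINE")) {
+      return false;
+    }
+  }
+  return true;
+}
+```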
+
+### DEMO
+
+In order to run the demo, use the following steps.
+
+See http://redis.io/topics/quickstart for how to install a redis server.
+
+```
+
+Start redis, e.g.:
+./redis-server --port 6379
+
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix/recipes/task-execution
+mvn clean install package -DskipTests
+cd target/task-execution-pkg/bin
+chmod +x task-execution-demo.sh
+./task-execution-demo.sh 2181 localhost 6379 
+
+```
+
+```
+
+
+
+
+
+                       +-----------------+       +----------------+
+                       |   filterImps    |       |  filterClicks  |
+                       | (parallelism=10)|       | (parallelism=5)|
+                       +----------+-----++       +-------+--------+
+                       |          |     |                |
+                       |          |     |                |
+                       |          |     |                |
+                       |          |     +------->--------v------------+
+      +--------------<-+   +------v-------+    |  impClickJoin        |
+      |impCountsByGender   |impCountsByCountry | (parallelism=10)     |
+      |(parallelism=10)    |(parallelism=10)   ++-------------------+-+
+      +-----------+--+     +---+----------+     |                   |
+                  |            |                |                   |
+                  |            |                |                   |
+                  |            |       +--------v---------+       +-v-------------------+
+                  |            |       |clickCountsByGender       |clickCountsByCountry |
+                  |            |       |(parallelism=5)   |       |(parallelism=5)      |
+                  |            |       +----+-------------+       +---------------------+
+                  |            |            |                     |
+                  |            |            |                     |
+                  |            |            |                     |
+                  +----->+-----+>-----------v----+<---------------+
+                         | report                |
+                         |(parallelism=1)        |
+                         +-----------------------+
+
+```
+
+(credit for the above ASCII art: http://www.asciiflow.com)
+
+### OUTPUT
+
+```
+Done populating dummy data
+Executing filter task for filterImps_3 for impressions_demo
+Executing filter task for filterImps_2 for impressions_demo
+Executing filter task for filterImps_0 for impressions_demo
+Executing filter task for filterImps_1 for impressions_demo
+Executing filter task for filterImps_4 for impressions_demo
+Executing filter task for filterClicks_3 for clicks_demo
+Executing filter task for filterClicks_1 for clicks_demo
+Executing filter task for filterImps_8 for impressions_demo
+Executing filter task for filterImps_6 for impressions_demo
+Executing filter task for filterClicks_2 for clicks_demo
+Executing filter task for filterClicks_0 for clicks_demo
+Executing filter task for filterImps_7 for impressions_demo
+Executing filter task for filterImps_5 for impressions_demo
+Executing filter task for filterClicks_4 for clicks_demo
+Executing filter task for filterImps_9 for impressions_demo
+Running AggTask for impCountsByGender_3 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_2 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_0 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_9 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_1 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_4 for filtered_impressions_demo gender
+Running AggTask for impCountsByCountry_4 for filtered_impressions_demo country
+Running AggTask for impCountsByGender_5 for filtered_impressions_demo gender
+Executing JoinTask for impClickJoin_2
+Running AggTask for impCountsByCountry_3 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_1 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_0 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_2 for filtered_impressions_demo country
+Running AggTask for impCountsByGender_6 for filtered_impressions_demo gender
+Executing JoinTask for impClickJoin_1
+Executing JoinTask for impClickJoin_0
+Executing JoinTask for impClickJoin_3
+Running AggTask for impCountsByGender_8 for filtered_impressions_demo gender
+Executing JoinTask for impClickJoin_4
+Running AggTask for impCountsByGender_7 for filtered_impressions_demo gender
+Running AggTask for impCountsByCountry_5 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_6 for filtered_impressions_demo country
+Executing JoinTask for impClickJoin_9
+Running AggTask for impCountsByCountry_8 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_7 for filtered_impressions_demo country
+Executing JoinTask for impClickJoin_5
+Executing JoinTask for impClickJoin_6
+Running AggTask for impCountsByCountry_9 for filtered_impressions_demo country
+Executing JoinTask for impClickJoin_8
+Executing JoinTask for impClickJoin_7
+Running AggTask for clickCountsByCountry_1 for joined_clicks_demo country
+Running AggTask for clickCountsByCountry_0 for joined_clicks_demo country
+Running AggTask for clickCountsByCountry_2 for joined_clicks_demo country
+Running AggTask for clickCountsByCountry_3 for joined_clicks_demo country
+Running AggTask for clickCountsByGender_1 for joined_clicks_demo gender
+Running AggTask for clickCountsByCountry_4 for joined_clicks_demo country
+Running AggTask for clickCountsByGender_3 for joined_clicks_demo gender
+Running AggTask for clickCountsByGender_2 for joined_clicks_demo gender
+Running AggTask for clickCountsByGender_4 for joined_clicks_demo gender
+Running AggTask for clickCountsByGender_0 for joined_clicks_demo gender
+Running reports task
+Impression counts per country
+{CANADA=1940, US=1958, CHINA=2014, UNKNOWN=2022, UK=1946}
+Click counts per country
+{US=24, CANADA=14, CHINA=26, UNKNOWN=14, UK=22}
+Impression counts per gender
+{F=3325, UNKNOWN=3259, M=3296}
+Click counts per gender
+{F=33, UNKNOWN=32, M=35}
+
+
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/recipes/user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/recipes/user_def_rebalancer.md b/site-releases/0.7.0-incubating/src/site/markdown/recipes/user_def_rebalancer.md
new file mode 100644
index 0000000..68fd954
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/recipes/user_def_rebalancer.md
@@ -0,0 +1,285 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+Lock Manager with a User-Defined Rebalancer
+-------------------------------------------
+Helix is able to compute node preferences and state assignments automatically using general-purpose algorithms. In many cases, a distributed system implementer may choose to instead define a customized approach to computing the location of replicas, the state mapping, or both in response to the addition or removal of participants. The following is an implementation of the [Distributed Lock Manager](./lock_manager.html) that includes a user-defined rebalancer.
+
+### Define the cluster and locks
+
+The YAML file below fully defines the cluster and the locks. A lock can be in one of two states: locked and unlocked. Transitions can happen in either direction, and the locked state is preferred. A resource in this example is the entire collection of locks to distribute. A partition is mapped to a lock; in this case that means there are 12 locks. These 12 locks will be distributed across 3 nodes. The constraints indicate that only one replica of a lock can be in the locked state at any given time. These locks can each have only a single holder, defined by a replica count of 1.
+
+Notice the rebalancer section of the definition. The mode is set to USER_DEFINED and the class name refers to the plugged-in rebalancer implementation that inherits from [HelixRebalancer](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html). This implementation is called whenever the state of the cluster changes, as is the case when participants are added or removed from the system.
+
+Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/resources/lock-manager-config.yaml
+
+```
+clusterName: lock-manager-custom-rebalancer # unique name for the cluster
+resources:
+  - name: lock-group # unique resource name
+    rebalancer: # we will provide our own rebalancer
+      mode: USER_DEFINED
+      class: org.apache.helix.userdefinedrebalancer.LockManagerRebalancer
+    partitions:
+      count: 12 # number of locks
+      replicas: 1 # number of simultaneous holders for each lock
+    stateModel:
+      name: lock-unlock # unique model name
+      states: [LOCKED, RELEASED, DROPPED] # the list of possible states
+      transitions: # the list of possible transitions
+        - name: Unlock
+          from: LOCKED
+          to: RELEASED
+        - name: Lock
+          from: RELEASED
+          to: LOCKED
+        - name: DropLock
+          from: LOCKED
+          to: DROPPED
+        - name: DropUnlock
+          from: RELEASED
+          to: DROPPED
+        - name: Undrop
+          from: DROPPED
+          to: RELEASED
+      initialState: RELEASED
+    constraints:
+      state:
+        counts: # maximum number of replicas of a partition that can be in each state
+          - name: LOCKED
+            count: "1"
+          - name: RELEASED
+            count: "-1"
+          - name: DROPPED
+            count: "-1"
+        priorityList: [LOCKED, RELEASED, DROPPED] # states in order of priority
+      transition: # transitions priority to enforce order that transitions occur
+        priorityList: [Unlock, Lock, Undrop, DropUnlock, DropLock]
+participants: # list of nodes that can acquire locks
+  - name: localhost_12001
+    host: localhost
+    port: 12001
+  - name: localhost_12002
+    host: localhost
+    port: 12002
+  - name: localhost_12003
+    host: localhost
+    port: 12003
+```
+
+Then, Helix\'s YAMLClusterSetup tool can read in the configuration and bootstrap the cluster immediately:
+
+```
+YAMLClusterSetup setup = new YAMLClusterSetup(zkAddress);
+InputStream input =
+    Thread.currentThread().getContextClassLoader()
+        .getResourceAsStream("lock-manager-config.yaml");
+YAMLClusterSetup.YAMLClusterConfig config = setup.setupCluster(input);
+```
+
+### Write a rebalancer
+Below is a full implementation of a rebalancer that extends [HelixRebalancer](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html). In this case, it simply throws out the previous resource assignment, computes the target node for as many partition replicas as can hold a lock in the LOCKED state (in this example, one), and assigns them the LOCKED state (which is at the head of the state preference list). Clearly a more robust implementation would likely examine the current ideal state to maintain current assignments, and the full state list to handle models more complicated than this one. However, for a simple lock holder implementation, this is sufficient.
+
+Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/LockManagerRebalancer.java
+
+```
+@Override
+public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConfig, Cluster cluster,
+    ResourceCurrentState currentState) {
+  // Get the rebalancer context (a basic partitioned one)
+  PartitionedRebalancerContext context = rebalancerConfig.getRebalancerContext(
+      PartitionedRebalancerContext.class);
+
+  // Initialize an empty mapping of locks to participants
+  ResourceAssignment assignment = new ResourceAssignment(context.getResourceId());
+
+  // Get the list of live participants in the cluster
+  List<ParticipantId> liveParticipants = new ArrayList<ParticipantId>(
+      cluster.getLiveParticipantMap().keySet());
+
+  // Get the state model (should be a simple lock/unlock model) and the highest-priority state
+  StateModelDefId stateModelDefId = context.getStateModelDefId();
+  StateModelDefinition stateModelDef = cluster.getStateModelMap().get(stateModelDefId);
+  if (stateModelDef.getStatesPriorityList().size() < 1) {
+    LOG.error("Invalid state model definition. There should be at least one state.");
+    return assignment;
+  }
+  State lockState = stateModelDef.getTypedStatesPriorityList().get(0);
+
+  // Count the number of participants allowed to lock each lock
+  String stateCount = stateModelDef.getNumParticipantsPerState(lockState);
+  int lockHolders = 0;
+  try {
+    // a numeric value is a custom-specified number of participants allowed to lock the lock
+    lockHolders = Integer.parseInt(stateCount);
+  } catch (NumberFormatException e) {
+    LOG.error("Invalid state model definition. The lock state does not have a valid count");
+    return assignment;
+  }
+
+  // Fairly assign the lock state to the participants using a simple mod-based sequential
+  // assignment. For instance, if each lock can be held by 3 participants, lock 0 would be held
+  // by participants (0, 1, 2), lock 1 would be held by (1, 2, 3), and so on, wrapping around the
+  // number of participants as necessary.
+  // This assumes a simple lock-unlock model where the only state of interest is which nodes have
+  // acquired each lock.
+  int i = 0;
+  for (PartitionId partition : context.getPartitionSet()) {
+    Map<ParticipantId, State> replicaMap = new HashMap<ParticipantId, State>();
+    for (int j = i; j < i + lockHolders; j++) {
+      int participantIndex = j % liveParticipants.size();
+      ParticipantId participant = liveParticipants.get(participantIndex);
+      // enforce that a participant can only have one instance of a given lock
+      if (!replicaMap.containsKey(participant)) {
+        replicaMap.put(participant, lockState);
+      }
+    }
+    assignment.addReplicaMap(partition, replicaMap);
+    i++;
+  }
+  return assignment;
+}
+```
+
+### Start up the participants
+Here is a lock class based on the newly defined lock-unlock state model so that the participant can receive callbacks on state transitions.
+
+Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/Lock.java
+
+```
+public class Lock extends StateModel {
+  private String lockName;
+
+  public Lock(String lockName) {
+    this.lockName = lockName;
+  }
+
+  @Transition(from = "RELEASED", to = "LOCKED")
+  public void lock(Message m, NotificationContext context) {
+    System.out.println(context.getManager().getInstanceName() + " acquired lock:" + lockName);
+  }
+
+  @Transition(from = "LOCKED", to = "RELEASED")
+  public void release(Message m, NotificationContext context) {
+    System.out.println(context.getManager().getInstanceName() + " releasing lock:" + lockName);
+  }
+}
+```
+
+Here is the factory to make the Lock class accessible.
+
+Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/LockFactory.java
+
+```
+public class LockFactory extends StateModelFactory<Lock> {
+  @Override
+  public Lock createNewStateModel(String lockName) {
+    return new Lock(lockName);
+  }
+}
+```
+
+Finally, here is the factory registration and the start of the participant:
+
+```
+participantManager =
+    HelixManagerFactory.getZKHelixManager(clusterName, participantName, InstanceType.PARTICIPANT,
+        zkAddress);
+participantManager.getStateMachineEngine().registerStateModelFactory(stateModelName,
+    new LockFactory());
+participantManager.connect();
+```
+
+### Start up the controller
+
+```
+controllerManager =
+    HelixControllerMain.startHelixController(zkAddress, config.clusterName, "controller",
+        HelixControllerMain.STANDALONE);
+```
+
+### Try it out
+#### Building 
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+cd recipes/user-rebalanced-lock-manager/target/user-rebalanced-lock-manager-pkg/bin
+chmod +x *
+./lock-manager-demo.sh
+```
+
+#### Output
+
+```
+./lock-manager-demo 
+STARTING localhost_12002
+STARTING localhost_12001
+STARTING localhost_12003
+STARTED localhost_12001
+STARTED localhost_12003
+STARTED localhost_12002
+localhost_12003 acquired lock:lock-group_4
+localhost_12002 acquired lock:lock-group_8
+localhost_12001 acquired lock:lock-group_10
+localhost_12001 acquired lock:lock-group_3
+localhost_12001 acquired lock:lock-group_6
+localhost_12003 acquired lock:lock-group_0
+localhost_12002 acquired lock:lock-group_5
+localhost_12001 acquired lock:lock-group_9
+localhost_12002 acquired lock:lock-group_2
+localhost_12003 acquired lock:lock-group_7
+localhost_12003 acquired lock:lock-group_11
+localhost_12002 acquired lock:lock-group_1
+lockName  acquired By
+======================================
+lock-group_0  localhost_12003
+lock-group_1  localhost_12002
+lock-group_10 localhost_12001
+lock-group_11 localhost_12003
+lock-group_2  localhost_12002
+lock-group_3  localhost_12001
+lock-group_4  localhost_12003
+lock-group_5  localhost_12002
+lock-group_6  localhost_12001
+lock-group_7  localhost_12003
+lock-group_8  localhost_12002
+lock-group_9  localhost_12001
+Stopping the first participant
+localhost_12001 Interrupted
+localhost_12002 acquired lock:lock-group_3
+localhost_12003 acquired lock:lock-group_6
+localhost_12003 acquired lock:lock-group_10
+localhost_12002 acquired lock:lock-group_9
+lockName  acquired By
+======================================
+lock-group_0  localhost_12003
+lock-group_1  localhost_12002
+lock-group_10 localhost_12003
+lock-group_11 localhost_12003
+lock-group_2  localhost_12002
+lock-group_3  localhost_12002
+lock-group_4  localhost_12003
+lock-group_5  localhost_12002
+lock-group_6  localhost_12003
+lock-group_7  localhost_12003
+lock-group_8  localhost_12002
+lock-group_9  localhost_12002
+```
+
+Notice that the lock assignment directly follows the assignment generated by the user-defined rebalancer both initially and after a participant is removed from the system.
\ No newline at end of file


[04/16] [HELIX-270] Include documentation for previous version on the website

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_admin.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_admin.md b/site-releases/trunk/src/site/markdown/tutorial_admin.md
new file mode 100644
index 0000000..f269a4a
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_admin.md
@@ -0,0 +1,407 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Admin Operations</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Admin Operations
+
+Helix provides a set of admin APIs for cluster management operations. They are supported via:
+
+* _Java API_
+* _Commandline interface_
+* _REST interface via helix-admin-webapp_
+
+### Java API
+See interface [_org.apache.helix.HelixAdmin_](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/HelixAdmin.html)
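+
+For example, here is a minimal sketch using ZKHelixAdmin, the ZooKeeper-based implementation of this interface. The cluster, instance, and resource names are illustrative, and API details may vary by release:
+
+```
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+
+// Create a cluster and add two participants to it
+admin.addCluster("MyCluster");
+admin.addInstance("MyCluster", new InstanceConfig("localhost_12001"));
+admin.addInstance("MyCluster", new InstanceConfig("localhost_12002"));
+
+// Add a resource with 8 partitions using the MasterSlave state model,
+// then rebalance it with 2 replicas per partition
+admin.addResource("MyCluster", "MyDB", 8, "MasterSlave");
+admin.rebalance("MyCluster", "MyDB", 2);
+```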
+
+### Command-line interface
+The command-line tool comes with the helix-core package:
+
+Get the command-line tool:
+
+``` 
+  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+  - cd incubator-helix
+  - ./build
+  - cd helix-core/target/helix-core-pkg/bin
+  - chmod +x *.sh
+```
+
+Get help:
+
+```
+  - ./helix-admin.sh --help
+```
+
+All other commands have this form:
+
+```
+  ./helix-admin.sh --zkSvr <ZookeeperServerAddress> <command> <parameters>
+```
+
+Admin commands and brief description:
+
+| Command syntax | Description |
+| -------------- | ----------- |
+| _\-\-activateCluster \<clusterName controllerCluster true/false\>_ | Enable/disable a cluster in distributed controller mode |
+| _\-\-addCluster \<clusterName\>_ | Add a new cluster |
+| _\-\-addIdealState \<clusterName resourceName fileName.json\>_ | Add an ideal state to a cluster |
+| _\-\-addInstanceTag \<clusterName instanceName tag\>_ | Add a tag to an instance |
+| _\-\-addNode \<clusterName instanceId\>_ | Add an instance to a cluster |
+| _\-\-addResource \<clusterName resourceName partitionNumber stateModelName\>_ | Add a new resource to a cluster |
+| _\-\-addResourceProperty \<clusterName resourceName propertyName propertyValue\>_ | Add a resource property |
+| _\-\-addStateModelDef \<clusterName fileName.json\>_ | Add a State model definition to a cluster |
+| _\-\-dropCluster \<clusterName\>_ | Delete a cluster |
+| _\-\-dropNode \<clusterName instanceId\>_ | Remove a node from a cluster |
+| _\-\-dropResource \<clusterName resourceName\>_ | Remove an existing resource from a cluster |
+| _\-\-enableCluster \<clusterName true/false\>_ | Enable/disable a cluster |
+| _\-\-enableInstance \<clusterName instanceId true/false\>_ | Enable/disable an instance |
+| _\-\-enablePartition \<true/false clusterName nodeId resourceName partitionName\>_ | Enable/disable a partition |
+| _\-\-getConfig \<configScope configScopeArgs configKeys\>_ | Get user configs |
+| _\-\-getConstraints \<clusterName constraintType\>_ | Get constraints |
+| _\-\-help_ | Print help information |
+| _\-\-instanceGroupTag \<instanceTag\>_ | Specify instance group tag, used with rebalance command |
+| _\-\-listClusterInfo \<clusterName\>_ | Show information of a cluster |
+| _\-\-listClusters_ | List all clusters |
+| _\-\-listInstanceInfo \<clusterName instanceId\>_ | Show information of an instance |
+| _\-\-listInstances \<clusterName\>_ | List all instances in a cluster |
+| _\-\-listPartitionInfo \<clusterName resourceName partitionName\>_ | Show information of a partition |
+| _\-\-listResourceInfo \<clusterName resourceName\>_ | Show information of a resource |
+| _\-\-listResources \<clusterName\>_ | List all resources in a cluster |
+| _\-\-listStateModel \<clusterName stateModelName\>_ | Show information of a state model |
+| _\-\-listStateModels \<clusterName\>_ | List all state models in a cluster |
+| _\-\-maxPartitionsPerNode \<maxPartitionsPerNode\>_ | Specify the max partitions per instance, used with addResourceGroup command |
+| _\-\-rebalance \<clusterName resourceName replicas\>_ | Rebalance a resource |
+| _\-\-removeConfig \<configScope configScopeArgs configKeys\>_ | Remove user configs |
+| _\-\-removeConstraint \<clusterName constraintType constraintId\>_ | Remove a constraint |
+| _\-\-removeInstanceTag \<clusterName instanceId tag\>_ | Remove a tag from an instance |
+| _\-\-removeResourceProperty \<clusterName resourceName propertyName\>_ | Remove a resource property |
+| _\-\-resetInstance \<clusterName instanceId\>_ | Reset all erroneous partitions on an instance |
+| _\-\-resetPartition \<clusterName instanceId resourceName partitionName\>_ | Reset an erroneous partition |
+| _\-\-resetResource \<clusterName resourceName\>_ | Reset all erroneous partitions of a resource |
+| _\-\-setConfig \<configScope configScopeArgs configKeyValueMap\>_ | Set user configs |
+| _\-\-setConstraint \<clusterName constraintType constraintId constraintKeyValueMap\>_ | Set a constraint |
+| _\-\-swapInstance \<clusterName oldInstance newInstance\>_ | Swap an old instance with a new instance |
+| _\-\-zkSvr \<ZookeeperServerAddress\>_ | Provide zookeeper address |
+
+### REST interface
+
+The REST interface comes with the helix-admin-webapp package:
+
+``` 
+  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+  - cd incubator-helix 
+  - ./build
+  - cd helix-admin-webapp/target/helix-admin-webapp-pkg/bin
+  - chmod +x *.sh
+  - ./run-rest-admin.sh --zkSvr <zookeeperAddress> --port <port> // make sure zookeeper is running
+```
+
+#### URLs and supported methods
+
+* _/clusters_
+    * List all clusters
+
+    ```
+      curl http://localhost:8100/clusters
+    ```
+
+    * Add a cluster
+    
+    ```
+      curl -d 'jsonParameters={"command":"addCluster","clusterName":"MyCluster"}' -H "Content-Type: application/json" http://localhost:8100/clusters
+    ```
+
+* _/clusters/{clusterName}_
+    * List cluster information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster
+    ```
+
+    * Enable/disable a cluster in distributed controller mode
+    
+    ```
+      curl -d 'jsonParameters={"command":"activateCluster","grandCluster":"MyControllerCluster","enabled":"true"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster
+    ```
+
+    * Remove a cluster
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster
+    ```
+    
+* _/clusters/{clusterName}/resourceGroups_
+    * List all resources in a cluster
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/resourceGroups
+    ```
+    
+    * Add a resource to a cluster
+    
+    ```
+      curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
+    ```
+
+* _/clusters/{clusterName}/resourceGroups/{resourceName}_
+    * List resource information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    ```
+    
+    * Drop a resource
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    ```
+
+    * Reset all erroneous partitions of a resource
+    
+    ```
+      curl -d 'jsonParameters={"command":"resetResource"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    ```
+
+* _/clusters/{clusterName}/resourceGroups/{resourceName}/idealState_
+    * Rebalance a resource
+    
+    ```
+      curl -d 'jsonParameters={"command":"rebalance","replicas":"3"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    ```
+
+    * Add an ideal state
+    
+    ```
+    echo jsonParameters={
+    "command":"addIdealState"
+       }&newIdealState={
+      "id" : "MyDB",
+      "simpleFields" : {
+        "IDEAL_STATE_MODE" : "AUTO",
+        "NUM_PARTITIONS" : "8",
+        "REBALANCE_MODE" : "SEMI_AUTO",
+        "REPLICAS" : "0",
+        "STATE_MODEL_DEF_REF" : "MasterSlave",
+        "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+      },
+      "listFields" : {
+      },
+      "mapFields" : {
+        "MyDB_0" : {
+          "localhost_1001" : "MASTER",
+          "localhost_1002" : "SLAVE"
+        }
+      }
+    }
+    > newIdealState.json
+    curl -d @'./newIdealState.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    ```
+    
+    * Add resource property
+    
+    ```
+      curl -d 'jsonParameters={"command":"addResourceProperty","REBALANCE_TIMER_PERIOD":"500"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    ```
+    
+* _/clusters/{clusterName}/resourceGroups/{resourceName}/externalView_
+    * Show resource external view
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/externalView
+    ```
+* _/clusters/{clusterName}/instances_
+    * List all instances
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/instances
+    ```
+
+    * Add an instance
+    
+    ```
+    curl -d 'jsonParameters={"command":"addInstance","instanceNames":"localhost_1001"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
+    ```
+    
+    * Swap an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"swapInstance","oldInstance":"localhost_1001", "newInstance":"localhost_1002"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
+    ```
+* _/clusters/{clusterName}/instances/{instanceName}_
+    * Show instance information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+    
+    * Enable/disable an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"enableInstance","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+
+    * Drop an instance
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+    
+    * Disable/enable partitions on an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"enablePartition","resource": "MyDB","partition":"MyDB_0",  "enabled" : "false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+    
+    * Reset an erroneous partition on an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"resetPartition","resource": "MyDB","partition":"MyDB_0"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+
+    * Reset all erroneous partitions on an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"resetInstance"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+
+* _/clusters/{clusterName}/configs_
+    * Get user cluster level config
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/configs/cluster
+    ```
+    
+    * Set user cluster level config
+    
+    ```
+      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
+    ```
+
+    * Remove user cluster level config
+    
+    ```
+    curl -d 'jsonParameters={"command":"removeConfig","configs":"key1,key2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
+    ```
+    
+    * Get/set/remove user participant level config
+    
+    ```
+      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
+    ```
+    
+    * Get/set/remove resource level config
+    
+    ```
+    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/resource/MyDB
+    ```
+
+* _/clusters/{clusterName}/controller_
+    * Show controller information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/Controller
+    ```
+    
+    * Enable/disable cluster
+    
+    ```
+      curl -d 'jsonParameters={"command":"enableCluster","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/Controller
+    ```
+
+* _/zkPath/{path}_
+    * Get information for zookeeper path
+    
+    ```
+      curl http://localhost:8100/zkPath/MyCluster
+    ```
+
+* _/clusters/{clusterName}/StateModelDefs_
+    * Show all state model definitions
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/StateModelDefs
+    ```
+
+    * Add a state model definition
+    
+    ```
+      echo jsonParameters={
+        "command":"addStateModelDef"
+       }&newStateModelDef={
+          "id" : "OnlineOffline",
+          "simpleFields" : {
+            "INITIAL_STATE" : "OFFLINE"
+          },
+          "listFields" : {
+            "STATE_PRIORITY_LIST" : [ "ONLINE", "OFFLINE", "DROPPED" ],
+            "STATE_TRANSITION_PRIORITYLIST" : [ "OFFLINE-ONLINE", "ONLINE-OFFLINE", "OFFLINE-DROPPED" ]
+          },
+          "mapFields" : {
+            "DROPPED.meta" : {
+              "count" : "-1"
+            },
+            "OFFLINE.meta" : {
+              "count" : "-1"
+            },
+            "OFFLINE.next" : {
+              "DROPPED" : "DROPPED",
+              "ONLINE" : "ONLINE"
+            },
+            "ONLINE.meta" : {
+              "count" : "R"
+            },
+            "ONLINE.next" : {
+              "DROPPED" : "OFFLINE",
+              "OFFLINE" : "OFFLINE"
+            }
+          }
+        }
+        > newStateModelDef.json
+        curl -d @'./newStateModelDef.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/StateModelDefs
+    ```
+
+* _/clusters/{clusterName}/StateModelDefs/{stateModelDefName}_
+    * Show a state model definition
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/StateModelDefs/OnlineOffline
+    ```
+
+* _/clusters/{clusterName}/constraints/{constraintType}_
+    * Show all constraints
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT
+    ```
+
+    * Set a constraint
+    
+    ```
+       curl -d 'jsonParameters={"constraintAttributes":"RESOURCE=MyDB,CONSTRAINT_VALUE=1"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    ```
+    
+    * Remove a constraint
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    ```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_controller.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_controller.md b/site-releases/trunk/src/site/markdown/tutorial_controller.md
new file mode 100644
index 0000000..1a4cc45
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_controller.md
@@ -0,0 +1,79 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Controller</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Controller
+
+Next, let\'s implement the controller.  This is the brain of the cluster.  Helix makes sure there is exactly one active controller running the cluster.
+
+### Start the Helix Agent
+
+
+It requires the following parameters:
+ 
+* clusterId: A logical ID to represent the group of nodes
+* controllerId: A logical ID of the process creating the controller instance. Generally this is host:port.
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+
+```
+HelixConnection connection = new ZKHelixConnection(zkConnectString);
+HelixController controller = connection.createController(clusterId, controllerId);
+```
+
+### Controller Code
+
+The Controller needs to know about all changes in the cluster. Helix takes care of this with the default implementation.
+If you need additional functionality, see GenericHelixController and ZKHelixController for how to configure the pipeline.
+
+```
+HelixConnection connection = new ZKHelixConnection(zkConnectString);
+HelixController controller = connection.createController(clusterId, controllerId);
+controller.startAsync();
+```
+The snippet above shows how the controller is started. You can also start the controller using the command-line interface.
+  
+```
+cd helix/helix-core/target/helix-core-pkg/bin
+./run-helix-controller.sh --zkSvr <Zookeeper ServerAddress (Required)>  --cluster <Cluster name (Required)>
+```
+
+### Controller deployment modes
+
+Helix provides multiple options to deploy the controller.
+
+#### STANDALONE
+
+The Controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since a single controller can be a single point of failure, multiple controller processes are required for reliability. Even if multiple controllers are running, only one will be actively managing the cluster at any time; this is decided by a leader-election process. If the leader fails, another controller will take over managing the cluster.
+
+Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the Controller As a Service option below.
+
+#### EMBEDDED
+
+If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants.
+
+#### CONTROLLER AS A SERVICE
+
+One of the cool features we added in Helix is the ability to use a set of controllers to manage a large number of clusters.
+
+For example, if you have X clusters to be managed, instead of deploying X*3 controllers (3 controllers per cluster for fault tolerance), one can deploy just 3 controllers. Each controller can manage X/3 clusters. If any controller fails, the remaining two will each manage X/2 clusters.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_health.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_health.md b/site-releases/trunk/src/site/markdown/tutorial_health.md
new file mode 100644
index 0000000..e1a7f3c
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_health.md
@@ -0,0 +1,46 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Customizing Health Checks</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Customizing Health Checks
+
+In this chapter, we\'ll learn how to customize the health check, based on metrics of your distributed system.  
+
+### Health Checks
+
+Note: _this is currently in development mode and not yet ready for production._
+
+Helix provides the ability for each node in the system to report health metrics on a periodic basis. 
+
+Helix supports multiple ways to aggregate these metrics:
+
+* SUM
+* AVG
+* EXPONENTIAL DECAY
+* WINDOW
+
+Helix persists the aggregated value only.
+
+Applications can define a threshold on the aggregate values according to the SLAs, and when the SLA is violated Helix will fire an alert. 
+Currently Helix only fires an alert, but in a future release we plan to use these metrics to either mark the node dead or load balance the partitions.
+This feature will be valuable for distributed systems that support multi-tenancy and have large variations in workload patterns. In addition, it can be used to detect skewed partitions (hotspots) and rebalance the cluster.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_messaging.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_messaging.md b/site-releases/trunk/src/site/markdown/tutorial_messaging.md
new file mode 100644
index 0000000..4bdce0e
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_messaging.md
@@ -0,0 +1,71 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Messaging</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Messaging
+
+In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster.  This is an interesting feature which is quite useful in practice. It is common that nodes in a distributed system require a mechanism to interact with each other.  
+
+### Example: Bootstrapping a Replica
+
+Consider a search system where an index replica starts up without an index. A typical solution is to get the index from a common location, or to copy the index from another replica.
+
+Helix provides a messaging API for intra-cluster communication between nodes in the system.  Helix provides a mechanism to specify the message recipient in terms of resource, partition, and state rather than specifying hostnames.  Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
+Since Helix is aware of the global state of the system, it can send the message to appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
+
+This is a very generic API that can also be used to schedule periodic tasks in the cluster, such as data backups, log cleanup, etc.
+System administrators can also perform ad-hoc tasks, such as on-demand backups, or run a system command (such as rm -rf ;) across all nodes of the cluster.
+
+```
+      ClusterMessagingService messagingService = manager.getMessagingService();
+
+      // Construct the Message
+      Message requestBackupUriRequest = new Message(
+          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+      requestBackupUriRequest
+          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
+      requestBackupUriRequest.setMsgState(MessageState.NEW);
+
+      // Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
+      Criteria recipientCriteria = new Criteria();
+      recipientCriteria.setInstanceName("%");
+      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+      recipientCriteria.setResource("MyDB");
+      recipientCriteria.setPartition("");
+
+      // Should be processed only by process(es) that are active at the time of sending the message
+      //   This means that if a recipient is restarted after the message is sent, it will not process it.
+      recipientCriteria.setSessionSpecific(true);
+
+      // wait for 30 seconds
+      int timeout = 30000;
+
+      // the handler that will be invoked when any recipient responds to the message.
+      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
+
+      // this will return only after all recipients respond or after timeout
+      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
+          requestBackupUriRequest, responseHandler, timeout);
+```
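+
+The response handler passed to sendAndWait extends AsyncCallback. Below is a minimal sketch of what such a handler might look like; the BOOTSTRAP_URL result key is hypothetical and stands in for whatever the responding nodes put into their reply.
+
+```
+public class BootstrapReplyHandler extends AsyncCallback {
+  @Override
+  public void onTimeOut() {
+    // No replies arrived within the timeout; the replica could fall back
+    // to fetching the index from a common location instead
+  }
+
+  @Override
+  public void onReplyMessage(Message message) {
+    // Each reply carries the responder's result map; here we assume the
+    // responding node filled in a (hypothetical) BOOTSTRAP_URL field
+    String bootstrapUrl = message.getResultMap().get("BOOTSTRAP_URL");
+    // ... record the URL so the bootstrapping replica can fetch the index
+  }
+}
+```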
+
+See HelixManager.DefaultMessagingService in [Javadocs](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more info.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_participant.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_participant.md b/site-releases/trunk/src/site/markdown/tutorial_participant.md
new file mode 100644
index 0000000..da55cbd
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_participant.md
@@ -0,0 +1,97 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Participant</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Participant
+
+In this chapter, we\'ll learn how to implement a Participant, which is a primary functional component of a distributed system.
+
+
+### Start the Helix Agent
+
+The Helix agent is a common component that connects each system component with the controller.
+
+It requires the following parameters:
+ 
+* clusterId: A logical ID to represent the group of nodes
+* participantId: A logical ID of the process creating the manager instance. Generally this is host:port.
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+
+After the Helix participant instance is created, the only thing that needs to be registered is the state model factory.
+The methods of the state model will be called when the controller sends transitions to the Participant.  In this example, we'll use the OnlineOffline factory.  Other options include:
+
+* MasterSlaveStateModelFactory
+* LeaderStandbyStateModelFactory
+* BootstrapHandler
+* _An application defined state model factory_
+
+
+```
+HelixConnection connection = new ZKHelixConnection(zkConnectString);
+HelixParticipant participant = connection.createParticipant(clusterId, participantId);
+StateMachineEngine stateMach = participant.getStateMachineEngine();
+
+// create a stateModelFactory that returns a statemodel object for each partition. 
+StateModelFactory<StateModel> stateModelFactory = new OnlineOfflineStateModelFactory();     
+stateMach.registerStateModelFactory(stateModelType, stateModelFactory);
+participant.startAsync();
+```
+
+Helix doesn\'t know what it means to change from OFFLINE\-\-\>ONLINE or ONLINE\-\-\>OFFLINE.  The following code snippet shows where you insert your system logic for these two state transitions.
+
+```
+public class OnlineOfflineStateModelFactory extends StateModelFactory<StateModel> {
+  @Override
+  public StateModel createNewStateModel(String stateUnitKey) {
+    OnlineOfflineStateModel stateModel = new OnlineOfflineStateModel();
+    return stateModel;
+  }
+  @StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFLINE")
+  public static class OnlineOfflineStateModel extends StateModel {
+
+    @Transition(from = "OFFLINE", to = "ONLINE")
+    public void onBecomeOnlineFromOffline(Message message,
+        NotificationContext context) {
+
+      System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
+
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+      // Application logic to handle transition                                                     //
+      // For example, you might start a service, run initialization, etc                            //
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+    }
+
+    @Transition(from = "ONLINE", to = "OFFLINE")
+    public void onBecomeOfflineFromOnline(Message message,
+        NotificationContext context) {
+
+      System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
+
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+      // Application logic to handle transition                                                     //
+      // For example, you might shutdown a service, log this event, or change monitoring settings   //
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+    }
+  }
+}
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_propstore.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_propstore.md b/site-releases/trunk/src/site/markdown/tutorial_propstore.md
new file mode 100644
index 0000000..ec0d71b
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_propstore.md
@@ -0,0 +1,34 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Application Property Store</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Application Property Store
+
+In this chapter, we\'ll learn how to use the application property store.
+
+### Property Store
+
+It is common that an application needs support for distributed, shared data structures.  Helix uses Zookeeper to store the application data and hence provides notifications when the data changes.
+
+While you could use Zookeeper directly, Helix supports caching the data with a write-through cache. This is far more efficient than reading from Zookeeper on every access.
+
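+Below is a minimal sketch of reading and writing application data, assuming a connected manager; the path and field names are illustrative only:
+
+```
+HelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
+
+// Write a record; AccessOption.PERSISTENT backs it with a persistent znode
+ZNRecord record = new ZNRecord("myConfig");
+record.setSimpleField("replicationFactor", "3");
+store.set("/myApp/myConfig", record, AccessOption.PERSISTENT);
+
+// Reads are served through the cache rather than hitting Zookeeper every time
+ZNRecord stored = store.get("/myApp/myConfig", null, AccessOption.PERSISTENT);
+```
+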
+See [HelixManager.getHelixPropertyStore](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/store/package-summary.html) for details.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_rebalance.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_rebalance.md b/site-releases/trunk/src/site/markdown/tutorial_rebalance.md
new file mode 100644
index 0000000..8f42a5a
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_rebalance.md
@@ -0,0 +1,181 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Rebalancing Algorithms</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Rebalancing Algorithms
+
+The placement of partitions in a distributed system is essential for the reliability and scalability of the system.  For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can satisfy this guarantee.  Helix provides a variant of consistent hashing based on the RUSH algorithm, among others.
+
+This means that, given the numbers of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
+
+* Each node has the same number of partitions
+* Replicas of the same partition do not stay on the same node
+* When a node fails, the partitions will be equally distributed among the remaining nodes
+* When new nodes are added, the number of partitions moved will be minimized along with satisfying the above criteria
+
+Helix employs a rebalancing algorithm to compute the _ideal state_ of the system.  When the _current state_ differs from the _ideal state_, Helix uses the _ideal state_ as the target and computes the appropriate transitions needed to bring the system to it.
+
+Helix makes it easy to perform this operation, while giving you control over the algorithm.  In this section, we\'ll see how to implement the desired behavior.
+
+Helix has four options for rebalancing, in increasing order of customization by the system builder:
+
+* FULL_AUTO
+* SEMI_AUTO
+* CUSTOMIZED
+* USER_DEFINED
+
+```
+            |FULL_AUTO     |  SEMI_AUTO | CUSTOMIZED|  USER_DEFINED  |
+            ---------------------------------------------------------|
+   LOCATION | HELIX        |  APP       |  APP      |      APP       |
+            ---------------------------------------------------------|
+      STATE | HELIX        |  HELIX     |  APP      |      APP       |
+            ----------------------------------------------------------
+```
+
+
+### FULL_AUTO
+
+When the rebalance mode is set to FULL_AUTO, Helix controls both the location of the replica along with the state. This option is useful for applications where creation of a replica is not expensive. 
+
+For example, consider this system that uses a MasterSlave state model, with 3 partitions and 2 replicas in the ideal state.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "FULL_AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [],
+    "MyResource_1" : [],
+    "MyResource_2" : []
+  },
+  "mapFields" : {
+  }
+}
+```
+
+If there are 3 nodes in the cluster, then Helix will balance the masters and slaves equally.  The ideal state is therefore:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Another typical example is evenly distributing a group of tasks among the currently healthy processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node. 
+When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes, resulting in a balanced 20 tasks per node. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 existing nodes to the 5th node, resulting in a balanced distribution of 12 tasks per node.
+
+### SEMI_AUTO
+
+When the application needs to control the placement of the replicas, use the SEMI_AUTO rebalance mode.
+
+Example: In the ideal state below, the partition \'MyResource_0\' is constrained to be placed only on node1 or node2.  The choice of _state_ is still controlled by Helix.  That means MyResource_0.MASTER could be on node1 and MyResource_0.SLAVE on node2, or vice-versa, but neither would be placed on node3.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [node1, node2],
+    "MyResource_1" : [node2, node3],
+    "MyResource_2" : [node3, node1]
+  },
+  "mapFields" : {
+  }
+}
+```
+
+The MasterSlave state model requires that a partition has exactly one MASTER at all times, and the other replicas should be SLAVEs.  In this simple example with 2 replicas per partition, there would be one MASTER and one SLAVE.  Upon failover, a SLAVE has to assume mastership, and a new SLAVE will be generated.
+
+In this mode, when node1 fails, unlike in FULL_AUTO mode, the partition is _not_ moved from node1 to node3. Instead, Helix decides to change the state of MyResource_0 on node2 from SLAVE to MASTER, based on the system constraints.
+
+### CUSTOMIZED
+
+Helix offers a third mode called CUSTOMIZED, in which the application controls the placement _and_ state of each replica. The application needs to implement a callback interface that Helix invokes when the cluster state changes.
+Within this callback, the application can recompute the ideal state. Helix will then issue appropriate transitions such that the _ideal state_ and _current state_ converge.
+
+Here\'s an example, again with 3 partitions, 2 replicas per partition, and the MasterSlave state model:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "CUSTOMIZED",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Suppose the current state of the system is 'MyResource_0' -> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE,N2:MASTER}. While the application decides which node is MASTER and which is SLAVE, Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters, which violates the MasterSlave constraint that there is exactly one MASTER at a time.  Helix will first issue MASTER-->SLAVE to N1 and after it is completed, it will issue SLAVE-->MASTER to N2. 
+
+### USER_DEFINED
+
+For maximum flexibility, Helix exposes an interface that allows applications to plug in custom rebalancing logic. By providing the name of a class that implements the Rebalancer interface, Helix will automatically invoke it whenever there is a change to the live participants in the cluster. For more, see [User-Defined Rebalancer](./tutorial_user_def_rebalancer.html).
+
+### Backwards Compatibility
+
+In previous versions, FULL_AUTO was called AUTO_REBALANCE and SEMI_AUTO was called AUTO. Furthermore, they were presented as the IDEAL_STATE_MODE. Helix supports both IDEAL_STATE_MODE and REBALANCE_MODE, but IDEAL_STATE_MODE is now deprecated and may be phased out in future versions.
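+
+For example, existing code that set IDEAL_STATE_MODE can move to the REBALANCE_MODE API on the IdealState. A sketch, assuming the resource already exists:
+
+```
+IdealState idealState = helixAdmin.getResourceIdealState(clusterName, resourceName);
+idealState.setRebalanceMode(RebalanceMode.SEMI_AUTO); // formerly IDEAL_STATE_MODE: AUTO
+helixAdmin.setResourceIdealState(clusterName, resourceName, idealState);
+```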

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_spectator.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_spectator.md b/site-releases/trunk/src/site/markdown/tutorial_spectator.md
new file mode 100644
index 0000000..24c1cf4
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_spectator.md
@@ -0,0 +1,76 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Spectator</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Spectator
+
+Next, we\'ll learn how to implement a Spectator.  Typically, a spectator needs to react to changes within the distributed system.  Examples: a client that needs to know where to send a request, a topic consumer in a consumer group.  The spectator is automatically informed of changes in the _external state_ of the cluster, but it does not have to add any code to keep track of other components in the system.
+
+### Start the Helix agent
+
+As with a Participant, the Helix agent is the common component that connects each system component with the controller.
+
+It requires the following parameters:
+
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
+* instanceType: Type of the process. This can be one of the following types, in this case, use SPECTATOR:
+    * CONTROLLER: Process that controls the cluster, any number of controllers can be started but only one will be active at any given time.
+    * PARTICIPANT: Process that performs the actual task in the distributed system.
+    * SPECTATOR: Process that observes the changes in the cluster.
+    * ADMIN: To carry out system admin actions.
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3.
+
+After the Helix manager instance is created, the only thing that needs to be registered is the listener.  When the ExternalView changes, the listener is notified.
+
+### Spectator Code
+
+A spectator observes the cluster and is notified when the state of the system changes. Helix consolidates the state of the entire cluster in one Znode called ExternalView.
+Helix provides a default implementation, RoutingTableProvider, that caches the cluster state and updates it when there is a change in the cluster.
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.SPECTATOR,
+                                                zkConnectString);
+manager.connect();
+RoutingTableProvider routingTableProvider = new RoutingTableProvider();
+manager.addExternalViewChangeListener(routingTableProvider);
+```
+
+In the following code snippet, the application sends the request to a valid instance by interrogating the external view.  Suppose the desired resource for this request is in the partition myDB_1.
+
+```
+// instances = routingTableProvider.getInstances("RESOURCE_NAME", "PARTITION_NAME", "PARTITION_STATE");
+instances = routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
+
+////////////////////////////////////////////////////////////////////////////////////////////////
+// Application-specific code to send a request to one of the instances                        //
+////////////////////////////////////////////////////////////////////////////////////////////////
+
+theInstance = instances.get(0);  // should choose an instance and throw an exception if none are available
+result = theInstance.sendRequest(yourApplicationRequest, responseObject);
+
+```
+
+When the external view changes, the application needs to react by sending requests to a different instance.  
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_state.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_state.md b/site-releases/trunk/src/site/markdown/tutorial_state.md
new file mode 100644
index 0000000..4f7b1b5
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_state.md
@@ -0,0 +1,131 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - State Machine Configuration</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): State Machine Configuration
+
+In this chapter, we\'ll learn about the state models provided by Helix, and how to create your own custom state model.
+
+## State Models
+
+Helix comes with 3 default state models that are commonly used.  It is possible to have multiple state models in a cluster. 
+Every resource that is added should be configured to use a state model that governs its _ideal state_.
+
+### MASTER-SLAVE
+
+* 3 states: OFFLINE, SLAVE, MASTER
+* Maximum number of masters: 1
+* Slaves are based on the replication factor. The replication factor can be specified while adding the resource.
+
+
+### ONLINE-OFFLINE
+
+* Has 2 states: OFFLINE and ONLINE.  This simple state model is a good starting point for most applications.
+
+### LEADER-STANDBY
+
+* 1 Leader and multiple stand-bys.  The idea is that exactly one leader accomplishes a designated task, while the stand-bys are ready to take over if the leader fails.
+
+## Constraints
+
+In addition to the state machine configuration, one can specify the constraints of states and transitions.
+
+For example, one can say:
+
+* MASTER:1
+<br/>Maximum number of replicas in MASTER state at any time is 1
+
+* OFFLINE-SLAVE:5 
+<br/>Maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5 in this example.
+
+### Dynamic State Constraints
+
+We also support two dynamic upper bounds for the number of replicas in each state:
+
+* N: The number of replicas in the state is at most the number of live participants in the cluster
+* R: The number of replicas in the state is at most the specified replica count for the partition
+
+### State Priority
+
+Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 MASTER and 2 SLAVES, but only 1 node is active, Helix must promote it to MASTER. This behavior is achieved by providing the state priority list as \[MASTER, SLAVE\].
+
+### State Transition Priority
+
+Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. You can control this by overriding the priority order.
+
+## Special States
+
+### DROPPED
+
+The DROPPED state is used to signify a replica that was served by a given participant, but is no longer served. This allows Helix and its participants to effectively clean up. There are two requirements that every new state model should follow with respect to the DROPPED state:
+
+* The DROPPED state must be defined
+* There must be a path to DROPPED for every state in the model
+
+### ERROR
+
+The ERROR state is used whenever the participant serving a partition encountered an error and cannot continue to serve the partition. HelixAdmin has \"reset\" functionality to allow for participants to recover from the ERROR state.
+
+## Annotated Example
+
+Below is a complete definition of a Master-Slave state model. Notice the fields marked REQUIRED; these are essential for any state model definition.
+
+```
+StateModelDefinition stateModel = new StateModelDefinition.Builder("MasterSlave")
+  // OFFLINE is the state that the system starts in (initial state is REQUIRED)
+  .initialState("OFFLINE")
+
+  // Lowest number here indicates highest priority, no value indicates lowest priority
+  .addState("MASTER", 1)
+  .addState("SLAVE", 2)
+  .addState("OFFLINE")
+
+  // Note the special inclusion of the DROPPED state (REQUIRED)
+  .addState(HelixDefinedState.DROPPED.toString())
+
+  // No more than one master allowed
+  .upperBound("MASTER", 1)
+
+  // R indicates an upper bound of number of replicas for each partition
+  .dynamicUpperBound("SLAVE", "R")
+
+  // Add some high-priority transitions
+  .addTransition("SLAVE", "MASTER", 1)
+  .addTransition("OFFLINE", "SLAVE", 2)
+
+  // Using the same priority value indicates that these transitions can fire in any order
+  .addTransition("MASTER", "SLAVE", 3)
+  .addTransition("SLAVE", "OFFLINE", 3)
+
+  // Not specifying a value defaults to lowest priority
+  // Notice the inclusion of the OFFLINE to DROPPED transition
+  // Since every state has a path to OFFLINE, they each now have a path to DROPPED (REQUIRED)
+  .addTransition("OFFLINE", HelixDefinedState.DROPPED.toString())
+
+  // Create the StateModelDefinition instance
+  .build();
+
+// Use the isValid() function to make sure the StateModelDefinition will work without issues
+Assert.assertTrue(stateModel.isValid());
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_throttling.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_throttling.md b/site-releases/trunk/src/site/markdown/tutorial_throttling.md
new file mode 100644
index 0000000..7417979
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_throttling.md
@@ -0,0 +1,38 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Throttling</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Throttling
+
+In this chapter, we\'ll learn how to control the parallel execution of cluster tasks.  Only a centralized cluster manager with global knowledge is capable of coordinating this decision.
+
+### Throttling
+
+Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some transitions may be lightweight, but others might involve moving data, which is quite expensive from a network and IOPS perspective.
+
+Helix allows applications to set a threshold on transitions. The threshold can be set at multiple scopes (see the sketch following this list):
+
+* MessageType e.g. STATE_TRANSITION
+* TransitionType e.g. SLAVE-MASTER
+* Resource e.g. database
+* Node i.e. per-node maximum transitions in parallel
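+
+Below is a sketch of how such a threshold might be registered through HelixAdmin. The ConstraintItemBuilder API and the attribute names shown here are assumptions based on Helix's message-constraint model, and the limit of 3 is illustrative:
+
+```
+HelixAdmin admin = new ZKHelixAdmin(zkAddr);
+
+// Allow at most 3 concurrent state transition messages per instance
+ConstraintItemBuilder builder = new ConstraintItemBuilder();
+builder.addConstraintAttribute("MESSAGE_TYPE", "STATE_TRANSITION")
+       .addConstraintAttribute("INSTANCE", ".*")
+       .addConstraintAttribute("CONSTRAINT_VALUE", "3");
+
+admin.setConstraint(clusterName, ConstraintType.MESSAGE_CONSTRAINT,
+    "ThrottleStateTransitions", builder.build());
+```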
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_user_def_rebalancer.md b/site-releases/trunk/src/site/markdown/tutorial_user_def_rebalancer.md
new file mode 100644
index 0000000..6246f68
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_user_def_rebalancer.md
@@ -0,0 +1,227 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - User-Defined Rebalancing</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): User-Defined Rebalancing
+
+Even though Helix can compute both the location and the state of replicas internally using a default fully-automatic rebalancer, specific applications may require rebalancing strategies that optimize for different requirements. Helix therefore allows applications to plug in arbitrary rebalancer algorithms that implement a provided interface. One of the main design goals of Helix is to provide maximum flexibility to any distributed application, so it lets applications fully implement the rebalancer, which is the core constraint solver in the system, if the application developer so chooses.
+
+Whenever the state of the cluster changes, as is the case when participants join or leave the cluster, Helix automatically calls the rebalancer to compute a new mapping of all the replicas in the resource. When using a pluggable rebalancer, the only required step is to register it with Helix. Subsequently, no additional bootstrapping steps are necessary. Helix uses reflection to look up and load the class dynamically at runtime. As a result, it is also technically possible to change the rebalancing strategy used at any time.
+
+The [HelixRebalancer](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html) interface is as follows:
+
+```
+public void init(HelixManager helixManager);
+
+public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConfig, Cluster cluster,
+    ResourceCurrentState currentState);
+```
+The first parameter is a configuration of the resource to rebalance, the second is a full cache of all of the cluster data available to Helix, and the third is a snapshot of the actual current placements and state assignments. From the cluster variable, it is also possible to access the ResourceAssignment last generated by this rebalancer. Internally, Helix implements the same interface for its own rebalancing routines, so a user-defined rebalancer will be cognizant of the same information about the cluster as an internal implementation. Helix strives to provide applications the ability to implement algorithms that may require a large portion of the entire state of the cluster to make the best placement and state assignment decisions possible.
+
+A ResourceAssignment is a full representation of the location and the state of each replica of each partition of a given resource. This is a simple representation of the placement that the algorithm believes is the best possible. If the placement meets all defined constraints, this is what will become the actual state of the distributed system.
+
+### Rebalancer Context
+
+Helix provides an interface called [RebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/RebalancerContext.html). For each of the four main [rebalancing modes](./tutorial_rebalance.html), there is a base class called [PartitionedRebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/PartitionedRebalancerContext.html), which contains all of the basic properties required for a partitioned resource. Helix provides three derived classes for PartitionedRebalancerContext: FullAutoRebalancerContext, SemiAutoRebalancerContext, and CustomizedRebalancerContext. If none of these work for your application, you can create your own class that extends PartitionedRebalancerContext (or even one that only implements RebalancerContext).
+
+### Specifying a Rebalancer
+
+#### Using Logical Accessors
+To specify the rebalancer, one can use ```PartitionedRebalancerContext#setRebalancerRef(RebalancerRef)``` to set the implementation class of the rebalancer. For example, here's a PartitionedRebalancerContext constructed with a user-specified class:
+
+```
+RebalancerRef rebalancerRef = RebalancerRef.from(className);
+PartitionedRebalancerContext rebalanceContext =
+    new PartitionedRebalancerContext.Builder(resourceId).replicaCount(1).addPartition(partition1)
+        .addPartition(partition2).stateModelDefId(stateModelDef.getStateModelDefId())
+        .rebalancerRef(rebalancerRef).build();
+```
+
+The class name is a fully-qualified class name consisting of its package and its name, and the class should implement the Rebalancer interface. Now, the context can be added to a ResourceConfig through ```ResourceConfig.Builder#rebalancerContext(RebalancerContext)``` and the context will automatically be made available to the rebalancer for all subsequent executions.
+
+#### Using HelixAdmin
+For implementations that set up the cluster through existing code, the following HelixAdmin calls will update the Rebalancer class:
+
+```
+IdealState idealState = helixAdmin.getResourceIdealState(clusterName, resourceName);
+idealState.setRebalanceMode(RebalanceMode.USER_DEFINED);
+idealState.setRebalancerClassName(className);
+helixAdmin.setResourceIdealState(clusterName, resourceName, idealState);
+```
+There are two key fields to set to specify that a pluggable rebalancer should be used. First, the rebalance mode should be set to USER_DEFINED, and second, the rebalancer class name should be set to a class that implements Rebalancer and is available at runtime. The class name is a fully-qualified class name consisting of its package and its name.
+
+#### Using YAML
+Alternatively, the rebalancer class name can be specified in a YAML file representing the cluster configuration. The requirements are the same, but the representation is more compact. Below are the first few lines of an example YAML file. To see a full YAML specification, see the [YAML tutorial](./tutorial_yaml.html).
+
+```
+clusterName: lock-manager-custom-rebalancer # unique name for the cluster
+resources:
+  - name: lock-group # unique resource name
+    rebalancer: # we will provide our own rebalancer
+      mode: USER_DEFINED
+      class: domain.project.helix.rebalancer.UserDefinedRebalancerClass
+...
+```
+
+### Example
+We demonstrate plugging in a simple user-defined rebalancer as part of a revisit of the [distributed lock manager](./recipes/user_def_rebalancer.html) example. It includes a functional Rebalancer implementation, as well as the entire YAML file used to define the cluster.
+
+Consider the case where partitions are locks in a lock manager and 6 locks are to be distributed evenly to a set of participants, and only one participant can hold each lock. We can define a rebalancing algorithm that simply takes the modulus of the lock number and the number of participants to evenly distribute the locks across participants. Helix allows capping the number of partitions a participant can accept, but since locks are lightweight, we do not need to define a restriction in this case. The following is a succinct implementation of this algorithm.
+
+```
+@Override
+public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConfig, Cluster cluster,
+    ResourceCurrentState currentState) {
+  // Get the rebalancer context (a basic partitioned one)
+  PartitionedRebalancerContext context = rebalancerConfig.getRebalancerContext(
+      PartitionedRebalancerContext.class);
+
+  // Initialize an empty mapping of locks to participants
+  ResourceAssignment assignment = new ResourceAssignment(context.getResourceId());
+
+  // Get the list of live participants in the cluster
+  List<ParticipantId> liveParticipants = new ArrayList<ParticipantId>(
+      cluster.getLiveParticipantMap().keySet());
+
+  // Get the state model (should be a simple lock/unlock model) and the highest-priority state
+  StateModelDefId stateModelDefId = context.getStateModelDefId();
+  StateModelDefinition stateModelDef = cluster.getStateModelMap().get(stateModelDefId);
+  if (stateModelDef.getStatesPriorityList().size() < 1) {
+    LOG.error("Invalid state model definition. There should be at least one state.");
+    return assignment;
+  }
+  State lockState = stateModelDef.getTypedStatesPriorityList().get(0);
+
+  // Count the number of participants allowed to lock each lock
+  String stateCount = stateModelDef.getNumParticipantsPerState(lockState);
+  int lockHolders = 0;
+  try {
+    // a numeric value is a custom-specified number of participants allowed to lock the lock
+    lockHolders = Integer.parseInt(stateCount);
+  } catch (NumberFormatException e) {
+    LOG.error("Invalid state model definition. The lock state does not have a valid count");
+    return assignment;
+  }
+
+  // Fairly assign the lock state to the participants using a simple mod-based sequential
+  // assignment. For instance, if each lock can be held by 3 participants, lock 0 would be held
+  // by participants (0, 1, 2), lock 1 would be held by (1, 2, 3), and so on, wrapping around the
+  // number of participants as necessary.
+  // This assumes a simple lock-unlock model where the only state of interest is which nodes have
+  // acquired each lock.
+  int i = 0;
+  for (PartitionId partition : context.getPartitionSet()) {
+    Map<ParticipantId, State> replicaMap = new HashMap<ParticipantId, State>();
+    for (int j = i; j < i + lockHolders; j++) {
+      int participantIndex = j % liveParticipants.size();
+      ParticipantId participant = liveParticipants.get(participantIndex);
+      // enforce that a participant can only have one instance of a given lock
+      if (!replicaMap.containsKey(participant)) {
+        replicaMap.put(participant, lockState);
+      }
+    }
+    assignment.addReplicaMap(partition, replicaMap);
+    i++;
+  }
+  return assignment;
+}
+```
+
+Here is the ResourceAssignment emitted by the user-defined rebalancer for a 3-participant system whenever there is a change to the set of participants.
+
+* Participant_A joins
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_A": "LOCKED"},
+  "lock_2": { "Participant_A": "LOCKED"},
+  "lock_3": { "Participant_A": "LOCKED"},
+  "lock_4": { "Participant_A": "LOCKED"},
+  "lock_5": { "Participant_A": "LOCKED"},
+}
+```
+
+A ResourceAssignment is, for each resource, a mapping from each partition to the participants serving its replicas and the state of each replica. The state model is a simple LOCKED/RELEASED model, so participant A holds all lock partitions in the LOCKED state.
+
+* Participant_B joins
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_B": "LOCKED"},
+  "lock_2": { "Participant_A": "LOCKED"},
+  "lock_3": { "Participant_B": "LOCKED"},
+  "lock_4": { "Participant_A": "LOCKED"},
+  "lock_5": { "Participant_B": "LOCKED"},
+}
+```
+
+Now that there are two participants, the simple mod-based function assigns every other lock to the second participant. On any system change, the rebalancer is invoked so that the application can define how to redistribute its resources.
+
+* Participant_C joins (steady state)
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_B": "LOCKED"},
+  "lock_2": { "Participant_C": "LOCKED"},
+  "lock_3": { "Participant_A": "LOCKED"},
+  "lock_4": { "Participant_B": "LOCKED"},
+  "lock_5": { "Participant_C": "LOCKED"},
+}
+```
+
+This is the steady state of the system. Notice that four of the six locks now have a different owner. That is because of the naïve modulus-based assignment approach used by the user-defined rebalancer. However, the interface is flexible enough to allow you to employ consistent hashing or any other scheme if minimal movement is a system requirement.
+
+* Participant_B fails
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_C": "LOCKED"},
+  "lock_2": { "Participant_A": "LOCKED"},
+  "lock_3": { "Participant_C": "LOCKED"},
+  "lock_4": { "Participant_A": "LOCKED"},
+  "lock_5": { "Participant_C": "LOCKED"},
+}
+```
+
+On any node failure, as in the case of node addition, the rebalancer is invoked automatically so that it can generate a new mapping as a response to the change. Helix ensures that the Rebalancer has the opportunity to reassign locks as required by the application.
+
+* Participant_B (or the replacement for the original Participant_B) rejoins
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_B": "LOCKED"},
+  "lock_2": { "Participant_C": "LOCKED"},
+  "lock_3": { "Participant_A": "LOCKED"},
+  "lock_4": { "Participant_B": "LOCKED"},
+  "lock_5": { "Participant_C": "LOCKED"},
+}
+```
+
+The rebalancer was invoked once again and the resulting ResourceAssignment reflects the steady state.
+
+### Caveats
+- The rebalancer class must be available at runtime, or else Helix will not attempt to rebalance at all
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_yaml.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_yaml.md b/site-releases/trunk/src/site/markdown/tutorial_yaml.md
new file mode 100644
index 0000000..0f8e0cc
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_yaml.md
@@ -0,0 +1,102 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - YAML Cluster Setup</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): YAML Cluster Setup
+
+As an alternative to using Helix Admin to set up the cluster, its resources, constraints, and the state model, Helix supports bootstrapping a cluster configuration based on a YAML file. Below is an annotated example of such a file for a simple distributed lock manager where a lock can only be LOCKED or RELEASED, and each lock only allows a single participant to hold it in the LOCKED state.
+
+```
+clusterName: lock-manager-custom-rebalancer # unique name for the cluster (required)
+resources:
+  - name: lock-group # unique resource name (required)
+    rebalancer: # required
+      mode: USER_DEFINED # required - USER_DEFINED means we will provide our own rebalancer
+      class: org.apache.helix.userdefinedrebalancer.LockManagerRebalancer # required for USER_DEFINED
+    partitions:
+      count: 12 # number of partitions for the resource (default is 1)
+      replicas: 1 # number of replicas per partition (default is 1)
+    stateModel:
+      name: lock-unlock # model name (required)
+      states: [LOCKED, RELEASED, DROPPED] # the list of possible states (required if model not built-in)
+      transitions: # the list of possible transitions (required if model not built-in)
+        - name: Unlock
+          from: LOCKED
+          to: RELEASED
+        - name: Lock
+          from: RELEASED
+          to: LOCKED
+        - name: DropLock
+          from: LOCKED
+          to: DROPPED
+        - name: DropUnlock
+          from: RELEASED
+          to: DROPPED
+        - name: Undrop
+          from: DROPPED
+          to: RELEASED
+      initialState: RELEASED # (required if model not built-in)
+    constraints:
+      state:
+        counts: # maximum number of replicas of a partition that can be in each state (required if model not built-in)
+          - name: LOCKED
+            count: "1"
+          - name: RELEASED
+            count: "-1"
+          - name: DROPPED
+            count: "-1"
+        priorityList: [LOCKED, RELEASED, DROPPED] # states in order of priority (all priorities equal if not specified)
+      transition: # transitions priority to enforce order that transitions occur
+        priorityList: [Unlock, Lock, Undrop, DropUnlock, DropLock] # all priorities equal if not specified
+participants: # list of nodes that can serve replicas (optional if dynamic joining is active, required otherwise)
+  - name: localhost_12001
+    host: localhost
+    port: 12001
+  - name: localhost_12002
+    host: localhost
+    port: 12002
+  - name: localhost_12003
+    host: localhost
+    port: 12003
+```
+
+Using a file like the one above, the cluster can be set up either with the command line:
+
+```
+incubator-helix/helix-core/target/helix-core/pkg/bin/YAMLClusterSetup.sh localhost:2199 lock-manager-config.yaml
+```
+
+or with code:
+
+```
+YAMLClusterSetup setup = new YAMLClusterSetup(zkAddress);
+InputStream input =
+    Thread.currentThread().getContextClassLoader()
+        .getResourceAsStream("lock-manager-config.yaml");
+YAMLClusterSetup.YAMLClusterConfig config = setup.setupCluster(input);
+```
+
+Some notes:
+
+- A rebalancer class is only required for the USER_DEFINED mode. It is ignored otherwise.
+
+- Built-in state models, like OnlineOffline, LeaderStandby, and MasterSlave, or state models that have already been added, only require a name for stateModel. If partition and/or replica counts are not provided, a value of 1 is assumed.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/.htaccess
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/.htaccess b/site-releases/trunk/src/site/resources/.htaccess
new file mode 100644
index 0000000..d5c7bf3
--- /dev/null
+++ b/site-releases/trunk/src/site/resources/.htaccess
@@ -0,0 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+Redirect /download.html /download.cgi

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/download.cgi
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/download.cgi b/site-releases/trunk/src/site/resources/download.cgi
new file mode 100644
index 0000000..f9a0e30
--- /dev/null
+++ b/site-releases/trunk/src/site/resources/download.cgi
@@ -0,0 +1,22 @@
+#!/bin/sh
+# Just call the standard mirrors.cgi script. It will use download.html
+# as the input template.
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+exec /www/www.apache.org/dyn/mirrors/mirrors.cgi $*

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/images/HELIX-components.png
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/images/HELIX-components.png b/site-releases/trunk/src/site/resources/images/HELIX-components.png
new file mode 100644
index 0000000..c0c35ae
Binary files /dev/null and b/site-releases/trunk/src/site/resources/images/HELIX-components.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/images/PFS-Generic.png
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/images/PFS-Generic.png b/site-releases/trunk/src/site/resources/images/PFS-Generic.png
new file mode 100644
index 0000000..7eea3a0
Binary files /dev/null and b/site-releases/trunk/src/site/resources/images/PFS-Generic.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/images/RSYNC_BASED_PFS.png
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/images/RSYNC_BASED_PFS.png b/site-releases/trunk/src/site/resources/images/RSYNC_BASED_PFS.png
new file mode 100644
index 0000000..0cc55ae
Binary files /dev/null and b/site-releases/trunk/src/site/resources/images/RSYNC_BASED_PFS.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/images/bootstrap_statemodel.gif
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/images/bootstrap_statemodel.gif b/site-releases/trunk/src/site/resources/images/bootstrap_statemodel.gif
new file mode 100644
index 0000000..b8f8a42
Binary files /dev/null and b/site-releases/trunk/src/site/resources/images/bootstrap_statemodel.gif differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/images/helix-architecture.png
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/images/helix-architecture.png b/site-releases/trunk/src/site/resources/images/helix-architecture.png
new file mode 100644
index 0000000..6f69a2d
Binary files /dev/null and b/site-releases/trunk/src/site/resources/images/helix-architecture.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/images/helix-logo.jpg
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/images/helix-logo.jpg b/site-releases/trunk/src/site/resources/images/helix-logo.jpg
new file mode 100644
index 0000000..d6428f6
Binary files /dev/null and b/site-releases/trunk/src/site/resources/images/helix-logo.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/images/helix-znode-layout.png
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/images/helix-znode-layout.png b/site-releases/trunk/src/site/resources/images/helix-znode-layout.png
new file mode 100644
index 0000000..5bafc45
Binary files /dev/null and b/site-releases/trunk/src/site/resources/images/helix-znode-layout.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/images/statemachine.png
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/images/statemachine.png b/site-releases/trunk/src/site/resources/images/statemachine.png
new file mode 100644
index 0000000..43d27ec
Binary files /dev/null and b/site-releases/trunk/src/site/resources/images/statemachine.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/resources/images/system.png
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/resources/images/system.png b/site-releases/trunk/src/site/resources/images/system.png
new file mode 100644
index 0000000..f8a05c8
Binary files /dev/null and b/site-releases/trunk/src/site/resources/images/system.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/site.xml
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/site.xml b/site-releases/trunk/src/site/site.xml
new file mode 100644
index 0000000..52b9f8a
--- /dev/null
+++ b/site-releases/trunk/src/site/site.xml
@@ -0,0 +1,118 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project name="Apache Helix">
+  <bannerLeft>
+    <src>images/helix-logo.jpg</src>
+    <href>http://helix.incubator.apache.org/site-releases/trunk-site</href>
+  </bannerLeft>
+  <bannerRight>
+    <src>http://incubator.apache.org/images/egg-logo.png</src>
+    <href>http://incubator.apache.org/</href>
+  </bannerRight>
+  <version position="none"/>
+
+  <publishDate position="right"/>
+
+  <skin>
+    <groupId>org.apache.maven.skins</groupId>
+    <artifactId>maven-fluido-skin</artifactId>
+    <version>1.3.0</version>
+  </skin>
+
+  <body>
+
+    <head>
+      <script type="text/javascript">
+
+        var _gaq = _gaq || [];
+        _gaq.push(['_setAccount', 'UA-3211522-12']);
+        _gaq.push(['_trackPageview']);
+
+        (function() {
+        var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+        var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+        })();
+
+      </script>
+
+    </head>
+
+    <breadcrumbs position="left">
+      <item name="Apache Helix" href="http://helix.incubator.apache.org/"/>
+      <item name="trunk" href="http://helix.incubator.apache.org/site-releases/trunk-site/"/>
+    </breadcrumbs>
+
+    <menu name="Apache Helix">
+      <item name="Home" href="../../index.html"/>
+    </menu>
+
+    <menu name="Helix Trunk">
+      <item name="Introduction" href="./index.html"/>
+      <item name="Getting Helix" href="./Building.html"/>
+      <item name="Core concepts" href="./Concepts.html"/>
+      <item name="Architecture" href="./Architecture.html"/>
+      <item name="Quick Start" href="./Quickstart.html"/>
+      <item name="Tutorial" href="./Tutorial.html"/>
+    </menu>
+
+    <menu name="Recipes">
+      <item name="Distributed lock manager" href="./recipes/lock_manager.html"/>
+      <item name="Rabbit MQ consumer group" href="./recipes/rabbitmq_consumer_group.html"/>
+      <item name="Rsync replicated file store" href="./recipes/rsync_replicated_file_store.html"/>
+      <item name="Service Discovery" href="./recipes/service_discovery.html"/>
+      <item name="Distributed task DAG Execution" href="./recipes/task_dag_execution.html"/>
+      <item name="User-defined rebalancer" href="./recipes/user_def_rebalancer.html"/>
+    </menu>
+<!--
+    <menu ref="reports" inherit="bottom"/>
+    <menu ref="modules" inherit="bottom"/>
+
+
+    <menu name="ASF">
+      <item name="How Apache Works" href="http://www.apache.org/foundation/how-it-works.html"/>
+      <item name="Foundation" href="http://www.apache.org/foundation/"/>
+      <item name="Sponsoring Apache" href="http://www.apache.org/foundation/sponsorship.html"/>
+      <item name="Thanks" href="http://www.apache.org/foundation/thanks.html"/>
+    </menu>
+-->
+    <footer>
+      <div class="row span16"><div>Apache Helix, Apache, the Apache feather logo, and the Apache Helix project logos are trademarks of The Apache Software Foundation.
+        All other marks mentioned may be trademarks or registered trademarks of their respective owners.</div>
+        <a href="${project.url}/privacy-policy.html">Privacy Policy</a>
+      </div>
+    </footer>
+
+
+  </body>
+
+  <custom>
+    <fluidoSkin>
+      <topBarEnabled>true</topBarEnabled>
+      <!-- twitter link works only with sidebar disabled -->
+      <sideBarEnabled>true</sideBarEnabled>
+      <googleSearch></googleSearch>
+      <twitter>
+        <user>ApacheHelix</user>
+        <showUser>true</showUser>
+        <showFollowers>false</showFollowers>
+      </twitter>
+    </fluidoSkin>
+  </custom>
+
+</project>


[02/16] [HELIX-270] Include documentation for previous version on the website

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/recipes/lock_manager.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/recipes/lock_manager.md b/src/site/markdown/recipes/lock_manager.md
deleted file mode 100644
index 252ace7..0000000
--- a/src/site/markdown/recipes/lock_manager.md
+++ /dev/null
@@ -1,253 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-Distributed lock manager
-------------------------
-Distributed locks are used to synchronize access to shared resources. Most applications use ZooKeeper to model distributed locks.
-
-The simplest way to model a lock using ZooKeeper is as follows (see the ZooKeeper leader recipe for an exact and more advanced solution, and the sketch below for a minimal illustration):
-
-* Each process tries to create an ephemeral znode.
-* If it can create the znode successfully, it acquires the lock.
-* Otherwise, it watches the znode and tries to acquire the lock again when the current lock holder disappears.
-
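-A minimal sketch of this naive approach, using the raw ZooKeeper client (the lock path and the retry wiring are illustrative, not part of this recipe):
-
-```
-import org.apache.zookeeper.*;
-
-public class NaiveZkLock
-{
-  private final ZooKeeper zk;
-  private final String lockPath;
-
-  public NaiveZkLock(ZooKeeper zk, String lockPath)
-  {
-    this.zk = zk;
-    this.lockPath = lockPath;
-  }
-
-  /** Returns true if the lock was acquired; otherwise leaves a watch on the znode. */
-  public boolean tryLock(Watcher lockReleasedWatcher) throws Exception
-  {
-    try
-    {
-      // Ephemeral: the znode disappears if this session dies, releasing the lock
-      zk.create(lockPath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
-      return true;
-    }
-    catch (KeeperException.NodeExistsException e)
-    {
-      // Someone else holds the lock; watch for its deletion and retry on the event
-      zk.exists(lockPath, lockReleasedWatcher);
-      return false;
-    }
-  }
-}
-```
-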
-This is good enough if there is only one lock. But in practice, an application will need many such locks, and distributing and managing them among different processes becomes challenging. Extending such a solution to many locks will result in:
-
-* Uneven distribution of locks among nodes: the node that starts first will acquire all the locks, and nodes that start later will be idle.
-* When a node fails, how its locks will be distributed among the remaining nodes is not predictable.
-* When new nodes are added, the current nodes don't relinquish any locks, so the new nodes cannot acquire any.
-
-In other words, we want a system that satisfies the following requirements:
-
-* Distribute locks evenly among all nodes to get better hardware utilization
-* If a node fails, the locks that were acquired by that node should be evenly distributed among other nodes
-* If nodes are added, locks must be evenly re-distributed among nodes.
-
-Helix provides a simple and elegant solution to this problem. Simply specify the number of locks and Helix will ensure that the above constraints are satisfied.
-
-To quickly see this working, run the lock-manager-demo script, where 12 locks are evenly distributed among three nodes, and when a node fails, its locks are re-distributed among the remaining two nodes. Note that Helix does not re-shuffle the locks completely; instead, it simply distributes the locks relinquished by the dead node evenly among the 2 remaining nodes.
-
-----------------------------------------------------------------------------------------
-
-#### Short version
- This version starts multiple threads within the same process to simulate a multi-node deployment. Try the long version to get a better idea of how it works.
- 
-```
-git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-cd incubator-helix
-mvn clean install package -DskipTests
-cd recipes/distributed-lock-manager/target/distributed-lock-manager-pkg/bin
-chmod +x *
-./lock-manager-demo
-```
-
-##### Output
-
-```
-./lock-manager-demo 
-STARTING localhost_12000
-STARTING localhost_12002
-STARTING localhost_12001
-STARTED localhost_12000
-STARTED localhost_12002
-STARTED localhost_12001
-localhost_12001 acquired lock:lock-group_3
-localhost_12000 acquired lock:lock-group_8
-localhost_12001 acquired lock:lock-group_2
-localhost_12001 acquired lock:lock-group_4
-localhost_12002 acquired lock:lock-group_1
-localhost_12002 acquired lock:lock-group_10
-localhost_12000 acquired lock:lock-group_7
-localhost_12001 acquired lock:lock-group_5
-localhost_12002 acquired lock:lock-group_11
-localhost_12000 acquired lock:lock-group_6
-localhost_12002 acquired lock:lock-group_0
-localhost_12000 acquired lock:lock-group_9
-lockName    acquired By
-======================================
-lock-group_0    localhost_12002
-lock-group_1    localhost_12002
-lock-group_10    localhost_12002
-lock-group_11    localhost_12002
-lock-group_2    localhost_12001
-lock-group_3    localhost_12001
-lock-group_4    localhost_12001
-lock-group_5    localhost_12001
-lock-group_6    localhost_12000
-lock-group_7    localhost_12000
-lock-group_8    localhost_12000
-lock-group_9    localhost_12000
-Stopping localhost_12000
-localhost_12000 Interrupted
-localhost_12001 acquired lock:lock-group_9
-localhost_12001 acquired lock:lock-group_8
-localhost_12002 acquired lock:lock-group_6
-localhost_12002 acquired lock:lock-group_7
-lockName    acquired By
-======================================
-lock-group_0    localhost_12002
-lock-group_1    localhost_12002
-lock-group_10    localhost_12002
-lock-group_11    localhost_12002
-lock-group_2    localhost_12001
-lock-group_3    localhost_12001
-lock-group_4    localhost_12001
-lock-group_5    localhost_12001
-lock-group_6    localhost_12002
-lock-group_7    localhost_12002
-lock-group_8    localhost_12001
-lock-group_9    localhost_12001
-
-```
-
-----------------------------------------------------------------------------------------
-
-#### Long version
-This provides more details on how to set up the cluster and where to plug in application code.
-
-##### start zookeeper
-
-```
-./start-standalone-zookeeper 2199
-```
-
-##### Create a cluster
-
-```
-./helix-admin --zkSvr localhost:2199 --addCluster lock-manager-demo
-```
-
-##### Create a lock group
-
-Create a lock group and specify the number of locks in the lock group. 
-
-```
-./helix-admin --zkSvr localhost:2199  --addResource lock-manager-demo lock-group 6 OnlineOffline FULL_AUTO
-```
-
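-This step can also be done programmatically. A minimal sketch using the ZKHelixAdmin API, mirroring the CLI command above:
-
-```
-import org.apache.helix.manager.zk.ZKHelixAdmin;
-
-ZKHelixAdmin admin = new ZKHelixAdmin("localhost:2199");
-admin.addResource("lock-manager-demo", "lock-group", 6, "OnlineOffline", "FULL_AUTO");
-```
-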
-##### Start the nodes
-
-Create a Lock class that handles the callbacks. 
-
-```
-import org.apache.helix.NotificationContext;
-import org.apache.helix.model.Message;
-import org.apache.helix.participant.statemachine.StateModel;
-
-public class Lock extends StateModel
-{
-  private String lockName;
-
-  public Lock(String lockName)
-  {
-    this.lockName = lockName;
-  }
-
-  public void lock(Message m, NotificationContext context)
-  {
-    System.out.println(" acquired lock:"+ lockName );
-  }
-
-  public void release(Message m, NotificationContext context)
-  {
-    System.out.println(" releasing lock:"+ lockName );
-  }
-
-}
-
-```
-
-A LockFactory that creates the Lock instances:
- 
-```
-import org.apache.helix.participant.statemachine.StateModelFactory;
-
-public class LockFactory extends StateModelFactory<Lock>
-{
-  /* Instantiates the lock handler, one per lockName */
-  @Override
-  public Lock createNewStateModel(String lockName)
-  {
-    return new Lock(lockName);
-  }
-}
-```
-
-At node startup, simply join the cluster, and Helix will invoke the appropriate callbacks on the Lock instances. One can start any number of nodes; Helix detects that a new node has joined the cluster and re-distributes the locks automatically.
-
-```
-import org.apache.helix.HelixManager;
-import org.apache.helix.HelixManagerFactory;
-import org.apache.helix.InstanceType;
-import org.apache.helix.manager.zk.ZKHelixAdmin;
-import org.apache.helix.model.InstanceConfig;
-
-public class LockProcess
-{
-  public static void main(String[] args) throws Exception
-  {
-    String zkAddress = "localhost:2199";
-    String clusterName = "lock-manager-demo";
-    // Give a unique id to each process; the most commonly used format is hostname_port
-    String instanceName = "localhost_12000";
-    ZKHelixAdmin admin = new ZKHelixAdmin(zkAddress);
-    // configure the instance and provide some metadata
-    InstanceConfig config = new InstanceConfig(instanceName);
-    config.setHostName("localhost");
-    config.setPort("12000");
-    admin.addInstance(clusterName, config);
-    // join the cluster
-    HelixManager manager =
-        HelixManagerFactory.getZKHelixManager(clusterName,
-                                              instanceName,
-                                              InstanceType.PARTICIPANT,
-                                              zkAddress);
-    manager.getStateMachineEngine().registerStateModelFactory("OnlineOffline", new LockFactory());
-    manager.connect();
-    Thread.currentThread().join();
-  }
-}
-```
-
-##### Start the controller
-
-The controller can be started either as a separate process or embedded within each node process.
-
-###### Separate process
-This is recommended when the number of nodes in the cluster exceeds 100. For fault tolerance, you can run multiple controllers on different boxes.
-
-```
-./run-helix-controller --zkSvr localhost:2199 --cluster lock-manager-demo 2>&1 > /tmp/controller.log &
-```
-
-###### Embedded within the node process
-This is recommended when the number of nodes in the cluster is less than 100. To start a controller from each process, simply add the following lines to LockProcess:
-
-```
-public class LockProcess
-{
-  public static void main(String[] args) throws Exception
-  {
-    String zkAddress = "localhost:2199";
-    String clusterName = "lock-manager-demo";
-    // ...
-    manager.connect();
-    HelixManager controller =
-        HelixControllerMain.startHelixController(zkAddress,
-                                                 clusterName,
-                                                 "controller",
-                                                 HelixControllerMain.STANDALONE);
-    Thread.currentThread().join();
-  }
-}
-```
-
-----------------------------------------------------------------------------------------
-
-
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/recipes/rabbitmq_consumer_group.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/recipes/rabbitmq_consumer_group.md b/src/site/markdown/recipes/rabbitmq_consumer_group.md
deleted file mode 100644
index 9edc2cb..0000000
--- a/src/site/markdown/recipes/rabbitmq_consumer_group.md
+++ /dev/null
@@ -1,227 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-
-RabbitMQ Consumer Group
-=======================
-
-[RabbitMQ](http://www.rabbitmq.com/) is well-known open-source software that provides robust messaging for applications.
-
-One of the commonly implemented recipes using this software is a work queue.  http://www.rabbitmq.com/tutorials/tutorial-four-java.html describes the use case where
-
-* A producer sends a message with a routing key. 
-* The message is routed to the queue whose binding key exactly matches the routing key of the message.	
-* There are multiple consumers, and each consumer is interested in processing only a subset of the messages, by binding to the keys it is interested in
-
-The example provided [here](http://www.rabbitmq.com/tutorials/tutorial-four-java.html) describes how multiple consumers can be started to process all the messages.
-
-While this works, in production systems one needs the following:
-
-* Ability to handle failures: when a consumer fails, another consumer must be started, or the remaining consumers must process the messages that would have been processed by the failed consumer.
-* When the existing consumers cannot keep up with the task generation rate, new consumers will be added. The tasks must be redistributed among all the consumers.
-
-In this recipe, we demonstrate handling of consumer failures and new consumer additions using Helix.
-
-Mapping this use case to Helix is pretty easy, as the binding key/routing key is equivalent to a partition.
-
-Let's take an example. Say the queue has 6 partitions, and we have 2 consumers to process all the queues.
-What we want is for the 6 queues to be evenly divided between the 2 consumers.
-Eventually, when the system scales, we add a third consumer to keep up. Each consumer then processes tasks from 2 queues.
-Now let's say a consumer fails, reducing the number of active consumers to 2. Each consumer must then process 3 queues.
-
-We showcase how such a dynamic application can be developed using Helix. Even though we use RabbitMQ as the pub/sub system, one can extend this solution to other pub/sub systems.
-
-Try it
-======
-
-```
-git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-cd incubator-helix
-mvn clean install package -DskipTests
-export HELIX_PKG_ROOT=`pwd`/helix-core/target/helix-core-pkg
-export HELIX_RABBITMQ_ROOT=`pwd`/recipes/rabbitmq-consumer-group/
-chmod +x $HELIX_PKG_ROOT/bin/*
-chmod +x $HELIX_RABBITMQ_ROOT/bin/*
-```
-
-
-Install RabbitMQ
-----------------
-
-Setting up RabbitMQ on a local box is straightforward. You can find the instructions here:
-http://www.rabbitmq.com/download.html
-
-Start ZK
---------
-Start ZooKeeper at port 2199:
-
-```
-$HELIX_PKG_ROOT/bin/start-standalone-zookeeper 2199
-```
-
-Setup the consumer group cluster
---------------------------------
-This sets up the cluster by creating a "rabbitmq-consumer-group" cluster and adding a "topic" resource with 6 queues.
-
-```
-$HELIX_RABBITMQ_ROOT/bin/setup-cluster.sh localhost:2199 
-```
-
-Add consumers
--------------
-Start 2 consumers in 2 different terminals. Each consumer is given a unique id.
-
-```
-// start-consumer.sh zookeeperAddress (e.g. localhost:2199) consumerId rabbitmqServer (e.g. localhost)
-$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 0 localhost 
-$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 1 localhost 
-
-```
-
-Start HelixController
---------------------
-Now start a Helix controller that starts managing the "rabbitmq-consumer-group" cluster.
-
-```
-$HELIX_RABBITMQ_ROOT/bin/start-cluster-manager.sh localhost:2199
-```
-
-Send messages to the Topic
---------------------------
-
-Start sending messages to the topic. This script randomly selects a routing key (1-6) and sends the message to the topic.
-Based on the key, messages get routed to the appropriate queue.
-
-```
-$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 20
-```
-
-After running this, you should see all 20 messages being processed by 2 consumers. 
-
-Add another consumer
---------------------
-Once a new consumer is started, Helix detects it. In order to balance the load among 3 consumers, it deallocates 1 partition from the existing consumers and allocates it to the new consumer. We see that
-each consumer is now processing only 2 queues.
-Helix makes sure that the old nodes are asked to stop consuming before the new consumer is asked to start consuming for a given partition. The transitions for different partitions can happen in parallel.
-
-```
-$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 2 localhost
-```
-
-Send messages again to the topic.
-
-```
-$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 100
-```
-
-You should see that messages are now received by all 3 consumers.
-
-Stop a consumer
----------------
-In any terminal, press CTRL-C and notice that Helix detects the consumer failure and distributes the 2 partitions that were being processed by the failed consumer to the remaining 2 active consumers.
-
-
-How does it work
-================
-
-Find the entire code [here](https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=tree;f=recipes/rabbitmq-consumer-group/src/main/java/org/apache/helix/recipes/rabbitmq). 
- 
-Cluster setup
--------------
-This step creates a znode on ZooKeeper for the cluster and adds the state model. We use the OnlineOffline state model, since there is no need for other states: the consumer is either processing a queue or it is not.
-
-It creates a resource called "rabbitmq-consumer-group" with 6 partitions. The execution mode is set to FULL_AUTO. This means that Helix controls the assignment of partitions to consumers and automatically distributes the partitions evenly among the active consumers. When a consumer is added or removed, it ensures that a minimum number of partitions are shuffled.
-
-```
-      zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
-          ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
-      ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
-      
-      // add cluster
-      admin.addCluster(clusterName, true);
-
-      // add state model definition
-      StateModelConfigGenerator generator = new StateModelConfigGenerator();
-      admin.addStateModelDef(clusterName, "OnlineOffline",
-          new StateModelDefinition(generator.generateConfigForOnlineOffline()));
-
-      // add resource "topic" which has 6 partitions
-      String resourceName = "rabbitmq-consumer-group";
-      admin.addResource(clusterName, resourceName, 6, "OnlineOffline", "FULL_AUTO");
-```
-
-Starting the consumers
-----------------------
-The only things a consumer needs to know are the ZooKeeper address, the cluster name, and its consumer ID. It does not need to know anything else.
-
-```
-   _manager =
-          HelixManagerFactory.getZKHelixManager(_clusterName,
-                                                _consumerId,
-                                                InstanceType.PARTICIPANT,
-                                                _zkAddr);
-
-      StateMachineEngine stateMach = _manager.getStateMachineEngine();
-      ConsumerStateModelFactory modelFactory =
-          new ConsumerStateModelFactory(_consumerId, _mqServer);
-      stateMach.registerStateModelFactory("OnlineOffline", modelFactory);
-
-      _manager.connect();
-
-```
-
-Once the consumer has registered the state model and the controller is started, the consumer starts getting callbacks (onBecomeOnlineFromOffline) for the partitions it needs to host. All it needs to do as part of the callback is start consuming messages from the appropriate queue. Similarly, when the controller deallocates a partition from a consumer, it fires onBecomeOfflineFromOnline for that partition.
-As a part of this transition, the consumer stops consuming from that queue.
-
-```
- @Transition(to = "ONLINE", from = "OFFLINE")
-  public void onBecomeOnlineFromOffline(Message message, NotificationContext context)
-  {
-    LOG.debug(_consumerId + " becomes ONLINE from OFFLINE for " + _partition);
-
-    if (_thread == null)
-    {
-      LOG.debug("Starting ConsumerThread for " + _partition + "...");
-      _thread = new ConsumerThread(_partition, _mqServer, _consumerId);
-      _thread.start();
-      LOG.debug("Starting ConsumerThread for " + _partition + " done");
-
-    }
-  }
-
-  @Transition(to = "OFFLINE", from = "ONLINE")
-  public void onBecomeOfflineFromOnline(Message message, NotificationContext context)
-      throws InterruptedException
-  {
-    LOG.debug(_consumerId + " becomes OFFLINE from ONLINE for " + _partition);
-
-    if (_thread != null)
-    {
-      LOG.debug("Stopping " + _consumerId + " for " + _partition + "...");
-
-      _thread.interrupt();
-      _thread.join(2000);
-      _thread = null;
-      LOG.debug("Stopping " +  _consumerId + " for " + _partition + " done");
-
-    }
-  }
-```
\ No newline at end of file
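-
-The ConsumerThread itself is not shown in this recipe. As a rough sketch only (the exchange and queue wiring here are illustrative assumptions, not the recipe's actual code), each thread might bind a queue for its partition's routing key and consume from it with the RabbitMQ Java client:
-
-```
-import com.rabbitmq.client.Channel;
-import com.rabbitmq.client.Connection;
-import com.rabbitmq.client.ConnectionFactory;
-import com.rabbitmq.client.QueueingConsumer;
-
-public class ConsumerThreadSketch extends Thread
-{
-  private final String mqServer;
-  private final String bindingKey; // e.g. the partition suffix of "topic_3" is "3"
-
-  public ConsumerThreadSketch(String mqServer, String bindingKey)
-  {
-    this.mqServer = mqServer;
-    this.bindingKey = bindingKey;
-  }
-
-  @Override
-  public void run()
-  {
-    try
-    {
-      ConnectionFactory factory = new ConnectionFactory();
-      factory.setHost(mqServer);
-      Connection connection = factory.newConnection();
-      Channel channel = connection.createChannel();
-      // bind a private queue to the topic exchange for this partition's key only
-      channel.exchangeDeclare("topic", "direct");
-      String queueName = channel.queueDeclare().getQueue();
-      channel.queueBind(queueName, "topic", bindingKey);
-      QueueingConsumer consumer = new QueueingConsumer(channel);
-      channel.basicConsume(queueName, true, consumer);
-      while (!Thread.currentThread().isInterrupted())
-      {
-        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
-        System.out.println("Received: " + new String(delivery.getBody()));
-      }
-    }
-    catch (Exception e)
-    {
-      // an interrupt here is the OFFLINE transition asking us to stop consuming
-    }
-  }
-}
-```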

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/recipes/rsync_replicated_file_store.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/recipes/rsync_replicated_file_store.md b/src/site/markdown/recipes/rsync_replicated_file_store.md
deleted file mode 100644
index f8a74a0..0000000
--- a/src/site/markdown/recipes/rsync_replicated_file_store.md
+++ /dev/null
@@ -1,165 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Near real time rsync replicated file system
-===========================================
-
-Quickdemo
----------
-
-* This demo starts 3 instances with IDs ```localhost_12001, localhost_12002, localhost_12003```
-* Each instance stores its files under ```/tmp/<id>/filestore```
-* ```localhost_12001``` is designated as the master, and ```localhost_12002``` and ```localhost_12003``` are the slaves.
-* Files written to the master are replicated to the slaves automatically. In this demo, a.txt and b.txt are written to ```/tmp/localhost_12001/filestore``` and get replicated to the other folders.
-* When the master is stopped, ```localhost_12002``` is promoted to master.
-* The other slave, ```localhost_12003```, stops replicating from ```localhost_12001``` and starts replicating from the new master, ```localhost_12002```.
-* Files written to the new master ```localhost_12002``` are replicated to ```localhost_12003```.
-* In the end state of this quick demo, ```localhost_12002``` is the master and ```localhost_12003``` is the slave. Manually create files under ```/tmp/localhost_12002/filestore``` and see that they appear in ```/tmp/localhost_12003/filestore```.
-* Ignore the interrupted exceptions on the console :-).
-
-
-```
-git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-cd incubator-helix/recipes/rsync-replicated-file-system/
-mvn clean install package -DskipTests
-cd target/rsync-replicated-file-system-pkg/bin
-chmod +x *
-./quickdemo
-
-```
-
-Overview
---------
-
-There are many applications that require storage for a large number of relatively small data files. Examples include media stores for small videos, images, mail attachments, etc. Each of these objects is typically a few kilobytes, often no larger than a few megabytes. An additional distinguishing feature of these use cases is that files are typically only added or deleted, rarely updated. When there are updates, they are rare and have no concurrency requirements.
-
-These are much simpler requirements than what general-purpose distributed file systems have to satisfy, including concurrent access to files, random access for reads and updates, POSIX compliance, etc. To satisfy those requirements, general-purpose DFSs are complex and expensive to build and maintain.
- 
-A different implementation of a distributed file system is HDFS, which is inspired by Google's GFS. It is one of the most widely used distributed file systems, and forms the main data storage platform for Hadoop. HDFS is primarily aimed at processing very large data sets and distributes files across a cluster of commodity servers by splitting files into fixed-size chunks. HDFS is not particularly well suited for storing a very large number of relatively tiny files.
-
-### File Store
-
-It's possible to build a vastly simpler system for the class of applications with the simpler requirements pointed out above:
-
-* Large number of files but each file is relatively small.
-* Access is limited to create, delete and get entire files.
-* No updates to files that are already created (or it's feasible to delete the old file and create a new one).
- 
-
-We call this system a Partitioned File Store (PFS) to distinguish it from other distributed file systems. This system needs to provide the following features:
-
-* CRD (create, read, delete) access to a large number of small files
-* Scalability: Files should be distributed across a large number of commodity servers based on the storage requirement.
-* Fault-tolerance: Each file should be replicated on multiple servers so that individual server failures do not reduce availability.
-* Elasticity: It should be possible to add capacity to the cluster easily.
- 
-
-Apache Helix is a generic cluster management framework that makes it very easy to provide the scalability, fault-tolerance and elasticity features. 
-Rsync can be easily used as a replication channel between servers so that each file gets replicated on multiple servers.
-
-Design
-------
-
-High level:
-
-* Partition the file system based on the file name.
-* At any time, only a single writer can write; we call this node the master.
-* For redundancy, we need additional replicas, called slaves. Slaves can optionally serve reads.
-* Slaves replicate data from the master.
-* When a master fails, a slave gets promoted to master.
-
-### Transaction log
-
-Every write on the master results in the creation/deletion of one or more files. In order to maintain timeline consistency, slaves need to apply the changes in the same order.
-To facilitate this, the master logs each transaction in a file, and each transaction is associated with a 64-bit ID in which the 32 LSBs represent a sequence number and the 32 MSBs represent the generation number.
-The sequence number is incremented on every transaction, and the generation number is incremented when a new master is elected.
-
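-A small sketch of this ID encoding (a hypothetical helper, not part of the recipe's source):
-
-```
-public final class TxnId
-{
-  // generation in the high 32 bits, sequence in the low 32 bits
-  public static long make(int generation, int sequence)
-  {
-    return ((long) generation << 32) | (sequence & 0xFFFFFFFFL);
-  }
-
-  public static int generation(long txnId)
-  {
-    return (int) (txnId >>> 32);
-  }
-
-  public static int sequence(long txnId)
-  {
-    return (int) txnId;
-  }
-}
-```
-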
-### Replication
-
-Replication is required for slaves to keep up with the changes on the master. Every time a slave applies a change, it checkpoints the last applied transaction ID.
-During restarts, this allows the slave to pull changes starting from the last checkpointed ID. Like the master, the slave logs each transaction to its transaction log, but instead of generating a new transaction ID, it uses the ID generated by the master.
-
-
-### Fail over
-
-When a master fails, a new slave will be promoted to master. If the previous master node is reachable, then the new master will flush all the
-changes from the previous master before taking up mastership. The new master records the end transaction ID of the current generation and then starts a new generation
-with the sequence starting from 1. After this, the master begins accepting writes.
-
-
-![Partitioned File Store](../images/PFS-Generic.png)
-
-
-
-Rsync based solution
--------------------
-
-![Rsync based File Store](../images/RSYNC_BASED_PFS.png)
-
-
-This application demonstrates a file store that uses rsync as the replication mechanism. One can envision a similar system that, instead of using rsync,
-implements a custom solution to notify slaves of changes and provides an API to pull the changed files.
-
-#### Concept
-* file_store_dir: Root directory for the actual data files 
-* change_log_dir: The transaction logs are generated under this folder.
-* check_point_dir: The slave stores its checkpoints (last processed transaction) here.
-
-#### Master
-* File server: This component supports file uploads and downloads and writes the files to ```file_store_dir```. It is not included in this application; the idea is that most applications have different ways of implementing this component, with some business logic associated with it. It is not hard to build such a component if needed.
-* File store watcher: This component watches the ```file_store_dir``` directory on the local file system for any changes and notifies the registered listeners of the changes.
-* Change log generator: This registers as a listener of the file store watcher and, on each notification, logs the changes into a file under ```change_log_dir```.
-
-#### Slave
-* File server: This component on the slave only supports reads.
-* Cluster state observer: The slave observes the cluster state and is able to know who the current master is.
-* Replicator: This has three subcomponents
-    - Periodic rsync of the change log: a background process that periodically rsyncs the master's ```change_log_dir``` to its local directory
-    - Change log watcher: watches the ```change_log_dir``` for changes and notifies the registered listeners of each change
-    - On-demand rsync invoker: registered as a listener of the change log watcher; on every change, invokes rsync to sync only the changed files
-
-
-#### Coordination
-
-The coordination between nodes is done by Helix. Helix does the partition management and assigns partitions to multiple nodes based on the replication factor. It elects one of the nodes as master and designates the others as slaves.
-It provides notifications to each node in the form of state transitions (Offline to Slave, Slave to Master). It also provides notifications when there is a change in cluster state.
-This allows a slave to stop replicating from the current master and start replicating from the new master.
-
-In this application, we have only one partition, but it's very easy to extend it to support multiple partitions. By partitioning the file store, one can add new nodes and Helix will automatically
-re-distribute partitions among the nodes. To summarize, Helix provides partition management, fault tolerance, and facilitates automated cluster expansion.
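-
-As a rough sketch (illustrative only, not the recipe's actual code), a node's state model might react to these transitions like this:
-
-```
-import org.apache.helix.NotificationContext;
-import org.apache.helix.model.Message;
-import org.apache.helix.participant.statemachine.StateModel;
-import org.apache.helix.participant.statemachine.Transition;
-
-public class FileStoreStateModelSketch extends StateModel
-{
-  @Transition(to = "SLAVE", from = "OFFLINE")
-  public void onBecomeSlaveFromOffline(Message message, NotificationContext context)
-  {
-    // find the current master from the cluster state and start the
-    // periodic rsync of its change_log_dir
-  }
-
-  @Transition(to = "MASTER", from = "SLAVE")
-  public void onBecomeMasterFromSlave(Message message, NotificationContext context)
-  {
-    // catch up on any remaining changes from the old master, record the end
-    // transaction id, start a new generation, and begin accepting writes
-  }
-}
-```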
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/recipes/service_discovery.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/recipes/service_discovery.md b/src/site/markdown/recipes/service_discovery.md
deleted file mode 100644
index 8e06ead..0000000
--- a/src/site/markdown/recipes/service_discovery.md
+++ /dev/null
@@ -1,191 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-Service Discovery
------------------
-
-One of the common usages of ZooKeeper is to enable service discovery.
-The basic idea is that when a server starts up, it advertises its configuration/metadata, such as its host name and port, on ZooKeeper.
-This allows clients to dynamically discover the servers that are currently active. One can think of this as a service registry, with which a server registers when it starts and
-from which it is automatically deregistered when it shuts down or crashes. In many cases it serves as an alternative to VIPs.
-
-The core idea behind this is to use ZooKeeper ephemeral nodes. An ephemeral node is created when the server registers, and all its metadata is put into the znode.
-When the server shuts down, ZooKeeper automatically removes this znode.
-
-There are two ways clients can dynamically discover the active servers:
-
-#### ZOOKEEPER WATCH
-
-Clients can set a child watch under a specific path on ZooKeeper.
-When a new service is registered or deregistered, ZooKeeper notifies the client via a watch event, and the client can read the list of services. Even though this looks trivial,
-there are a lot of things one needs to keep in mind, like ensuring that the watch is set back on ZooKeeper before reading data from ZooKeeper.
-
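-For example, a minimal sketch of this watch-then-read pattern with the raw ZooKeeper client (the /services path is an illustrative assumption):
-
-```
-import java.util.List;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.Watcher;
-import org.apache.zookeeper.ZooKeeper;
-
-public class ServiceWatcher implements Watcher
-{
-  private final ZooKeeper zk;
-
-  public ServiceWatcher(ZooKeeper zk)
-  {
-    this.zk = zk;
-  }
-
-  public List<String> readServices() throws Exception
-  {
-    // Passing 'this' re-arms the watch in the same call that reads the data
-    return zk.getChildren("/services", this);
-  }
-
-  @Override
-  public void process(WatchedEvent event)
-  {
-    try
-    {
-      System.out.println("Active services: " + readServices());
-    }
-    catch (Exception e)
-    {
-      e.printStackTrace();
-    }
-  }
-}
-```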
-
-#### POLL
-
-Another approach is for the client to periodically read the ZooKeeper path and get the list of services.
-
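-A minimal polling sketch (the 30-second interval matches the demo below; the path is illustrative):
-
-```
-import java.util.concurrent.Executors;
-import java.util.concurrent.ScheduledExecutorService;
-import java.util.concurrent.TimeUnit;
-import org.apache.zookeeper.ZooKeeper;
-
-public class ServicePoller
-{
-  public static void start(final ZooKeeper zk, final String path)
-  {
-    ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
-    exec.scheduleAtFixedRate(new Runnable()
-    {
-      @Override
-      public void run()
-      {
-        try
-        {
-          // plain read, no watch
-          System.out.println("Active services: " + zk.getChildren(path, false));
-        }
-        catch (Exception e)
-        {
-          e.printStackTrace();
-        }
-      }
-    }, 0, 30, TimeUnit.SECONDS);
-  }
-}
-```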
-
-Both approaches have pros and cons. For example, setting a watch might trigger a herd effect if there are a large number of clients; this is worst when many servers are starting up at once.
-The good thing about setting a watch is that clients are immediately notified of a change, which is not true in the case of polling.
-In some cases, having both WATCH and POLL makes sense: WATCH allows one to get notifications as soon as possible, while POLL provides a safety net in case a watch event is missed because of a code bug or because ZooKeeper fails to notify.
-
-##### Other important scenarios to take care of
-* What happens when the ZooKeeper session expires? All the watches and ephemeral nodes previously added or created by this server are lost.
-One needs to add the watches again, recreate the ephemeral nodes, and so on.
-* Due to network issues or Java GC pauses, session expiry might happen again and again; this is also known as flapping. It's important for the server to detect this and deregister itself.
-
-##### Other operational things to consider
-* What if a node is behaving badly? One might kill the server, but that loses the ability to debug it.
-It would be nice to have the ability to mark a server as disabled, so that clients know the node is disabled and will not contact it.
- 
-#### Configuration ownership
-
-This is an important aspect that is often ignored in the initial stages of development. Commonly, the service discovery pattern means that servers start up with some configuration and then simply put their configuration/metadata in ZooKeeper. While this works well in the beginning,
-configuration management becomes very difficult, since the servers themselves are statically configured. Any change in server configuration implies restarting the server. Ideally, it would be nice to have the ability to change configuration dynamically without having to restart a server.
-
-Ideally you want a hybrid solution: a node starts with minimal configuration and gets the rest of its configuration from ZooKeeper.
-
-### How to use Helix to achieve this
-
-Even though Helix has higher-level abstractions in terms of state machines, constraints, and objectives,
-service discovery is one of the things that has existed since we started.
-The controller uses the exact mechanism described above to discover when new servers join the cluster.
-We create these znodes under /CLUSTERNAME/LIVEINSTANCES.
-Since at any time there is only one controller, we use a ZK watch to track the liveness of a server.
-
-This recipe simply demonstrates how one can re-use that part to implement service discovery. It demonstrates multiple MODEs of service discovery:
-
-* POLL: The client reads from ZooKeeper at regular intervals (30 seconds). Use this if you have hundreds of clients.
-* WATCH: The client sets up a watcher and gets notified of changes. Use this if you have tens of clients.
-* NONE: This does neither of the above; it reads directly from ZooKeeper whenever needed.
-
-Helix provides these additional features compared to other implementations available elsewhere:
-
-* It has the concept of disabling a node, which means that a badly behaving node can be disabled using the Helix admin API (see the sketch below).
-* It automatically detects if a node connects/disconnects from ZooKeeper repeatedly and disables the node.
-* Configuration management
-    * Allows one to set configuration via the admin API at various granularities, like cluster, instance, resource, and partition
-    * Configuration can be dynamically changed
-    * Notifies the server when configuration changes
-
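-For example, a minimal sketch of disabling a badly behaving node via the admin API (the cluster and instance names are illustrative):
-
-```
-import org.apache.helix.manager.zk.ZKHelixAdmin;
-
-ZKHelixAdmin admin = new ZKHelixAdmin("localhost:2199");
-// clients observing the cluster will see this node drop out of service
-admin.enableInstance("service-discovery-demo", "host.x.y.z_12000", false);
-```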
-
-##### checkout and build
-
-```
-git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-cd incubator-helix
-mvn clean install package -DskipTests
-cd recipes/service-discovery/target/service-discovery-pkg/bin
-chmod +x *
-```
-
-##### start zookeeper
-
-```
-./start-standalone-zookeeper 2199
-```
-
-#### Run the demo
-
-```
-./service-discovery-demo.sh
-```
-
-#### Output
-
-```
-START:Service discovery demo mode:WATCH
-	Registering service
-		host.x.y.z_12000
-		host.x.y.z_12001
-		host.x.y.z_12002
-		host.x.y.z_12003
-		host.x.y.z_12004
-	SERVICES AVAILABLE
-		SERVICENAME 	HOST 			PORT
-		myServiceName 	host.x.y.z 		12000
-		myServiceName 	host.x.y.z 		12001
-		myServiceName 	host.x.y.z 		12002
-		myServiceName 	host.x.y.z 		12003
-		myServiceName 	host.x.y.z 		12004
-	Deregistering service:
-		host.x.y.z_12002
-	SERVICES AVAILABLE
-		SERVICENAME 	HOST 			PORT
-		myServiceName 	host.x.y.z 		12000
-		myServiceName 	host.x.y.z 		12001
-		myServiceName 	host.x.y.z 		12003
-		myServiceName 	host.x.y.z 		12004
-	Registering service:host.x.y.z_12002
-END:Service discovery demo mode:WATCH
-=============================================
-START:Service discovery demo mode:POLL
-	Registering service
-		host.x.y.z_12000
-		host.x.y.z_12001
-		host.x.y.z_12002
-		host.x.y.z_12003
-		host.x.y.z_12004
-	SERVICES AVAILABLE
-		SERVICENAME 	HOST 			PORT
-		myServiceName 	host.x.y.z 		12000
-		myServiceName 	host.x.y.z 		12001
-		myServiceName 	host.x.y.z 		12002
-		myServiceName 	host.x.y.z 		12003
-		myServiceName 	host.x.y.z 		12004
-	Deregistering service:
-		host.x.y.z_12002
-	Sleeping for poll interval:30000
-	SERVICES AVAILABLE
-		SERVICENAME 	HOST 			PORT
-		myServiceName 	host.x.y.z 		12000
-		myServiceName 	host.x.y.z 		12001
-		myServiceName 	host.x.y.z 		12003
-		myServiceName 	host.x.y.z 		12004
-	Registering service:host.x.y.z_12002
-END:Service discovery demo mode:POLL
-=============================================
-START:Service discovery demo mode:NONE
-	Registering service
-		host.x.y.z_12000
-		host.x.y.z_12001
-		host.x.y.z_12002
-		host.x.y.z_12003
-		host.x.y.z_12004
-	SERVICES AVAILABLE
-		SERVICENAME 	HOST 			PORT
-		myServiceName 	host.x.y.z 		12000
-		myServiceName 	host.x.y.z 		12001
-		myServiceName 	host.x.y.z 		12002
-		myServiceName 	host.x.y.z 		12003
-		myServiceName 	host.x.y.z 		12004
-	Deregistering service:
-		host.x.y.z_12000
-	SERVICES AVAILABLE
-		SERVICENAME 	HOST 			PORT
-		myServiceName 	host.x.y.z 		12001
-		myServiceName 	host.x.y.z 		12002
-		myServiceName 	host.x.y.z 		12003
-		myServiceName 	host.x.y.z 		12004
-	Registering service:host.x.y.z_12000
-END:Service discovery demo mode:NONE
-=============================================
-
-```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/recipes/task_dag_execution.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/recipes/task_dag_execution.md b/src/site/markdown/recipes/task_dag_execution.md
deleted file mode 100644
index f0474e4..0000000
--- a/src/site/markdown/recipes/task_dag_execution.md
+++ /dev/null
@@ -1,204 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Distributed task execution
-
-
-This recipe is intended to demonstrate how task dependencies can be modeled using primitives provided by Helix. A given task can be run with the desired parallelism and will start only when its upstream dependencies are met. The demo executes the task DAG described below using 10 workers. Although the demo starts the workers as threads, there is no requirement that all the workers run in the same process. In reality, these workers run on many different boxes in a cluster. When a worker fails, Helix takes care of
-re-assigning the failed task partitions to a new worker.
-
-Redis is used as a result store. Any other suitable implementation for TaskResultStore can be plugged in.
-
-### Workflow 
-
-
-#### Input 
-
-10000 impression events and around 100 click events are pre-populated in the task result store (Redis).
-
-* **ImpEvent**: format: id,isFraudulent,country,gender
-
-* **ClickEvent**: format: id,isFraudulent,impEventId
-
-#### Stages
-
-+ **FilterImps**: Filters impressions where isFraudulent=true.
-
-+ **FilterClicks**: Filters clicks where isFraudulent=true.
-
-+ **impCountsByGender**: Generates impression counts grouped by gender. It does this by incrementing the count for 'impression_gender_counts:<gender_value>' in the task result store (redis hash). Depends on: **FilterImps**
-
-+ **impCountsByCountry**: Generates impression counts grouped by country. It does this by incrementing the count for 'impression_country_counts:<country_value>' in the task result store (redis hash). Depends on: **FilterImps**
-
-+ **impClickJoin**: Joins clicks with the corresponding impression events using impEventId as the join key. The join is needed to pull dimensions not present in the click event. Depends on: **FilterImps, FilterClicks**
-
-+ **clickCountsByGender**: Generates click counts grouped by gender. It does this by incrementing the count for click_gender_counts:<gender_value> in the task result store (redis hash). Depends on: **impClickJoin**
-
-+ **clickCountsByCountry**: Generates click counts grouped by country. It does this by incrementing the count for click_country_counts:<country_value> in the task result store (redis hash). Depends on: **impClickJoin**
-
-+ **report**: Reads all the aggregates generated by previous stages and prints them. Depends on: **impCountsByGender, impCountsByCountry, clickCountsByGender, clickCountsByCountry**
-
-
-### Creating DAG
-
-Each stage is represented as a Node, along with its upstream dependencies and desired parallelism. Each stage is modeled as a resource in Helix using the OnlineOffline state model. As part of the Offline-to-Online transition, we watch the external view of the upstream resources and wait for them to transition to the online state (see the sketch after the DAG definition below). See Task.java for additional info.
-
-```
-
-  Dag dag = new Dag();
-  dag.addNode(new Node("filterImps", 10, ""));
-  dag.addNode(new Node("filterClicks", 5, ""));
-  dag.addNode(new Node("impClickJoin", 10, "filterImps,filterClicks"));
-  dag.addNode(new Node("impCountsByGender", 10, "filterImps"));
-  dag.addNode(new Node("impCountsByCountry", 10, "filterImps"));
-  dag.addNode(new Node("clickCountsByGender", 5, "impClickJoin"));
-  dag.addNode(new Node("clickCountsByCountry", 5, "impClickJoin"));		
-  dag.addNode(new Node("report",1,"impCountsByGender,impCountsByCountry,clickCountsByGender,clickCountsByCountry"));
-
-
-```
-
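-The waiting-on-upstream logic lives in Task.java; the following fragment is only a rough sketch of the idea (isStageOnline and executeTask are illustrative names, not the recipe's actual methods):
-
-```
-@Transition(to = "ONLINE", from = "OFFLINE")
-public void onBecomeOnlineFromOffline(Message message, NotificationContext context)
-    throws InterruptedException
-{
-  // block until every upstream stage reports ONLINE in the external view
-  for (String parent : parentStages)
-  {
-    while (!isStageOnline(parent, context))
-    {
-      Thread.sleep(1000);
-    }
-  }
-  executeTask(); // now safe to run this stage's work for the partition
-}
-```
-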
-### DEMO
-
-In order to run the demo, use the following steps.
-
-See http://redis.io/topics/quickstart for how to install the Redis server.
-
-```
-
-Start Redis, e.g.:
-./redis-server --port 6379
-
-git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-cd incubator-helix/recipes/task-execution
-mvn clean install package -DskipTests
-cd target/task-execution-pkg/bin
-chmod +x task-execution-demo.sh
-./task-execution-demo.sh 2181 localhost 6379 
-
-```
-
-```
-
-
-
-
-
-                       +-----------------+       +----------------+
-                       |   filterImps    |       |  filterClicks  |
-                       | (parallelism=10)|       | (parallelism=5)|
-                       +----------+-----++       +-------+--------+
-                       |          |     |                |
-                       |          |     |                |
-                       |          |     |                |
-                       |          |     +------->--------v------------+
-      +--------------<-+   +------v-------+    |  impClickJoin        |
-      |impCountsByGender   |impCountsByCountry | (parallelism=10)     |
-      |(parallelism=10)    |(parallelism=10)   ++-------------------+-+
-      +-----------+--+     +---+----------+     |                   |
-                  |            |                |                   |
-                  |            |                |                   |
-                  |            |       +--------v---------+       +-v-------------------+
-                  |            |       |clickCountsByGender       |clickCountsByCountry |
-                  |            |       |(parallelism=5)   |       |(parallelism=5)      |
-                  |            |       +----+-------------+       +---------------------+
-                  |            |            |                     |
-                  |            |            |                     |
-                  |            |            |                     |
-                  +----->+-----+>-----------v----+<---------------+
-                         | report                |
-                         |(parallelism=1)        |
-                         +-----------------------+
-
-```
-
-(credit for above ascii art: http://www.asciiflow.com)
-
-### OUTPUT
-
-```
-Done populating dummy data
-Executing filter task for filterImps_3 for impressions_demo
-Executing filter task for filterImps_2 for impressions_demo
-Executing filter task for filterImps_0 for impressions_demo
-Executing filter task for filterImps_1 for impressions_demo
-Executing filter task for filterImps_4 for impressions_demo
-Executing filter task for filterClicks_3 for clicks_demo
-Executing filter task for filterClicks_1 for clicks_demo
-Executing filter task for filterImps_8 for impressions_demo
-Executing filter task for filterImps_6 for impressions_demo
-Executing filter task for filterClicks_2 for clicks_demo
-Executing filter task for filterClicks_0 for clicks_demo
-Executing filter task for filterImps_7 for impressions_demo
-Executing filter task for filterImps_5 for impressions_demo
-Executing filter task for filterClicks_4 for clicks_demo
-Executing filter task for filterImps_9 for impressions_demo
-Running AggTask for impCountsByGender_3 for filtered_impressions_demo gender
-Running AggTask for impCountsByGender_2 for filtered_impressions_demo gender
-Running AggTask for impCountsByGender_0 for filtered_impressions_demo gender
-Running AggTask for impCountsByGender_9 for filtered_impressions_demo gender
-Running AggTask for impCountsByGender_1 for filtered_impressions_demo gender
-Running AggTask for impCountsByGender_4 for filtered_impressions_demo gender
-Running AggTask for impCountsByCountry_4 for filtered_impressions_demo country
-Running AggTask for impCountsByGender_5 for filtered_impressions_demo gender
-Executing JoinTask for impClickJoin_2
-Running AggTask for impCountsByCountry_3 for filtered_impressions_demo country
-Running AggTask for impCountsByCountry_1 for filtered_impressions_demo country
-Running AggTask for impCountsByCountry_0 for filtered_impressions_demo country
-Running AggTask for impCountsByCountry_2 for filtered_impressions_demo country
-Running AggTask for impCountsByGender_6 for filtered_impressions_demo gender
-Executing JoinTask for impClickJoin_1
-Executing JoinTask for impClickJoin_0
-Executing JoinTask for impClickJoin_3
-Running AggTask for impCountsByGender_8 for filtered_impressions_demo gender
-Executing JoinTask for impClickJoin_4
-Running AggTask for impCountsByGender_7 for filtered_impressions_demo gender
-Running AggTask for impCountsByCountry_5 for filtered_impressions_demo country
-Running AggTask for impCountsByCountry_6 for filtered_impressions_demo country
-Executing JoinTask for impClickJoin_9
-Running AggTask for impCountsByCountry_8 for filtered_impressions_demo country
-Running AggTask for impCountsByCountry_7 for filtered_impressions_demo country
-Executing JoinTask for impClickJoin_5
-Executing JoinTask for impClickJoin_6
-Running AggTask for impCountsByCountry_9 for filtered_impressions_demo country
-Executing JoinTask for impClickJoin_8
-Executing JoinTask for impClickJoin_7
-Running AggTask for clickCountsByCountry_1 for joined_clicks_demo country
-Running AggTask for clickCountsByCountry_0 for joined_clicks_demo country
-Running AggTask for clickCountsByCountry_2 for joined_clicks_demo country
-Running AggTask for clickCountsByCountry_3 for joined_clicks_demo country
-Running AggTask for clickCountsByGender_1 for joined_clicks_demo gender
-Running AggTask for clickCountsByCountry_4 for joined_clicks_demo country
-Running AggTask for clickCountsByGender_3 for joined_clicks_demo gender
-Running AggTask for clickCountsByGender_2 for joined_clicks_demo gender
-Running AggTask for clickCountsByGender_4 for joined_clicks_demo gender
-Running AggTask for clickCountsByGender_0 for joined_clicks_demo gender
-Running reports task
-Impression counts per country
-{CANADA=1940, US=1958, CHINA=2014, UNKNOWN=2022, UK=1946}
-Click counts per country
-{US=24, CANADA=14, CHINA=26, UNKNOWN=14, UK=22}
-Impression counts per gender
-{F=3325, UNKNOWN=3259, M=3296}
-Click counts per gender
-{F=33, UNKNOWN=32, M=35}
-
-
-```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/recipes/user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/recipes/user_def_rebalancer.md b/src/site/markdown/recipes/user_def_rebalancer.md
deleted file mode 100644
index 8beac0a..0000000
--- a/src/site/markdown/recipes/user_def_rebalancer.md
+++ /dev/null
@@ -1,287 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-Lock Manager with a User-Defined Rebalancer
--------------------------------------------
-Helix is able to compute node preferences and state assignments automatically using general-purpose algorithms. In many cases, a distributed system implementer may choose to instead define a customized approach to computing the location of replicas, the state mapping, or both in response to the addition or removal of participants. The following is an implementation of the [Distributed Lock Manager](./lock_manager.html) that includes a user-defined rebalancer.
-
-### Define the cluster and locks
-
-The YAML file below fully defines the cluster and the locks. A lock can be in one of two states: locked and unlocked. Transitions can happen in either direction, and the locked state is preferred. A resource in this example is the entire collection of locks to distribute. A partition is mapped to a lock; in this case, that means there are 12 locks. These 12 locks will be distributed across 3 nodes. The constraints indicate that only one replica of a lock can be in the locked state at any given time. These locks can each only have a single holder, defined by a replica count of 1.
-
-Notice the rebalancer section of the definition. The mode is set to USER_DEFINED and the class name refers to the plugged-in rebalancer implementation. This implementation is called whenever the state of the cluster changes, as is the case when participants are added or removed from the system.
-
-Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/resources/lock-manager-config.yaml
-
-```
-clusterName: lock-manager-custom-rebalancer # unique name for the cluster
-resources:
-  - name: lock-group # unique resource name
-    rebalancer: # we will provide our own rebalancer
-      mode: USER_DEFINED
-      class: org.apache.helix.userrebalancedlocks.LockManagerRebalancer
-    partitions:
-      count: 12 # number of locks
-      replicas: 1 # number of simultaneous holders for each lock
-    stateModel:
-      name: lock-unlock # unique model name
-      states: [LOCKED, RELEASED, DROPPED] # the list of possible states
-      transitions: # the list of possible transitions
-        - name: Unlock
-          from: LOCKED
-          to: RELEASED
-        - name: Lock
-          from: RELEASED
-          to: LOCKED
-        - name: DropLock
-          from: LOCKED
-          to: DROPPED
-        - name: DropUnlock
-          from: RELEASED
-          to: DROPPED
-        - name: Undrop
-          from: DROPPED
-          to: RELEASED
-      initialState: RELEASED
-    constraints:
-      state:
-        counts: # maximum number of replicas of a partition that can be in each state
-          - name: LOCKED
-            count: "1"
-          - name: RELEASED
-            count: "-1"
-          - name: DROPPED
-            count: "-1"
-        priorityList: [LOCKED, RELEASED, DROPPED] # states in order of priority
-      transition: # transitions priority to enforce order that transitions occur
-        priorityList: [Unlock, Lock, Undrop, DropUnlock, DropLock]
-participants: # list of nodes that can acquire locks
-  - name: localhost_12001
-    host: localhost
-    port: 12001
-  - name: localhost_12002
-    host: localhost
-    port: 12002
-  - name: localhost_12003
-    host: localhost
-    port: 12003
-```
-
-Then, Helix\'s YAMLClusterSetup tool can read in the configuration and bootstrap the cluster immediately:
-
-```
-YAMLClusterSetup setup = new YAMLClusterSetup(zkAddress);
-InputStream input =
-    Thread.currentThread().getContextClassLoader()
-        .getResourceAsStream("lock-manager-config.yaml");
-YAMLClusterSetup.YAMLClusterConfig config = setup.setupCluster(input);
-```
-
-### Write a rebalancer
-Below is a full implementation of a rebalancer. It simply discards the previous ideal state, computes a target node for as many replicas of each partition as can hold the lock in the LOCKED state (in this example, one), and assigns them the LOCKED state (which is at the head of the state preference list). A more robust implementation would examine the current state to preserve existing assignments, and the full state list to handle models more complicated than this one; a sketch of that variation follows the full listing below. For a simple lock holder, however, this is sufficient.
-
-Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/LockManagerRebalancer.java
-
-```
-public class LockManagerRebalancer implements Rebalancer {
-  // logger used below; the listing needs this declaration to compile
-  private static final Logger LOG = Logger.getLogger(LockManagerRebalancer.class);
-
-  @Override
-  public void init(HelixManager manager) {
-    // do nothing; this rebalancer is independent of the manager
-  }
-
-  @Override
-  public ResourceAssignment computeResourceMapping(Resource resource, IdealState currentIdealState,
-      CurrentStateOutput currentStateOutput, ClusterDataCache clusterData) {
-    // Initialize an empty mapping of locks to participants
-    ResourceAssignment assignment = new ResourceAssignment(resource.getResourceName());
-
-    // Get the list of live participants in the cluster
-    List<String> liveParticipants = new ArrayList<String>(clusterData.getLiveInstances().keySet());
-
-    // Get the state model (should be a simple lock/unlock model) and the highest-priority state
-    String stateModelName = currentIdealState.getStateModelDefRef();
-    StateModelDefinition stateModelDef = clusterData.getStateModelDef(stateModelName);
-    if (stateModelDef.getStatesPriorityList().size() < 1) {
-      LOG.error("Invalid state model definition. There should be at least one state.");
-      return assignment;
-    }
-    String lockState = stateModelDef.getStatesPriorityList().get(0);
-
-    // Count the number of participants allowed to lock each lock
-    String stateCount = stateModelDef.getNumInstancesPerState(lockState);
-    int lockHolders = 0;
-    try {
-      // a numeric value is a custom-specified number of participants allowed to lock the lock
-      lockHolders = Integer.parseInt(stateCount);
-    } catch (NumberFormatException e) {
-      LOG.error("Invalid state model definition. The lock state does not have a valid count");
-      return assignment;
-    }
-
-    // Fairly assign the lock state to the participants using a simple mod-based sequential
-    // assignment. For instance, if each lock can be held by 3 participants, lock 0 would be held
-    // by participants (0, 1, 2), lock 1 would be held by (1, 2, 3), and so on, wrapping around the
-    // number of participants as necessary.
-    // This assumes a simple lock-unlock model where the only state of interest is which nodes have
-    // acquired each lock.
-    int i = 0;
-    for (Partition partition : resource.getPartitions()) {
-      Map<String, String> replicaMap = new HashMap<String, String>();
-      for (int j = i; j < i + lockHolders; j++) {
-        int participantIndex = j % liveParticipants.size();
-        String participant = liveParticipants.get(participantIndex);
-        // enforce that a participant can only have one instance of a given lock
-        if (!replicaMap.containsKey(participant)) {
-          replicaMap.put(participant, lockState);
-        }
-      }
-      assignment.addReplicaMap(partition, replicaMap);
-      i++;
-    }
-    return assignment;
-  }
-}
-```
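-
-For reference, here is a minimal sketch (not part of the recipe) of the variation described above: it first tries to keep each lock with its current holder and only falls back to the mod-based scheme for unfilled slots. It assumes the same fields as the listing above, plus the getCurrentStateMap accessor on CurrentStateOutput.
-
-```
-int i = 0;
-for (Partition partition : resource.getPartitions()) {
-  Map<String, String> replicaMap = new HashMap<String, String>();
-  // Keep the lock with its current holder if that holder is still live
-  Map<String, String> currentStateMap =
-      currentStateOutput.getCurrentStateMap(resource.getResourceName(), partition);
-  for (Map.Entry<String, String> entry : currentStateMap.entrySet()) {
-    if (replicaMap.size() < lockHolders && lockState.equals(entry.getValue())
-        && liveParticipants.contains(entry.getKey())) {
-      replicaMap.put(entry.getKey(), lockState);
-    }
-  }
-  // Fall back to mod-based sequential assignment for any unfilled slots
-  for (int j = i; replicaMap.size() < lockHolders
-      && j < i + liveParticipants.size(); j++) {
-    String participant = liveParticipants.get(j % liveParticipants.size());
-    if (!replicaMap.containsKey(participant)) {
-      replicaMap.put(participant, lockState);
-    }
-  }
-  assignment.addReplicaMap(partition, replicaMap);
-  i++;
-}
-```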
-
-### Start up the participants
-Here is a lock class based on the newly defined lock-unlock state model so that the participant can receive callbacks on state transitions.
-
-Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/Lock.java
-
-```
-public class Lock extends StateModel {
-  private String lockName;
-
-  public Lock(String lockName) {
-    this.lockName = lockName;
-  }
-
-  @Transition(from = "RELEASED", to = "LOCKED")
-  public void lock(Message m, NotificationContext context) {
-    System.out.println(context.getManager().getInstanceName() + " acquired lock:" + lockName);
-  }
-
-  @Transition(from = "LOCKED", to = "RELEASED")
-  public void release(Message m, NotificationContext context) {
-    System.out.println(context.getManager().getInstanceName() + " releasing lock:" + lockName);
-  }
-}
-```
-
-Here is the factory to make the Lock class accessible.
-
-Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/LockFactory.java
-
-```
-public class LockFactory extends StateModelFactory<Lock> {
-  @Override
-  public Lock createNewStateModel(String lockName) {
-    return new Lock(lockName);
-  }
-}
-```
-
-Finally, here is the factory registration and the start of the participant:
-
-```
-participantManager =
-    HelixManagerFactory.getZKHelixManager(clusterName, participantName, InstanceType.PARTICIPANT,
-        zkAddress);
-participantManager.getStateMachineEngine().registerStateModelFactory(stateModelName,
-    new LockFactory());
-participantManager.connect();
-```
-
-### Start up the controller
-
-```
-controllerManager =
-    HelixControllerMain.startHelixController(zkAddress, config.clusterName, "controller",
-        HelixControllerMain.STANDALONE);
-```
-
-### Try it out
-#### Building 
-```
-git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-cd incubator-helix
-mvn clean install package -DskipTests
-cd recipes/user-rebalanced-lock-manager/target/user-rebalanced-lock-manager-pkg/bin
-chmod +x *
-./lock-manager-demo.sh
-```
-
-#### Output
-
-```
-./lock-manager-demo 
-STARTING localhost_12002
-STARTING localhost_12001
-STARTING localhost_12003
-STARTED localhost_12001
-STARTED localhost_12003
-STARTED localhost_12002
-localhost_12003 acquired lock:lock-group_4
-localhost_12002 acquired lock:lock-group_8
-localhost_12001 acquired lock:lock-group_10
-localhost_12001 acquired lock:lock-group_3
-localhost_12001 acquired lock:lock-group_6
-localhost_12003 acquired lock:lock-group_0
-localhost_12002 acquired lock:lock-group_5
-localhost_12001 acquired lock:lock-group_9
-localhost_12002 acquired lock:lock-group_2
-localhost_12003 acquired lock:lock-group_7
-localhost_12003 acquired lock:lock-group_11
-localhost_12002 acquired lock:lock-group_1
-lockName  acquired By
-======================================
-lock-group_0  localhost_12003
-lock-group_1  localhost_12002
-lock-group_10 localhost_12001
-lock-group_11 localhost_12003
-lock-group_2  localhost_12002
-lock-group_3  localhost_12001
-lock-group_4  localhost_12003
-lock-group_5  localhost_12002
-lock-group_6  localhost_12001
-lock-group_7  localhost_12003
-lock-group_8  localhost_12002
-lock-group_9  localhost_12001
-Stopping the first participant
-localhost_12001 Interrupted
-localhost_12002 acquired lock:lock-group_3
-localhost_12003 acquired lock:lock-group_6
-localhost_12003 acquired lock:lock-group_10
-localhost_12002 acquired lock:lock-group_9
-lockName  acquired By
-======================================
-lock-group_0  localhost_12003
-lock-group_1  localhost_12002
-lock-group_10 localhost_12003
-lock-group_11 localhost_12003
-lock-group_2  localhost_12002
-lock-group_3  localhost_12002
-lock-group_4  localhost_12003
-lock-group_5  localhost_12002
-lock-group_6  localhost_12003
-lock-group_7  localhost_12003
-lock-group_8  localhost_12002
-lock-group_9  localhost_12002
-```
-
-Notice that the lock assignment directly follows the assignment generated by the user-defined rebalancer both initially and after a participant is removed from the system.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_admin.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_admin.md b/src/site/markdown/tutorial_admin.md
deleted file mode 100644
index 11ab9df..0000000
--- a/src/site/markdown/tutorial_admin.md
+++ /dev/null
@@ -1,407 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Admin Operations</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Admin Operations
-
-Helix provides a set of admin APIs for cluster management operations. They are supported via:
-
-* _Java API_
-* _Command-line interface_
-* _REST interface via helix-admin-webapp_
-
-### Java API
-See interface [_org.apache.helix.HelixAdmin_](./apidocs/reference/org/apache/helix/HelixAdmin.html)
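-
-For example, here is a short sketch of a few common operations through this interface (the ZooKeeper address, cluster, node, and resource names below are placeholders):
-
-```
-HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
-admin.addCluster("MyCluster");
-
-// register a node with the cluster
-InstanceConfig instanceConfig = new InstanceConfig("localhost_12913");
-instanceConfig.setHostName("localhost");
-instanceConfig.setPort("12913");
-admin.addInstance("MyCluster", instanceConfig);
-
-// add a resource with 8 partitions using the MasterSlave state model,
-// then ask Helix to compute an assignment with 3 replicas per partition
-admin.addResource("MyCluster", "MyDB", 8, "MasterSlave");
-admin.rebalance("MyCluster", "MyDB", 3);
-```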
-
-### Command-line interface
-The command-line tool comes with the helix-core package:
-
-Get the command-line tool:
-
-``` 
-  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-  - cd incubator-helix
-  - ./build
-  - cd helix-core/target/helix-core-pkg/bin
-  - chmod +x *.sh
-```
-
-Get help:
-
-```
-  - ./helix-admin.sh --help
-```
-
-All other commands have this form:
-
-```
-  ./helix-admin.sh --zkSvr <ZookeeperServerAddress> <command> <parameters>
-```
-
-Admin commands and brief description:
-
-| Command syntax | Description |
-| -------------- | ----------- |
-| _\-\-activateCluster \<clusterName controllerCluster true/false\>_ | Enable/disable a cluster in distributed controller mode |
-| _\-\-addCluster \<clusterName\>_ | Add a new cluster |
-| _\-\-addIdealState \<clusterName resourceName fileName.json\>_ | Add an ideal state to a cluster |
-| _\-\-addInstanceTag \<clusterName instanceName tag\>_ | Add a tag to an instance |
-| _\-\-addNode \<clusterName instanceId\>_ | Add an instance to a cluster |
-| _\-\-addResource \<clusterName resourceName partitionNumber stateModelName\>_ | Add a new resource to a cluster |
-| _\-\-addResourceProperty \<clusterName resourceName propertyName propertyValue\>_ | Add a resource property |
-| _\-\-addStateModelDef \<clusterName fileName.json\>_ | Add a State model definition to a cluster |
-| _\-\-dropCluster \<clusterName\>_ | Delete a cluster |
-| _\-\-dropNode \<clusterName instanceId\>_ | Remove a node from a cluster |
-| _\-\-dropResource \<clusterName resourceName\>_ | Remove an existing resource from a cluster |
-| _\-\-enableCluster \<clusterName true/false\>_ | Enable/disable a cluster |
-| _\-\-enableInstance \<clusterName instanceId true/false\>_ | Enable/disable an instance |
-| _\-\-enablePartition \<true/false clusterName nodeId resourceName partitionName\>_ | Enable/disable a partition |
-| _\-\-getConfig \<configScope configScopeArgs configKeys\>_ | Get user configs |
-| _\-\-getConstraints \<clusterName constraintType\>_ | Get constraints |
-| _\-\-help_ | Print help information |
-| _\-\-instanceGroupTag \<instanceTag\>_ | Specify instance group tag, used with rebalance command |
-| _\-\-listClusterInfo \<clusterName\>_ | Show information of a cluster |
-| _\-\-listClusters_ | List all clusters |
-| _\-\-listInstanceInfo \<clusterName instanceId\>_ | Show information of an instance |
-| _\-\-listInstances \<clusterName\>_ | List all instances in a cluster |
-| _\-\-listPartitionInfo \<clusterName resourceName partitionName\>_ | Show information of a partition |
-| _\-\-listResourceInfo \<clusterName resourceName\>_ | Show information of a resource |
-| _\-\-listResources \<clusterName\>_ | List all resources in a cluster |
-| _\-\-listStateModel \<clusterName stateModelName\>_ | Show information of a state model |
-| _\-\-listStateModels \<clusterName\>_ | List all state models in a cluster |
-| _\-\-maxPartitionsPerNode \<maxPartitionsPerNode\>_ | Specify the max partitions per instance, used with addResourceGroup command |
-| _\-\-rebalance \<clusterName resourceName replicas\>_ | Rebalance a resource |
-| _\-\-removeConfig \<configScope configScopeArgs configKeys\>_ | Remove user configs |
-| _\-\-removeConstraint \<clusterName constraintType constraintId\>_ | Remove a constraint |
-| _\-\-removeInstanceTag \<clusterName instanceId tag\>_ | Remove a tag from an instance |
-| _\-\-removeResourceProperty \<clusterName resourceName propertyName\>_ | Remove a resource property |
-| _\-\-resetInstance \<clusterName instanceId\>_ | Reset all erroneous partitions on an instance |
-| _\-\-resetPartition \<clusterName instanceId resourceName partitionName\>_ | Reset an erroneous partition |
-| _\-\-resetResource \<clusterName resourceName\>_ | Reset all erroneous partitions of a resource |
-| _\-\-setConfig \<configScope configScopeArgs configKeyValueMap\>_ | Set user configs |
-| _\-\-setConstraint \<clusterName constraintType constraintId constraintKeyValueMap\>_ | Set a constraint |
-| _\-\-swapInstance \<clusterName oldInstance newInstance\>_ | Swap an old instance with a new instance |
-| _\-\-zkSvr \<ZookeeperServerAddress\>_ | Provide zookeeper address |
-
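-For example, a typical bootstrap sequence composed from the commands above might look like this (the cluster, node, and resource names are placeholders):
-
-```
-  ./helix-admin.sh --zkSvr localhost:2199 --addCluster MyCluster
-  ./helix-admin.sh --zkSvr localhost:2199 --addNode MyCluster localhost_12913
-  ./helix-admin.sh --zkSvr localhost:2199 --addResource MyCluster MyDB 8 MasterSlave
-  ./helix-admin.sh --zkSvr localhost:2199 --rebalance MyCluster MyDB 3
-```
-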
-### REST interface
-
-The REST interface comes with the helix-admin-webapp package:
-
-``` 
-  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-  - cd incubator-helix 
-  - ./build
-  - cd helix-admin-webapp/target/helix-admin-webapp-pkg/bin
-  - chmod +x *.sh
-  - ./run-rest-admin.sh --zkSvr <zookeeperAddress> --port <port> # make sure ZooKeeper is running
-```
-
-#### URL and support methods
-
-* _/clusters_
-    * List all clusters
-
-    ```
-      curl http://localhost:8100/clusters
-    ```
-
-    * Add a cluster
-    
-    ```
-      curl -d 'jsonParameters={"command":"addCluster","clusterName":"MyCluster"}' -H "Content-Type: application/json" http://localhost:8100/clusters
-    ```
-
-* _/clusters/{clusterName}_
-    * List cluster information
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster
-    ```
-
-    * Enable/disable a cluster in distributed controller mode
-    
-    ```
-      curl -d 'jsonParameters={"command":"activateCluster","grandCluster":"MyControllerCluster","enabled":"true"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster
-    ```
-
-    * Remove a cluster
-    
-    ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster
-    ```
-    
-* _/clusters/{clusterName}/resourceGroups_
-    * List all resources in a cluster
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/resourceGroups
-    ```
-    
-    * Add a resource to a cluster
-    
-    ```
-      curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
-    ```
-
-* _/clusters/{clusterName}/resourceGroups/{resourceName}_
-    * List resource information
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
-    ```
-    
-    * Drop a resource
-    
-    ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
-    ```
-
-    * Reset all erroneous partitions of a resource
-    
-    ```
-      curl -d 'jsonParameters={"command":"resetResource"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
-    ```
-
-* _/clusters/{clusterName}/resourceGroups/{resourceName}/idealState_
-    * Rebalance a resource
-    
-    ```
-      curl -d 'jsonParameters={"command":"rebalance","replicas":"3"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
-    ```
-
-    * Add an ideal state
-    
-    ```
-    echo jsonParameters={
-    "command":"addIdealState"
-       }&newIdealState={
-      "id" : "MyDB",
-      "simpleFields" : {
-        "IDEAL_STATE_MODE" : "AUTO",
-        "NUM_PARTITIONS" : "8",
-        "REBALANCE_MODE" : "SEMI_AUTO",
-        "REPLICAS" : "0",
-        "STATE_MODEL_DEF_REF" : "MasterSlave",
-        "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
-      },
-      "listFields" : {
-      },
-      "mapFields" : {
-        "MyDB_0" : {
-          "localhost_1001" : "MASTER",
-          "localhost_1002" : "SLAVE"
-        }
-      }
-    }
-    > newIdealState.json
-    curl -d @'./newIdealState.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
-    ```
-    
-    * Add resource property
-    
-    ```
-      curl -d 'jsonParameters={"command":"addResourceProperty","REBALANCE_TIMER_PERIOD":"500"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
-    ```
-    
-* _/clusters/{clusterName}/resourceGroups/{resourceName}/externalView_
-    * Show resource external view
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/externalView
-    ```
-* _/clusters/{clusterName}/instances_
-    * List all instances
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/instances
-    ```
-
-    * Add an instance
-    
-    ```
-    curl -d 'jsonParameters={"command":"addInstance","instanceNames":"localhost_1001"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
-    ```
-    
-    * Swap an instance
-    
-    ```
-      curl -d 'jsonParameters={"command":"swapInstance","oldInstance":"localhost_1001", "newInstance":"localhost_1002"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
-    ```
-* _/clusters/{clusterName}/instances/{instanceName}_
-    * Show instance information
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/instances/localhost_1001
-    ```
-    
-    * Enable/disable an instance
-    
-    ```
-      curl -d 'jsonParameters={"command":"enableInstance","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
-    ```
-
-    * Drop an instance
-    
-    ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster/instances/localhost_1001
-    ```
-    
-    * Disable/enable partitions on an instance
-    
-    ```
-      curl -d 'jsonParameters={"command":"enablePartition","resource": "MyDB","partition":"MyDB_0",  "enabled" : "false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
-    ```
-    
-    * Reset an erroneous partition on an instance
-    
-    ```
-      curl -d 'jsonParameters={"command":"resetPartition","resource": "MyDB","partition":"MyDB_0"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
-    ```
-
-    * Reset all erroneous partitions on an instance
-    
-    ```
-      curl -d 'jsonParameters={"command":"resetInstance"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
-    ```
-
-* _/clusters/{clusterName}/configs_
-    * Get user cluster level config
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/configs/cluster
-    ```
-    
-    * Set user cluster level config
-    
-    ```
-      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
-    ```
-
-    * Remove user cluster level config
-    
-    ```
-    curl -d 'jsonParameters={"command":"removeConfig","configs":"key1,key2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
-    ```
-    
-    * Get/set/remove user participant level config
-    
-    ```
-      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
-    ```
-    
-    * Get/set/remove resource level config
-    
-    ```
-    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/resource/MyDB
-    ```
-
-* _/clusters/{clusterName}/controller_
-    * Show controller information
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/Controller
-    ```
-    
-    * Enable/disable cluster
-    
-    ```
-      curl -d 'jsonParameters={"command":"enableCluster","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/Controller
-    ```
-
-* _/zkPath/{path}_
-    * Get information for zookeeper path
-    
-    ```
-      curl http://localhost:8100/zkPath/MyCluster
-    ```
-
-* _/clusters/{clusterName}/StateModelDefs_
-    * Show all state model definitions
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/StateModelDefs
-    ```
-
-    * Add a state model definition
-    
-    ```
-      echo jsonParameters={
-        "command":"addStateModelDef"
-       }&newStateModelDef={
-          "id" : "OnlineOffline",
-          "simpleFields" : {
-            "INITIAL_STATE" : "OFFLINE"
-          },
-          "listFields" : {
-            "STATE_PRIORITY_LIST" : [ "ONLINE", "OFFLINE", "DROPPED" ],
-            "STATE_TRANSITION_PRIORITYLIST" : [ "OFFLINE-ONLINE", "ONLINE-OFFLINE", "OFFLINE-DROPPED" ]
-          },
-          "mapFields" : {
-            "DROPPED.meta" : {
-              "count" : "-1"
-            },
-            "OFFLINE.meta" : {
-              "count" : "-1"
-            },
-            "OFFLINE.next" : {
-              "DROPPED" : "DROPPED",
-              "ONLINE" : "ONLINE"
-            },
-            "ONLINE.meta" : {
-              "count" : "R"
-            },
-            "ONLINE.next" : {
-              "DROPPED" : "OFFLINE",
-              "OFFLINE" : "OFFLINE"
-            }
-          }
-        }
-        > newStateModelDef.json
-        curl -d @'./newStateModelDef.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/StateModelDefs
-    ```
-
-* _/clusters/{clusterName}/StateModelDefs/{stateModelDefName}_
-    * Show a state model definition
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/StateModelDefs/OnlineOffline
-    ```
-
-* _/clusters/{clusterName}/constraints/{constraintType}_
-    * Show all constraints
-    
-    ```
-      curl http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT
-    ```
-
-    * Set a constraint
-    
-    ```
-       curl -d 'jsonParameters={"constraintAttributes":"RESOURCE=MyDB,CONSTRAINT_VALUE=1"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
-    ```
-    
-    * Remove a constraint
-    
-    ```
-      curl -X DELETE http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
-    ```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_controller.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_controller.md b/src/site/markdown/tutorial_controller.md
deleted file mode 100644
index 8e7e7ad..0000000
--- a/src/site/markdown/tutorial_controller.md
+++ /dev/null
@@ -1,94 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Controller</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Controller
-
-Next, let\'s implement the controller.  This is the brain of the cluster.  Helix makes sure there is exactly one active controller running the cluster.
-
-### Start the Helix agent
-
-
-The Helix agent requires the following parameters:
- 
-* clusterName: A logical name to represent the group of nodes
-* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
-* instanceType: Type of the process. This can be one of the following types, in this case use CONTROLLER:
-    * CONTROLLER: Process that controls the cluster, any number of controllers can be started but only one will be active at any given time.
-    * PARTICIPANT: Process that performs the actual task in the distributed system. 
-    * SPECTATOR: Process that observes the changes in the cluster.
-    * ADMIN: To carry out system admin actions.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
-
-```
-      manager = HelixManagerFactory.getZKHelixManager(clusterName,
-                                                      instanceName,
-                                                      instanceType,
-                                                      zkConnectString);
-```
-
-### Controller Code
-
-The Controller needs to know about all changes in the cluster. Helix takes care of this with the default implementation.
-If you need additional functionality, see GenericHelixController on how to configure the pipeline.
-
-```
-      manager = HelixManagerFactory.getZKHelixManager(clusterName,
-                                                          instanceName,
-                                                          InstanceType.CONTROLLER,
-                                                          zkConnectString);
-     manager.connect();
-     GenericHelixController controller = new GenericHelixController();
-     manager.addConfigChangeListener(controller);
-     manager.addLiveInstanceChangeListener(controller);
-     manager.addIdealStateChangeListener(controller);
-     manager.addExternalViewChangeListener(controller);
-     manager.addControllerListener(controller);
-```
-The snippet above shows how the controller is started. You can also start the controller using the command-line interface.
-  
-```
-cd helix/helix-core/target/helix-core-pkg/bin
-./run-helix-controller.sh --zkSvr <Zookeeper ServerAddress (Required)>  --cluster <Cluster name (Required)>
-```
-
-### Controller deployment modes
-
-Helix provides multiple options to deploy the controller.
-
-#### STANDALONE
-
-The Controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since a single controller is a single point of failure, multiple controller processes are required for reliability. Even if multiple controllers are running, only one will be actively managing the cluster at any time, as decided by a leader-election process. If the leader fails, another controller will take over managing the cluster.
-
-Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the Controller As a Service option below.
-
-#### EMBEDDED
-
-If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants.
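-
-A minimal sketch of one way to embed the controller (illustrative only): the same process connects once as a participant and once as a controller, and leader election still ensures that only one embedded controller is active across the cluster. The state model factory and names here are placeholders.
-
-```
-HelixManager participantManager = HelixManagerFactory.getZKHelixManager(
-    clusterName, instanceName, InstanceType.PARTICIPANT, zkConnectString);
-participantManager.getStateMachineEngine()
-    .registerStateModelFactory(stateModelName, stateModelFactory);
-participantManager.connect();
-
-// the embedded controller; leader election ensures only one is active
-HelixManager controllerManager = HelixManagerFactory.getZKHelixManager(
-    clusterName, instanceName, InstanceType.CONTROLLER, zkConnectString);
-controllerManager.connect();
-```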
-
-#### CONTROLLER AS A SERVICE
-
-One of the cool features we added in Helix is to use a set of controllers to manage a large number of clusters. 
-
-For example, if you have X clusters to be managed, instead of deploying X*3 controllers (3 controllers per cluster for fault tolerance), one can deploy just 3 controllers. Each controller then manages X/3 clusters, and if any controller fails, each of the remaining two manages X/2 clusters.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_health.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_health.md b/src/site/markdown/tutorial_health.md
deleted file mode 100644
index e1a7f3c..0000000
--- a/src/site/markdown/tutorial_health.md
+++ /dev/null
@@ -1,46 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Customizing Health Checks</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Customizing Health Checks
-
-In this chapter, we\'ll learn how to customize health checks based on metrics of your distributed system.
-
-### Health Checks
-
-Note: _this is currently in development and is not yet ready for production._
-
-Helix provides the ability for each node in the system to report health metrics on a periodic basis. 
-
-Helix supports multiple ways to aggregate these metrics:
-
-* SUM
-* AVG
-* EXPONENTIAL DECAY
-* WINDOW
-
-Helix persists the aggregated value only.
-
-Applications can define a threshold on the aggregate values according to the SLAs, and when the SLA is violated Helix will fire an alert. 
-Currently Helix only fires an alert, but in a future release we plan to use these metrics to either mark the node dead or load balance the partitions.
-This feature will be valuable for distributed systems that support multi-tenancy and have a large variation in work load patterns.  In addition, this can be used to detect skewed partitions (hotspots) and rebalance the cluster.
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_messaging.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_messaging.md b/src/site/markdown/tutorial_messaging.md
deleted file mode 100644
index ff73ef0..0000000
--- a/src/site/markdown/tutorial_messaging.md
+++ /dev/null
@@ -1,71 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Messaging</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Messaging
-
-In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster. This is quite useful in practice, since nodes in a distributed system commonly require a mechanism to interact with each other.
-
-### Example: Bootstrapping a Replica
-
-Consider a search system where an index replica starts up without an index. A typical solution is to get the index from a common location, or to copy it from another replica.
-
-Helix provides a messaging API for intra-cluster communication between nodes in the system. The message recipient is specified in terms of resource, partition, and state rather than hostnames, and Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
-Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
-
-This is a very generic API that can also be used to schedule periodic tasks in the cluster, such as data backups, log cleanup, etc.
-System admins can also perform ad-hoc tasks, such as an on-demand backup or a system command (such as rm -rf ;) across all nodes of the cluster.
-
-```
-      ClusterMessagingService messagingService = manager.getMessagingService();
-
-      // Construct the Message
-      Message requestBackupUriRequest = new Message(
-          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
-      requestBackupUriRequest
-          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
-      requestBackupUriRequest.setMsgState(MessageState.NEW);
-
-      // Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
-      Criteria recipientCriteria = new Criteria();
-      recipientCriteria.setInstanceName("%");
-      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
-      recipientCriteria.setResource("MyDB");
-      recipientCriteria.setPartition("");
-
-      // Should be processed only by process(es) that are active at the time of sending the message
-      //   This means that if the recipient is restarted after the message is sent, it will not be processed.
-      recipientCriteria.setSessionSpecific(true);
-
-      // wait for 30 seconds
-      int timeout = 30000;
-
-      // the handler that will be invoked when any recipient responds to the message.
-      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
-
-      // this will return only after all recipients respond or after timeout
-      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
-          requestBackupUriRequest, responseHandler, timeout);
-```
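-
-The response handler above is application code. Here is a minimal sketch of what it might look like, assuming it extends Helix\'s AsyncCallback (the "BOOTSTRAP_URL" result key is a placeholder agreed on by sender and recipient):
-
-```
-public class BootstrapReplyHandler extends AsyncCallback {
-  @Override
-  public void onTimeOut() {
-    // no replica replied within the timeout; fall back to a default source
-  }
-
-  @Override
-  public void onReplyMessage(Message message) {
-    // each reply carries the recipient's result map; pick a URL to bootstrap from
-    String bootstrapUrl = message.getResultMap().get("BOOTSTRAP_URL");
-  }
-}
-```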
-
-See HelixManager.DefaultMessagingService in [Javadocs](./apidocs/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more info.
-


[12/16] git commit: [HELIX-270] Include documentation for previous version on the website

Posted by ka...@apache.org.
[HELIX-270] Include documentation for previous version on the website


Project: http://git-wip-us.apache.org/repos/asf/incubator-helix/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-helix/commit/150ce693
Tree: http://git-wip-us.apache.org/repos/asf/incubator-helix/tree/150ce693
Diff: http://git-wip-us.apache.org/repos/asf/incubator-helix/diff/150ce693

Branch: refs/heads/master
Commit: 150ce693942cd7d95179c32242be6598711ad207
Parents: 9f82f6e
Author: Kanak Biscuitwala <ka...@apache.org>
Authored: Fri Nov 15 14:37:39 2013 -0800
Committer: Kanak Biscuitwala <ka...@apache.org>
Committed: Fri Nov 15 14:37:39 2013 -0800

----------------------------------------------------------------------
 pom.xml                                         |   6 +-
 .../releasenotes/release-0.6.0-incubating.apt   |  77 ---
 .../src/site/markdown/Building.md               |  46 ++
 .../0.6.1-incubating/src/site/markdown/index.md |  84 +--
 .../src/site/markdown/tutorial_messaging.md     |   2 +-
 .../src/site/markdown/tutorial_propstore.md     |   2 +-
 .../0.6.1-incubating/src/site/site.xml          |   9 +-
 site-releases/0.6.2-incubating/pom.xml          |  51 ++
 .../src/site/apt/privacy-policy.apt             |  52 ++
 .../releasenotes/release-0.6.2-incubating.apt   | 181 ++++++
 .../0.6.2-incubating/src/site/apt/releasing.apt | 107 ++++
 .../src/site/markdown/Architecture.md           | 252 ++++++++
 .../src/site/markdown/Building.md               |  46 ++
 .../src/site/markdown/Concepts.md               | 275 ++++++++
 .../src/site/markdown/Features.md               | 313 ++++++++++
 .../src/site/markdown/Quickstart.md             | 626 +++++++++++++++++++
 .../src/site/markdown/Tutorial.md               | 205 ++++++
 .../0.6.2-incubating/src/site/markdown/index.md |  58 ++
 .../src/site/markdown/recipes/lock_manager.md   | 253 ++++++++
 .../markdown/recipes/rabbitmq_consumer_group.md | 227 +++++++
 .../recipes/rsync_replicated_file_store.md      | 165 +++++
 .../site/markdown/recipes/service_discovery.md  | 191 ++++++
 .../site/markdown/recipes/task_dag_execution.md | 204 ++++++
 .../src/site/markdown/tutorial_admin.md         | 407 ++++++++++++
 .../src/site/markdown/tutorial_controller.md    |  94 +++
 .../src/site/markdown/tutorial_health.md        |  46 ++
 .../src/site/markdown/tutorial_messaging.md     |  71 +++
 .../src/site/markdown/tutorial_participant.md   | 105 ++++
 .../src/site/markdown/tutorial_propstore.md     |  34 +
 .../src/site/markdown/tutorial_rebalance.md     | 181 ++++++
 .../src/site/markdown/tutorial_spectator.md     |  76 +++
 .../src/site/markdown/tutorial_state.md         | 131 ++++
 .../src/site/markdown/tutorial_throttling.md    |  38 ++
 .../markdown/tutorial_user_def_rebalancer.md    | 172 +++++
 .../src/site/markdown/tutorial_yaml.md          | 102 +++
 .../src/site/resources/.htaccess                |  20 +
 .../src/site/resources/download.cgi             |  22 +
 .../site/resources/images/HELIX-components.png  | Bin 0 -> 82112 bytes
 .../src/site/resources/images/PFS-Generic.png   | Bin 0 -> 72435 bytes
 .../site/resources/images/RSYNC_BASED_PFS.png   | Bin 0 -> 78007 bytes
 .../resources/images/bootstrap_statemodel.gif   | Bin 0 -> 24919 bytes
 .../resources/images/helix-architecture.png     | Bin 0 -> 282390 bytes
 .../src/site/resources/images/helix-logo.jpg    | Bin 0 -> 13659 bytes
 .../resources/images/helix-znode-layout.png     | Bin 0 -> 53074 bytes
 .../src/site/resources/images/statemachine.png  | Bin 0 -> 41641 bytes
 .../src/site/resources/images/system.png        | Bin 0 -> 79791 bytes
 .../0.6.2-incubating/src/site/site.xml          | 119 ++++
 .../src/site/xdoc/download.xml.vm               | 193 ++++++
 .../0.6.2-incubating/src/test/conf/testng.xml   |  27 +
 site-releases/0.7.0-incubating/pom.xml          |  51 ++
 .../src/site/apt/privacy-policy.apt             |  52 ++
 .../releasenotes/release-0.7.0-incubating.apt   | 174 ++++++
 .../0.7.0-incubating/src/site/apt/releasing.apt | 107 ++++
 .../src/site/markdown/Architecture.md           | 252 ++++++++
 .../src/site/markdown/Building.md               |  46 ++
 .../src/site/markdown/Concepts.md               | 275 ++++++++
 .../src/site/markdown/Features.md               | 313 ++++++++++
 .../src/site/markdown/Quickstart.md             | 626 +++++++++++++++++++
 .../src/site/markdown/Tutorial.md               | 284 +++++++++
 .../src/site/markdown/UseCases.md               | 113 ++++
 .../0.7.0-incubating/src/site/markdown/index.md |  60 ++
 .../src/site/markdown/recipes/lock_manager.md   | 253 ++++++++
 .../markdown/recipes/rabbitmq_consumer_group.md | 227 +++++++
 .../recipes/rsync_replicated_file_store.md      | 165 +++++
 .../site/markdown/recipes/service_discovery.md  | 191 ++++++
 .../site/markdown/recipes/task_dag_execution.md | 204 ++++++
 .../markdown/recipes/user_def_rebalancer.md     | 285 +++++++++
 .../src/site/markdown/tutorial_accessors.md     | 125 ++++
 .../src/site/markdown/tutorial_admin.md         | 407 ++++++++++++
 .../src/site/markdown/tutorial_controller.md    |  79 +++
 .../src/site/markdown/tutorial_health.md        |  46 ++
 .../src/site/markdown/tutorial_messaging.md     |  71 +++
 .../src/site/markdown/tutorial_participant.md   |  97 +++
 .../src/site/markdown/tutorial_propstore.md     |  34 +
 .../src/site/markdown/tutorial_rebalance.md     | 181 ++++++
 .../src/site/markdown/tutorial_spectator.md     |  76 +++
 .../src/site/markdown/tutorial_state.md         | 131 ++++
 .../src/site/markdown/tutorial_throttling.md    |  38 ++
 .../markdown/tutorial_user_def_rebalancer.md    | 227 +++++++
 .../src/site/markdown/tutorial_yaml.md          | 102 +++
 .../src/site/resources/.htaccess                |  20 +
 .../src/site/resources/download.cgi             |  22 +
 .../site/resources/images/HELIX-components.png  | Bin 0 -> 82112 bytes
 .../src/site/resources/images/PFS-Generic.png   | Bin 0 -> 72435 bytes
 .../site/resources/images/RSYNC_BASED_PFS.png   | Bin 0 -> 78007 bytes
 .../resources/images/bootstrap_statemodel.gif   | Bin 0 -> 24919 bytes
 .../resources/images/helix-architecture.png     | Bin 0 -> 282390 bytes
 .../src/site/resources/images/helix-logo.jpg    | Bin 0 -> 13659 bytes
 .../resources/images/helix-znode-layout.png     | Bin 0 -> 53074 bytes
 .../src/site/resources/images/statemachine.png  | Bin 0 -> 41641 bytes
 .../src/site/resources/images/system.png        | Bin 0 -> 79791 bytes
 .../0.7.0-incubating/src/site/site.xml          | 120 ++++
 .../src/site/xdoc/download.xml.vm               | 193 ++++++
 .../0.7.0-incubating/src/test/conf/testng.xml   |  27 +
 site-releases/pom.xml                           |   3 +
 site-releases/trunk/pom.xml                     |  51 ++
 .../trunk/src/site/apt/privacy-policy.apt       |  52 ++
 site-releases/trunk/src/site/apt/releasing.apt  | 107 ++++
 .../trunk/src/site/markdown/Architecture.md     | 252 ++++++++
 .../trunk/src/site/markdown/Building.md         |  29 +
 .../trunk/src/site/markdown/Concepts.md         | 275 ++++++++
 .../trunk/src/site/markdown/Features.md         | 313 ++++++++++
 .../trunk/src/site/markdown/Quickstart.md       | 621 ++++++++++++++++++
 .../trunk/src/site/markdown/Tutorial.md         | 284 +++++++++
 .../trunk/src/site/markdown/UseCases.md         | 113 ++++
 site-releases/trunk/src/site/markdown/index.md  |  56 ++
 .../src/site/markdown/recipes/lock_manager.md   | 253 ++++++++
 .../markdown/recipes/rabbitmq_consumer_group.md | 227 +++++++
 .../recipes/rsync_replicated_file_store.md      | 165 +++++
 .../site/markdown/recipes/service_discovery.md  | 191 ++++++
 .../site/markdown/recipes/task_dag_execution.md | 204 ++++++
 .../markdown/recipes/user_def_rebalancer.md     | 285 +++++++++
 .../src/site/markdown/tutorial_accessors.md     | 125 ++++
 .../trunk/src/site/markdown/tutorial_admin.md   | 407 ++++++++++++
 .../src/site/markdown/tutorial_controller.md    |  79 +++
 .../trunk/src/site/markdown/tutorial_health.md  |  46 ++
 .../src/site/markdown/tutorial_messaging.md     |  71 +++
 .../src/site/markdown/tutorial_participant.md   |  97 +++
 .../src/site/markdown/tutorial_propstore.md     |  34 +
 .../src/site/markdown/tutorial_rebalance.md     | 181 ++++++
 .../src/site/markdown/tutorial_spectator.md     |  76 +++
 .../trunk/src/site/markdown/tutorial_state.md   | 131 ++++
 .../src/site/markdown/tutorial_throttling.md    |  38 ++
 .../markdown/tutorial_user_def_rebalancer.md    | 227 +++++++
 .../trunk/src/site/markdown/tutorial_yaml.md    | 102 +++
 .../trunk/src/site/resources/.htaccess          |  20 +
 .../trunk/src/site/resources/download.cgi       |  22 +
 .../site/resources/images/HELIX-components.png  | Bin 0 -> 82112 bytes
 .../src/site/resources/images/PFS-Generic.png   | Bin 0 -> 72435 bytes
 .../site/resources/images/RSYNC_BASED_PFS.png   | Bin 0 -> 78007 bytes
 .../resources/images/bootstrap_statemodel.gif   | Bin 0 -> 24919 bytes
 .../resources/images/helix-architecture.png     | Bin 0 -> 282390 bytes
 .../src/site/resources/images/helix-logo.jpg    | Bin 0 -> 13659 bytes
 .../resources/images/helix-znode-layout.png     | Bin 0 -> 53074 bytes
 .../src/site/resources/images/statemachine.png  | Bin 0 -> 41641 bytes
 .../trunk/src/site/resources/images/system.png  | Bin 0 -> 79791 bytes
 site-releases/trunk/src/site/site.xml           | 118 ++++
 .../trunk/src/site/xdoc/download.xml.vm         | 193 ++++++
 site-releases/trunk/src/test/conf/testng.xml    |  27 +
 .../releasenotes/release-0.6.2-incubating.apt   | 181 ++++++
 .../releasenotes/release-0.7.0-incubating.apt   | 174 ++++++
 src/site/apt/releasing.apt                      |  55 +-
 src/site/markdown/Concepts.md                   |   2 +-
 src/site/markdown/Features.md                   | 313 ----------
 src/site/markdown/Publications.md               |  37 ++
 src/site/markdown/Quickstart.md                 | 626 -------------------
 src/site/markdown/Tutorial.md                   | 205 ------
 src/site/markdown/index.md                      |  92 +--
 src/site/markdown/involved/building.md          |   6 +-
 src/site/markdown/recipes/lock_manager.md       | 253 --------
 .../markdown/recipes/rabbitmq_consumer_group.md | 227 -------
 .../recipes/rsync_replicated_file_store.md      | 165 -----
 src/site/markdown/recipes/service_discovery.md  | 191 ------
 src/site/markdown/recipes/task_dag_execution.md | 204 ------
 .../markdown/recipes/user_def_rebalancer.md     | 287 ---------
 src/site/markdown/tutorial_admin.md             | 407 ------------
 src/site/markdown/tutorial_controller.md        |  94 ---
 src/site/markdown/tutorial_health.md            |  46 --
 src/site/markdown/tutorial_messaging.md         |  71 ---
 src/site/markdown/tutorial_participant.md       | 105 ----
 src/site/markdown/tutorial_propstore.md         |  34 -
 src/site/markdown/tutorial_rebalance.md         | 181 ------
 src/site/markdown/tutorial_spectator.md         |  76 ---
 src/site/markdown/tutorial_state.md             | 131 ----
 src/site/markdown/tutorial_throttling.md        |  38 --
 .../markdown/tutorial_user_def_rebalancer.md    | 201 ------
 src/site/markdown/tutorial_yaml.md              | 102 ---
 src/site/resources/images/PFS-Generic.png       | Bin 72435 -> 0 bytes
 src/site/resources/images/RSYNC_BASED_PFS.png   | Bin 78007 -> 0 bytes
 src/site/site.xml                               |  34 +-
 170 files changed, 16738 insertions(+), 4219 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 292ec3a..010023c 100644
--- a/pom.xml
+++ b/pom.xml
@@ -311,7 +311,7 @@ under the License.
     <helix.release.arguments>-Papache-release</helix.release.arguments>
 
     <!-- for release changelog and download pages -->
-    <currentRelease>0.6.1-incubating</currentRelease>
+    <currentRelease>0.7.0-incubating</currentRelease>
 
     <!-- OSGi Properties -->
     <osgi.import />
@@ -507,6 +507,10 @@ under the License.
             <content>${helix.siteFilePath}</content>
             <checkoutDirectory>${helix.scmPubCheckoutDirectory}</checkoutDirectory>
             <skipDeletedFiles>${scmSkipDeletedFiles}</skipDeletedFiles>
+            <ignorePathsToDelete>
+              <ignorePathToDelete>javadocs</ignorePathToDelete>
+              <ignorePathToDelete>javadocs**</ignorePathToDelete>
+            </ignorePathsToDelete>
           </configuration>
           <dependencies>
             <dependency>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.1-incubating/src/site/apt/releasenotes/release-0.6.0-incubating.apt
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/apt/releasenotes/release-0.6.0-incubating.apt b/site-releases/0.6.1-incubating/src/site/apt/releasenotes/release-0.6.0-incubating.apt
deleted file mode 100644
index 16e2fbf..0000000
--- a/site-releases/0.6.1-incubating/src/site/apt/releasenotes/release-0.6.0-incubating.apt
+++ /dev/null
@@ -1,77 +0,0 @@
- -----
- Release Notes for 0.6.0-incubating Apache Helix
- -----
-
-~~ Licensed to the Apache Software Foundation (ASF) under one                      
-~~ or more contributor license agreements.  See the NOTICE file                    
-~~ distributed with this work for additional information                           
-~~ regarding copyright ownership.  The ASF licenses this file                      
-~~ to you under the Apache License, Version 2.0 (the                               
-~~ "License"); you may not use this file except in compliance                      
-~~ with the License.  You may obtain a copy of the License at                      
-~~                                                                                 
-~~   http://www.apache.org/licenses/LICENSE-2.0                                    
-~~                                                                                 
-~~ Unless required by applicable law or agreed to in writing,                      
-~~ software distributed under the License is distributed on an                     
-~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY                          
-~~ KIND, either express or implied.  See the License for the                       
-~~ specific language governing permissions and limitations                         
-~~ under the License.
-
-~~ NOTE: For help with the syntax of this file, see:
-~~ http://maven.apache.org/guides/mini/guide-apt-format.html
-
-Release Notes for 0.6.0-incubating Apache Helix
-
-  The Apache Helix team would like to announce the release of Apache Helix 0.6.0-incubating.
-
-  This is the first release under the Apache umbrella.
-
-  Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes. Helix provides the following features:
-
-  * Automatic assignment of resource/partition to nodes
-
-  * Node failure detection and recovery
-
-  * Dynamic addition of Resources
-
-  * Dynamic addition of nodes to the cluster
-
-  * Pluggable distributed state machine to manage the state of a resource via state transitions
-
-  * Automatic load balancing and throttling of transitions
-
-  []
-
-* Changes
-
-** Bug
-
- * [HELIX-1] - Use org.apache.helix package for java sources.
-
- * [HELIX-2] - Remove jsqlparser dependency from Helix
-
- * [HELIX-3] - Fix license headers in sources.
-
- * [HELIX-12] - Issue with starting multiple controllers with same name
- 
- * [HELIX-14] - error in helix-core ivy file
-
- []
-
-** Task
-
-  * [HELIX-4] - Remove deprecated file based implementation
-
-  * [HELIX-5] - Remove deprecated Accessors
-
-  * [HELIX-13] - New usecase to replicate files between replicas using simple rsync
-
-  * [HELIX-15] - Distributed lock manager recipe
-
-  []
-
-  Have Fun
-  --
-  The Apache Helix Team

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.1-incubating/src/site/markdown/Building.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/markdown/Building.md b/site-releases/0.6.1-incubating/src/site/markdown/Building.md
new file mode 100644
index 0000000..f79193e
--- /dev/null
+++ b/site-releases/0.6.1-incubating/src/site/markdown/Building.md
@@ -0,0 +1,46 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Build Instructions
+------------------
+
+Requirements: JDK 1.6+, Maven 2.0.8+
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+git checkout tags/helix-0.6.1-incubating
+mvn install package -DskipTests
+```
+
+Maven dependency
+
+```
+<dependency>
+  <groupId>org.apache.helix</groupId>
+  <artifactId>helix-core</artifactId>
+  <version>0.6.1-incubating</version>
+</dependency>
+```
+
+Download
+--------
+
+[0.6.1-incubating](./download.html)
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.1-incubating/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/markdown/index.md b/site-releases/0.6.1-incubating/src/site/markdown/index.md
index 632d95c..a358d88 100644
--- a/site-releases/0.6.1-incubating/src/site/markdown/index.md
+++ b/site-releases/0.6.1-incubating/src/site/markdown/index.md
@@ -28,11 +28,13 @@ Navigating the Documentation
 
 ### Hands-on Helix
 
+[Getting Helix](./Building.html)
+
 [Quickstart](./Quickstart.html)
 
 [Tutorial](./Tutorial.html)
 
-[Javadocs](./apidocs/index.html)
+[Javadocs](http://helix.incubator.apache.org/javadocs/0.6.1-incubating/index.html)
 
 ### Recipes
 
@@ -50,83 +52,3 @@ Navigating the Documentation
 
 [0.6.1-incubating](./download.html)
 
-
-What Is Helix
---------------
-Helix is a generic _cluster management_ framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes. 
-
-
-What Is Cluster Management
---------------------------
-To understand Helix, first you need to understand _cluster management_. A distributed system typically runs on multiple nodes for the following reasons:
-
-* scalability
-* fault tolerance
-* load balancing
-
-Each node performs one or more of the primary functions of the cluster, such as storing/serving data, producing/consuming data streams, etc.  Once configured for your system, Helix acts as the global brain for the system.  It is designed to make decisions that cannot be made in isolation.  Examples of decisions that require global knowledge and coordination:
-
-* scheduling of maintenance tasks, such as backups, garbage collection, file consolidation, index rebuilds
-* repartitioning of data or resources across the cluster
-* informing dependent systems of changes so they can react appropriately to cluster changes
-* throttling system tasks and changes
-
-While it is possible to integrate these functions into the distributed system, it complicates the code.  Helix has abstracted common cluster management tasks, enabling the system builder to model the desired behavior in a declarative state model, and let Helix manage the coordination.  The result is less new code to write, and a robust, highly operable system.
-
-
-Key Features of Helix
----------------------
-1. Automatic assignment of resource/partition to nodes
-2. Node failure detection and recovery
-3. Dynamic addition of Resources 
-4. Dynamic addition of nodes to the cluster
-5. Pluggable distributed state machine to manage the state of a resource via state transitions
-6. Automatic load balancing and throttling of transitions 
-
-
-Why Helix
----------
-Modeling a distributed system as a state machine with constraints on state and transitions has the following benefits:
-
-* Separates cluster management from the core functionality.
-* Quick transformation from a single node system to an operable, distributed system.
-* Simplicity: System components do not have to manage global cluster.  This division of labor makes it easier to build, debug, and maintain your system.
-
-
-Build Instructions
-------------------
-
-Requirements: Jdk 1.6+, Maven 2.0.8+
-
-```
-    git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-    cd incubator-helix
-    git checkout tags/helix-0.6.1-incubating
-    mvn install package -DskipTests 
-```
-
-Maven dependency
-
-```
-    <dependency>
-      <groupId>org.apache.helix</groupId>
-      <artifactId>helix-core</artifactId>
-      <version>0.6.1-incubating</version>
-    </dependency>
-```
-
-[Download](./download.html) Helix artifacts from here.
-   
-Publications
--------------
-
-* Untangling cluster management using Helix at [SOCC Oct 2012](http://www.socc2012.org/home/program)  
-    - [paper](https://915bbc94-a-62cb3a1a-s-sites.googlegroups.com/site/acm2012socc/helix_onecol.pdf)
-    - [presentation](http://www.slideshare.net/KishoreGopalakrishna/helix-socc-v10final)
-* Building distributed systems using Helix Apache Con Feb 2013
-    - [presentation at ApacheCon](http://www.slideshare.net/KishoreGopalakrishna/apache-con-buildingddsusinghelix)
-    - [presentation at VMWare](http://www.slideshare.net/KishoreGopalakrishna/apache-helix-presentation-at-vmware)
-* Data driven testing:
-    - [short talk at LSPE meetup](http://www.slideshare.net/KishoreGopalakrishna/data-driven-testing)
-    - [paper DBTest 2013 acm SIGMOD:will be published on Jun 24, 2013](http://dbtest2013.soe.ucsc.edu/Program.htm)
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.1-incubating/src/site/markdown/tutorial_messaging.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/markdown/tutorial_messaging.md b/site-releases/0.6.1-incubating/src/site/markdown/tutorial_messaging.md
index 2dda826..4b46671 100644
--- a/site-releases/0.6.1-incubating/src/site/markdown/tutorial_messaging.md
+++ b/site-releases/0.6.1-incubating/src/site/markdown/tutorial_messaging.md
@@ -63,5 +63,5 @@ System Admins can also perform ad-hoc tasks, such as on-demand backups or a syst
           requestBackupUriRequest, responseHandler, timeout);
 ```
 
-See HelixManager.DefaultMessagingService in [Javadocs](./apidocs/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more info.
+See HelixManager.DefaultMessagingService in [Javadocs](http://helix.incubator.apache.org/javadocs/0.6.1-incubating/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more info.
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.1-incubating/src/site/markdown/tutorial_propstore.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/markdown/tutorial_propstore.md b/site-releases/0.6.1-incubating/src/site/markdown/tutorial_propstore.md
index af3c60d..4ee9299 100644
--- a/site-releases/0.6.1-incubating/src/site/markdown/tutorial_propstore.md
+++ b/site-releases/0.6.1-incubating/src/site/markdown/tutorial_propstore.md
@@ -27,4 +27,4 @@ It is common that an application needs support for distributed, shared data stru
 
 While you could use Zookeeper directly, Helix supports caching the data with a write-through cache. This is far more efficient than reading from Zookeeper for every access.
 
-See [HelixManager.getHelixPropertyStore](./apidocs/reference/org/apache/helix/store/package-summary.html) for details.
+See [HelixManager.getHelixPropertyStore](http://helix.incubator.apache.org/javadocs/0.6.1-incubating/reference/org/apache/helix/store/package-summary.html) for details.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.1-incubating/src/site/site.xml
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/src/site/site.xml b/site-releases/0.6.1-incubating/src/site/site.xml
index 5ab7f87..7326162 100644
--- a/site-releases/0.6.1-incubating/src/site/site.xml
+++ b/site-releases/0.6.1-incubating/src/site/site.xml
@@ -58,13 +58,18 @@
       <item name="Release 0.6.1-incubating" href="http://helix.incubator.apache.org/site-releases/0.6.1-incubating-site/"/>
     </breadcrumbs>
 
-    <menu name="Helix 0.6.1-incubating">
+    <menu name="Apache Helix">
       <item name="Home" href="../../index.html"/>
+    </menu>
+
+    <menu name="Helix 0.6.1-incubating">
       <item name="Introduction" href="./index.html"/>
+      <item name="Getting Helix" href="./Building.html"/>
       <item name="Core concepts" href="./Concepts.html"/>
       <item name="Architecture" href="./Architecture.html"/>
       <item name="Quick Start" href="./Quickstart.html"/>
       <item name="Tutorial" href="./Tutorial.html"/>
+      <item name="Release Notes" href="releasenotes/release-0.6.1-incubating.html"/>
       <item name="Download" href="./download.html"/>
     </menu>
 
@@ -101,7 +106,7 @@
     <fluidoSkin>
       <topBarEnabled>true</topBarEnabled>
       <!-- twitter link work only with sidebar disabled -->
-      <sideBarEnabled>false</sideBarEnabled>
+      <sideBarEnabled>true</sideBarEnabled>
       <googleSearch></googleSearch>
       <twitter>
         <user>ApacheHelix</user>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/pom.xml
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/pom.xml b/site-releases/0.6.2-incubating/pom.xml
new file mode 100644
index 0000000..471ea4c
--- /dev/null
+++ b/site-releases/0.6.2-incubating/pom.xml
@@ -0,0 +1,51 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <parent>
+    <groupId>org.apache.helix</groupId>
+    <artifactId>site-releases</artifactId>
+    <version>0.7.1-incubating-SNAPSHOT</version>
+  </parent>
+
+  <artifactId>0.6.2-incubating-site</artifactId>
+  <packaging>bundle</packaging>
+  <name>Apache Helix :: Site :: 0.6.2-incubating</name>
+
+  <properties>
+  </properties>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.testng</groupId>
+      <artifactId>testng</artifactId>
+      <version>6.0.1</version>
+    </dependency>
+  </dependencies>
+  <build>
+    <pluginManagement>
+      <plugins>
+      </plugins>
+    </pluginManagement>
+    <plugins>
+    </plugins>
+  </build>
+</project>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/apt/privacy-policy.apt
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/apt/privacy-policy.apt b/site-releases/0.6.2-incubating/src/site/apt/privacy-policy.apt
new file mode 100644
index 0000000..ada9363
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/apt/privacy-policy.apt
@@ -0,0 +1,52 @@
+ ----
+ Privacy Policy
+ -----
+ Olivier Lamy
+ -----
+ 2013-02-04
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one
+~~ or more contributor license agreements.  See the NOTICE file
+~~ distributed with this work for additional information
+~~ regarding copyright ownership.  The ASF licenses this file
+~~ to you under the Apache License, Version 2.0 (the
+~~ "License"); you may not use this file except in compliance
+~~ with the License.  You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing,
+~~ software distributed under the License is distributed on an
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+~~ KIND, either express or implied.  See the License for the
+~~ specific language governing permissions and limitations
+~~ under the License.
+
+Privacy Policy
+
+  Information about your use of this website is collected using server access logs and a tracking cookie. The 
+  collected information consists of the following:
+
+  [[1]] The IP address from which you access the website;
+  
+  [[2]] The type of browser and operating system you use to access our site;
+  
+  [[3]] The date and time you access our site;
+  
+  [[4]] The pages you visit; and
+  
+  [[5]] The addresses of pages from where you followed a link to our site.
+
+  []
+
+  Part of this information is gathered using a tracking cookie set by the 
+  {{{http://www.google.com/analytics/}Google Analytics}} service and handled by Google as described in their 
+  {{{http://www.google.com/privacy.html}privacy policy}}. See your browser documentation for instructions on how to 
+  disable the cookie if you prefer not to share this data with Google.
+
+  We use the gathered information to help us make our site more useful to visitors and to better understand how and 
+  when our site is used. We do not track or collect personally identifiable information or associate gathered data 
+  with any personally identifying information from other sources.
+
+  By using this website, you consent to the collection of this data in the manner and for the purpose described above.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/apt/releasenotes/release-0.6.2-incubating.apt
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/apt/releasenotes/release-0.6.2-incubating.apt b/site-releases/0.6.2-incubating/src/site/apt/releasenotes/release-0.6.2-incubating.apt
new file mode 100644
index 0000000..51afc62
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/apt/releasenotes/release-0.6.2-incubating.apt
@@ -0,0 +1,181 @@
+ -----
+ Release Notes for Apache Helix 0.6.2-incubating
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one                      
+~~ or more contributor license agreements.  See the NOTICE file                    
+~~ distributed with this work for additional information                           
+~~ regarding copyright ownership.  The ASF licenses this file                      
+~~ to you under the Apache License, Version 2.0 (the                               
+~~ "License"); you may not use this file except in compliance                      
+~~ with the License.  You may obtain a copy of the License at                      
+~~                                                                                 
+~~   http://www.apache.org/licenses/LICENSE-2.0                                    
+~~                                                                                 
+~~ Unless required by applicable law or agreed to in writing,                      
+~~ software distributed under the License is distributed on an                     
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY                          
+~~ KIND, either express or implied.  See the License for the                       
+~~ specific language governing permissions and limitations                         
+~~ under the License.
+
+~~ NOTE: For help with the syntax of this file, see:
+~~ http://maven.apache.org/guides/mini/guide-apt-format.html
+
+Release Notes for Apache Helix 0.6.2-incubating
+
+  The Apache Helix team would like to announce the release of Apache Helix 0.6.2-incubating
+
+  This is the third release under the Apache umbrella.
+
+  Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes. Helix provides the following features:
+
+  * Automatic assignment of resource/partition to nodes
+
+  * Node failure detection and recovery
+
+  * Dynamic addition of Resources
+
+  * Dynamic addition of nodes to the cluster
+
+  * Pluggable distributed state machine to manage the state of a resource via state transitions
+
+  * Automatic load balancing and throttling of transitions
+
+  []
+
+* Changes
+
+** Sub-task
+
+  * [HELIX-28] - ZkHelixManager.handleNewSession() can happen when a liveinstance already exists
+
+  * [HELIX-85] - Remove mock service module
+
+  * [HELIX-106] - Remove all string constants in the code
+
+  * [HELIX-107] - Add support to set custom objects into ZNRecord
+
+  * [HELIX-124] - race condition in ZkHelixManager.handleNewSession()
+
+  * [HELIX-165] - Add dependency for Guava libraries
+
+  * [HELIX-169] - Take care of consecutive handleNewSession() and session expiry during handleNewSession() 
+
+  * [HELIX-170] - HelixManager#isLeader() should compare both instanceName and sessionId 
+
+  * [HELIX-195] - Race condition between FINALIZE callbacks and Zk Callbacks
+
+  * [HELIX-207] - Add javadocs to classes and public methods in the top-level package
+
+  * [HELIX-208] - Add javadocs to classes and public methods in the model package
+
+  * [HELIX-277] - FULL_AUTO rebalancer should not prefer nodes that are just coming up
+
+** Bug
+
+  * [HELIX-7] - Tune test parameters to fix random test failures
+
+  * [HELIX-87] - Bad repository links in website
+
+  * [HELIX-117] - backward incompatibility problem in accessing zkPath vis HelixWebAdmin
+
+  * [HELIX-118] - PropertyStore -> HelixPropertyStore backwards incompatible location
+
+  * [HELIX-119] - HelixManager serializer no longer needs ByteArraySerializer for /PROPERTYSTORE
+
+  * [HELIX-129] - ZKDumper should use byte[] instead of String to read/write file/zk
+
+  * [HELIX-131] - Connection timeout not set while connecting to zookeeper via zkHelixAdmin
+
+  * [HELIX-133] - Cluster-admin command parsing does not work with removeConfig
+
+  * [HELIX-140] - In ClusterSetup.java, the removeConfig is wrong wired to getConfig
+
+  * [HELIX-141] - Autorebalance does not work reliably and fails when replica>1
+
+  * [HELIX-144] - Need to validate StateModelDefinition when adding new StateModelDefinition to Cluster
+
+  * [HELIX-147] - Fix typo in Idealstate property max_partitions_per_instance
+
+  * [HELIX-148] - Current preferred placement for auto rebalace is suboptimal for n > p
+
+  * [HELIX-150] - Auto rebalance might not evenly distribute states across nodes
+
+  * [HELIX-151] - Auto rebalance doesn't assign some replicas when other nodes could make room
+
+  * [HELIX-153] - Auto rebalance tester uses the returned map fields, but production uses only list fields
+
+  * [HELIX-155] - PropertyKey.instances() is wrongly wired to CONFIG type instead of INSTANCES type
+
+  * [HELIX-197] - state model leak
+
+  * [HELIX-199] - ZNRecord should not publish rawPayload unless it exists
+
+  * [HELIX-216] - Allow HelixAdmin addResource to accept the old rebalancing types
+
+  * [HELIX-221] - Can't find default error->dropped transition method using name convention
+
+  * [HELIX-257] - Upgrade Restlet to 2.1.4 - due security flaw
+
+  * [HELIX-258] - Upgrade Apache Camel due to CVE-2013-4330
+
+  * [HELIX-264] - fix zkclient#close() bug
+
+  * [HELIX-279] - Apply gc handling fixes to main ZKHelixManager class
+
+  * [HELIX-280] - Full auto rebalancer should check for resource tag first
+
+  * [HELIX-288] - helix-core uses an old version of guava
+
+  * [HELIX-299] - Some files in 0.6.2 are missing license headers
+
+** Improvement
+
+  * [HELIX-20] - AUTO-REBALANCE helix controller should re-assign disabled partitions on a node to other available nodes
+
+  * [HELIX-70] - Make Helix OSGi ready
+
+  * [HELIX-149] - Allow clients to pass in preferred placement strategies
+
+  * [HELIX-198] - Unify helix code style
+
+  * [HELIX-218] - Add a reviewboard submission script
+
+  * [HELIX-284] - Support participant auto join in YAML cluster setup
+
+** New Feature
+
+  * [HELIX-215] - Allow setting up the cluster with a YAML file
+
+** Task
+
+  * [HELIX-95] - Tracker for 0.6.2 release
+
+  * [HELIX-154] - Auto rebalance algorithm should not depend on state
+
+  * [HELIX-166] - Rename modes to auto, semi-auto, and custom
+
+  * [HELIX-173] - Move rebalancing strategies to separate classes that implement the Rebalancer interface
+
+  * [HELIX-188] - Add admin command line / REST API documentations
+
+  * [HELIX-194] - ZNRecord has too many constructors
+
+  * [HELIX-205] - Have user-defined rebalancers use RebalanceMode.USER_DEFINED
+
+  * [HELIX-210] - Add support to set data with expect version in BaseDataAccessor
+
+  * [HELIX-217] - Remove mock service module
+
+  * [HELIX-273] - Rebalancer interface should remain unchanged in 0.6.2
+
+  * [HELIX-274] - Verify FULL_AUTO tagged node behavior
+
+  * [HELIX-285] - add integration test util's
+
+  []
+
+  Cheers,
+  --
+  The Apache Helix Team

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/apt/releasing.apt
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/apt/releasing.apt b/site-releases/0.6.2-incubating/src/site/apt/releasing.apt
new file mode 100644
index 0000000..11d0cd9
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/apt/releasing.apt
@@ -0,0 +1,107 @@
+ -----
+ Helix release process
+ -----
+ -----
+ 2012-12-15
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one
+~~ or more contributor license agreements.  See the NOTICE file
+~~ distributed with this work for additional information
+~~ regarding copyright ownership.  The ASF licenses this file
+~~ to you under the Apache License, Version 2.0 (the
+~~ "License"); you may not use this file except in compliance
+~~ with the License.  You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing,
+~~ software distributed under the License is distributed on an
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+~~ KIND, either express or implied.  See the License for the
+~~ specific language governing permissions and limitations
+~~ under the License.
+
+~~ NOTE: For help with the syntax of this file, see:
+~~ http://maven.apache.org/guides/mini/guide-apt-format.html
+
+Helix release process
+
+ [[1]] Post to the dev list a few days before you plan to do a Helix release
+
+ [[2]] Your Maven settings must contain the following entry to be able to deploy.
+
+ ~/.m2/settings.xml
+
++-------------
+   <server>
+     <id>apache.releases.https</id>
+     <username></username>
+     <password></password>
+   </server>
++-------------
+
+ [[3]] Apache DAV passwords
+
+ Add the following info into your ~/.netrc:
+
++-------------
+machine git-wip-us.apache.org login <apache username> <password>
++-------------
+ [[4]] Release Helix
+    You should have a GPG agent running in the session where you will run the Maven release commands (preferred), and confirm it works by running "gpg -ab" (type some text and press Ctrl-D).
+    If you do not have a GPG agent running, make sure that you have the "apache-release" profile set in your settings.xml as shown below.
+
+   Run the release
+
++-------------
+mvn release:prepare release:perform -B
++-------------
+
+  GPG configuration in the Maven settings.xml:
+
++-------------
+<profile>
+  <id>apache-release</id>
+  <properties>
+    <gpg.passphrase>[GPG_PASSWORD]</gpg.passphrase>
+  </properties>
+</profile>
++-------------
+
+ [[5]] Go to https://repository.apache.org and close your staged repository. Note the repository URL (format: https://repository.apache.org/content/repositories/orgapachehelix-019/org/apache/helix/helix/0.6-incubating/)
+
++-------------
+svn co https://dist.apache.org/repos/dist/dev/incubator/helix helix-dev-release
+cd helix-dev-release
+sh ./release-script-svn.sh version stagingRepoUrl
+svn add <new directory created with new version as name>
+svn ci
++-------------
+
+ [[6]] Validating the release
+
++-------------
+  * Download sources, extract, build and run tests - mvn clean package
+  * Verify license headers - mvn -Prat -DskipTests
+  * Download binaries and .asc files
+  * Download release manager's public key - From the KEYS file, get the release manager's public key finger print and run  gpg --keyserver pgpkeys.mit.edu --recv-key <key>
+  * Validate authenticity of key - run  gpg --fingerprint <key>
+  * Check signatures of all the binaries using gpg <binary>
++-------------
+
+ [[7]] Call for a vote on the dev list and wait 72 hours for the vote results. 3 binding votes are necessary for the release to be finalized.
+  After the vote has passed, move the files from dist dev to dist release: svn mv https://dist.apache.org/repos/dist/dev/incubator/helix/version https://dist.apache.org/repos/dist/release/incubator/helix/
+
+ [[8]] Prepare the release notes. Add a page in src/site/apt/releasenotes/ and change the value of \<currentRelease> in the parent pom.
+
+
+ [[9]] Send out an announcement of the release to:
+
+  * users@helix.incubator.apache.org
+
+  * dev@helix.incubator.apache.org
+
+ [[10]] Celebrate!
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/Architecture.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Architecture.md b/site-releases/0.6.2-incubating/src/site/markdown/Architecture.md
new file mode 100644
index 0000000..933e917
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/Architecture.md
@@ -0,0 +1,252 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Architecture</title>
+</head>
+
+Architecture
+----------------------------
+Helix aims to provide the following abilities to a distributed system:
+
+* Automatic management of a cluster hosting partitioned, replicated resources.
+* Soft and hard failure detection and handling.
+* Automatic load balancing via smart placement of resources on servers (nodes) based on server capacity and resource profile (size of partition, access patterns, etc.).
+* Centralized config management and self-discovery, eliminating the need to modify config on each node.
+* Fault tolerance and optimized rebalancing during cluster expansion.
+* Manages the entire operational lifecycle of a node: addition, start, stop, enable/disable without downtime.
+* Monitors cluster health and provides alerts on SLA violations.
+* Service discovery mechanism to route requests.
+
+To build such a system, we need a mechanism to coordinate between different nodes and other components in the system. This mechanism can be achieved with software that reacts to any change in the cluster and comes up with a set of tasks needed to bring the cluster to a stable state. The set of tasks will be assigned to one or more nodes in the cluster. Helix serves this purpose of managing the various components in the cluster.
+
+![Helix Design](images/system.png)
+
+Distributed System Components
+-----------------------------
+
+In general any distributed system cluster will have the following components and properties:
+
+* A set of nodes also referred to as instances.
+* A set of resources, which can be databases, Lucene indexes, or tasks.
+* Each resource is also divided into one or more partitions.
+* Each partition may have one or more copies called replicas.
+* Each replica can have a state associated with it. For example: Master, Slave, Leader, Standby, Online, Offline, etc.
+
+Roles
+-----
+
+![Helix Design](images/HELIX-components.png)
+
+Not all nodes in a distributed system will perform similar functionalities. For example, a few nodes might be serving requests and a few nodes might be sending requests, and some nodes might be controlling the nodes in the cluster. Thus, Helix categorizes nodes by their specific roles in the system.
+
+We have divided Helix nodes into 3 logical components based on their responsibilities:
+
+1. Participant: The nodes that actually host the distributed resources.
+2. Spectator: The nodes that simply observe the Participant state and route the request accordingly. Routers, for example, need to know the instance on which a partition is hosted and its state in order to route the request to the appropriate end point.
+3. Controller: The controller observes and controls the Participant nodes. It is responsible for coordinating all transitions in the cluster and ensuring that state constraints are satisfied and cluster stability is maintained. 
+
+These are simply logical components and can be deployed as per the system requirements. For example:
+
+1. The controller can be deployed as a separate service
+2. The controller can be deployed along with a Participant but only one Controller will be active at any given time.
+
+Both have pros and cons, which will be discussed later; one can choose the mode of deployment as per system needs.
+
+
+## Cluster state metadata store
+
+We need a distributed store to maintain the state of the cluster and a notification system to notify if there is any change in the cluster state. Helix uses Zookeeper to achieve this functionality.
+
+Zookeeper provides:
+
+* A way to represent PERSISTENT state, which remains until it is deleted.
+* A way to represent TRANSIENT/EPHEMERAL state, which vanishes when the process that created it dies.
+* A notification mechanism for changes in PERSISTENT and EPHEMERAL state.
+
+The namespace provided by ZooKeeper is much like that of a standard file system. A name is a sequence of path elements separated by a slash (/). Every node (ZNode) in ZooKeeper\'s namespace is identified by a path.
+
+More info on Zookeeper can be found at http://zookeeper.apache.org
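+
+To make the PERSISTENT/EPHEMERAL distinction concrete, here is a minimal sketch using the raw ZooKeeper client API (the connect string and paths are illustrative assumptions, not part of Helix):
+
+```
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooDefs;
+import org.apache.zookeeper.ZooKeeper;
+
+public class ZnodeExample {
+  public static void main(String[] args) throws Exception {
+    ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
+      public void process(WatchedEvent event) {
+        // Invoked on connection events and on changes to watched ZNodes
+        System.out.println("Event: " + event);
+      }
+    });
+    // PERSISTENT: remains until explicitly deleted
+    zk.create("/demo-persistent", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+    // EPHEMERAL: vanishes when the creating session dies
+    zk.create("/demo-ephemeral", new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+  }
+}
+```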
+
+## State machine and constraints
+
+Even though the concepts of Resources, Partitions, and Replicas are common to most distributed systems, one thing that differentiates one distributed system from another is the way each partition is assigned a state and the constraints on each state.
+
+For example:
+
+1. If a system is serving read-only data then all partition\'s replicas are equal and they can either be ONLINE or OFFLINE.
+2. If a system takes _both_ reads and writes but ensures that writes for each partition go through only one replica, the states will be MASTER, SLAVE, and OFFLINE. Writes go through the MASTER and replicate to the SLAVEs. Optionally, reads can go through SLAVEs.
+
+Apart from defining the state for each partition, the transition path to each state can be application specific. For example, in order to become MASTER it might be a requirement to first become a SLAVE. This ensures that if the SLAVE does not yet have the data, it can bootstrap it from other nodes in the system as part of the OFFLINE-SLAVE transition.
+
+Helix provides a way to configure an application-specific state machine along with constraints on each state. In addition to constraints on states, Helix also provides a way to specify constraints on transitions.  (More on this later.)
+
+```
+          OFFLINE  | SLAVE  |  MASTER  
+         _____________________________
+        |          |        |         |
+OFFLINE |   N/A    | SLAVE  | SLAVE   |
+        |__________|________|_________|
+        |          |        |         |
+SLAVE   |  OFFLINE |   N/A  | MASTER  |
+        |__________|________|_________|
+        |          |        |         |
+MASTER  | SLAVE    | SLAVE  |   N/A   |
+        |__________|________|_________|
+
+```
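+
+As a sketch of how such a state machine can be defined programmatically with the StateModelDefinition builder (the priorities and bounds below are assumptions; "R" stands for the replica count):
+
+```
+import org.apache.helix.model.StateModelDefinition;
+
+StateModelDefinition.Builder builder = new StateModelDefinition.Builder("MasterSlave");
+builder.initialState("OFFLINE");
+builder.addState("MASTER", 1);   // lower number = higher priority
+builder.addState("SLAVE", 2);
+builder.addState("OFFLINE", 3);
+builder.addTransition("OFFLINE", "SLAVE");
+builder.addTransition("SLAVE", "MASTER");
+builder.addTransition("MASTER", "SLAVE");
+builder.addTransition("SLAVE", "OFFLINE");
+builder.upperBound("MASTER", 1);         // at most one MASTER per partition
+builder.dynamicUpperBound("SLAVE", "R"); // as many SLAVEs as the replica count
+StateModelDefinition masterSlave = builder.build();
+```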
+
+![Helix Design](images/statemachine.png)
+
+## Concepts
+
+The following terms are used in Helix to model a state machine.
+
+* IdealState: The state we need the cluster to be in if all nodes are up and running. In other words, all state constraints are satisfied.
+* CurrentState: Represents the actual current state of each node in the cluster 
+* ExternalView: Represents the combined view of CurrentState of all nodes.  
+
+The goal of Helix is always to make the CurrentState of the system the same as the IdealState. Some scenarios where this may not be true are:
+
+* When all nodes are down
+* When one or more nodes fail
+* New nodes are added and the partitions need to be reassigned
+
+### IdealState
+
+Helix lets the application define the IdealState on a per-resource basis, which consists of:
+
+* List of partitions. Example: 64
+* Number of replicas for each partition. Example: 3
+* Node and State for each replica.
+
+Example:
+
+* Partition-1, replica-1, Master, Node-1
+* Partition-1, replica-2, Slave, Node-2
+* Partition-1, replica-3, Slave, Node-3
+* .....
+* .....
+* Partition-p, replica-3, Slave, Node-n
+
+Helix comes with various algorithms to automatically assign the partitions to nodes. The default algorithm minimizes the number of shuffles that happen when new nodes are added to the system.
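+
+As a sketch, a resource can be created and its IdealState computed automatically through HelixAdmin (the cluster name, resource name, partition count, and ZooKeeper address are assumptions):
+
+```
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.manager.zk.ZKHelixAdmin;
+
+HelixAdmin admin = new ZKHelixAdmin("localhost:2181");
+// Create a resource with 64 partitions using the MasterSlave state model
+admin.addResource("MYCLUSTER", "myDB", 64, "MasterSlave");
+// Compute an IdealState with 3 replicas per partition
+admin.rebalance("MYCLUSTER", "myDB", 3);
+```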
+
+### CurrentState
+
+Every instance in the cluster hosts one or more partitions of a resource. Each of the partitions has a state associated with it.
+
+Example Node-1
+
+* Partition-1, Master
+* Partition-2, Slave
+* ....
+* ....
+* Partition-p, Slave
+
+### ExternalView
+
+External clients need to know the state of each partition in the cluster and the node hosting that partition. Helix provides one view of the system to Spectators as the _ExternalView_. The ExternalView is simply an aggregate of the CurrentStates of all nodes.
+
+* Partition-1, replica-1, Master, Node-1
+* Partition-1, replica-2, Slave, Node-2
+* Partition-1, replica-3, Slave, Node-3
+* .....
+* .....
+* Partition-p, replica-3, Slave, Node-n
+
+## Process Workflow
+
+Mode of operation in a cluster
+
+A node process can be one of the following:
+
+* Participant: The process registers itself in the cluster and acts on the messages received in its queue and updates the current state.  Example: a storage node in a distributed database
+* Spectator: The process is simply interested in the changes in the ExternalView.
+* Controller: This process actively controls the cluster by reacting to changes in cluster state and sending messages to Participants.
+
+
+### Participant Node Process
+
+* When a node starts up, it registers itself under _LiveInstances_
+* After registering, it waits for new _Messages_ in the message queue
+* When it receives a message, it will perform the required task as indicated in the message
+* After the task is completed, depending on the task outcome it updates the CurrentState
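+
+As a sketch of the participant side, Helix invokes callbacks on a state model object for each transition message; the class name and method bodies below are assumptions for an OnlineOffline model:
+
+```
+import org.apache.helix.NotificationContext;
+import org.apache.helix.model.Message;
+import org.apache.helix.participant.statemachine.StateModel;
+
+public class MyStateModel extends StateModel {
+  // Called when a message requests the OFFLINE -> ONLINE transition
+  public void onBecomeOnlineFromOffline(Message message, NotificationContext context) {
+    // open resources and start serving the partition named in the message
+  }
+
+  // Called when a message requests the ONLINE -> OFFLINE transition
+  public void onBecomeOfflineFromOnline(Message message, NotificationContext context) {
+    // stop serving and release resources
+  }
+}
+```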
+
+### Controller Process
+
+* Watches IdealState
+* Notified when a node goes down/comes up or a node is added/removed. Watches LiveInstances and the CurrentState of each node in the cluster
+* Triggers appropriate state transitions by sending messages to Participants
+
+### Spectator Process
+
+* When the process starts, it asks the Helix agent to be notified of changes in ExternalView
+* Whenever it receives a notification, it reads the ExternalView and performs its required duties, as sketched below.
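+
+A minimal sketch using Helix\'s RoutingTableProvider, which keeps itself in sync with ExternalView changes and answers routing queries (the manager is assumed to be already connected as a SPECTATOR; resource and partition names are assumptions):
+
+```
+import java.util.List;
+import org.apache.helix.model.InstanceConfig;
+import org.apache.helix.spectator.RoutingTableProvider;
+
+RoutingTableProvider routingTable = new RoutingTableProvider();
+// Register for ExternalView changes
+manager.addExternalViewChangeListener(routingTable);
+// Route a write for partition myDB_1 to its MASTER replica
+List<InstanceConfig> masters = routingTable.getInstances("myDB", "myDB_1", "MASTER");
+```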
+
+#### Interaction between controller, participant and spectator
+
+The following picture shows how controllers, participants and spectators interact with each other.
+
+![Helix Architecture](images/helix-architecture.png)
+
+## Core algorithm
+
+* Controller gets the IdealState and the CurrentState of active storage nodes from Zookeeper
+* Compute the delta between IdealState and CurrentState for each partition across all participant nodes
+* For each partition, compute tasks based on the state machine table. It\'s possible to configure priorities on the state transitions. For example, in the case of Master-Slave:
+    * Attempt mastership transfer if possible without violating constraint.
+    * Partition Addition
+    * Drop Partition 
+* Add the tasks in parallel if possible to the respective queue for each storage node (if the tasks added are mutually independent)
+* If a task is dependent on another task being completed, do not add that task
+* After any task is completed by a Participant, the Controller gets notified of the change, and the state transition algorithm is re-run until the CurrentState is the same as the IdealState.
+
+## Helix ZNode layout
+
+Helix organizes ZNodes under the cluster name in multiple levels.
+
+The top level (under the cluster name) ZNodes are all Helix-defined and in upper case:
+
+* PROPERTYSTORE: application property store
+* STATEMODELDEFS: state model definitions
+* INSTANCES: instance runtime information including current state and messages
+* CONFIGS: configurations
+* IDEALSTATES: ideal states
+* EXTERNALVIEW: external views
+* LIVEINSTANCES: live instances
+* CONTROLLER: cluster controller runtime information
+
+Under INSTANCES, there are runtime ZNodes for each instance. An instance organizes ZNodes as follows:
+
+* CURRENTSTATES
+    * sessionId
+    * resourceName
+* ERRORS
+* STATUSUPDATES
+* MESSAGES
+* HEALTHREPORT
+
+Under CONFIGS, there are different scopes of configurations:
+
+* RESOURCE: contains resource scope configurations
+* CLUSTER: contains cluster scope configurations
+* PARTICIPANT: contains participant scope configurations
+
+The following image shows an example of the Helix ZNode layout for a cluster named "test-cluster":
+
+![Helix znode layout](images/helix-znode-layout.png)

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/Building.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Building.md b/site-releases/0.6.2-incubating/src/site/markdown/Building.md
new file mode 100644
index 0000000..bf9462b
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/Building.md
@@ -0,0 +1,46 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Build Instructions
+------------------
+
+Requirements: JDK 1.6+, Maven 2.0.8+
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+git checkout tags/helix-0.6.2-incubating
+mvn install package -DskipTests
+```
+
+Maven dependency
+
+```
+<dependency>
+  <groupId>org.apache.helix</groupId>
+  <artifactId>helix-core</artifactId>
+  <version>0.6.2-incubating</version>
+</dependency>
+```
+
+Download
+--------
+
+[0.6.2-incubating](./download.html)
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/Concepts.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Concepts.md b/site-releases/0.6.2-incubating/src/site/markdown/Concepts.md
new file mode 100644
index 0000000..fa5d0ba
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/Concepts.md
@@ -0,0 +1,275 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Concepts</title>
+</head>
+
+Concepts
+----------------------------
+
+Helix is based on the idea that a given task has the following attributes associated with it:
+
+* _Location of the task_. For example, it runs on node N1
+* _State_. For example, it is running, stopped, etc.
+
+In Helix terminology, a task is referred to as a _resource_.
+
+### IdealState
+
+IdealState simply allows one to map tasks to location and state. A standard way of expressing this in Helix:
+
+```
+  "TASK_NAME" : {
+    "LOCATION" : "STATE"
+  }
+
+```
+Consider a simple case where you want to launch a task \'myTask\' on node \'N1\'. The IdealState for this can be expressed as follows:
+
+```
+{
+  "id" : "MyTask",
+  "mapFields" : {
+    "myTask" : {
+      "N1" : "ONLINE",
+    }
+  }
+}
+```
+### Partition
+
+If this task gets too big to fit on one box, you might want to divide it into subtasks. Each subtask is referred to as a _partition_ in Helix. Let\'s say you want to divide the task into 3 subtasks/partitions; the IdealState can then be changed as shown below.
+
+\'myTask_0\', \'myTask_1\', \'myTask_2\' are logical names representing the partitions of myTask, which run on N1, N2, and N3, respectively.
+
+```
+{
+  "id" : "myTask",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+  }
+ "mapFields" : {
+    "myTask_0" : {
+      "N1" : "ONLINE",
+    },
+    "myTask_1" : {
+      "N2" : "ONLINE",
+    },
+    "myTask_2" : {
+      "N3" : "ONLINE",
+    }
+  }
+}
+```
+
+### Replica
+
+Partitioning allows one to split the data/task into multiple subparts. But let\'s say the request rate for each partition increases. The common solution is to have multiple copies for each partition. Helix refers to the copy of a partition as a _replica_.  Adding a replica also increases the availability of the system during failures. One can see this methodology employed often in search systems. The index is divided into shards, and each shard has multiple copies.
+
+Let\'s say you want to add one additional replica for each task. The IdealState can simply be changed as shown below. 
+
+To increase the availability of the system, it\'s better to place the replicas of a given partition on different nodes.
+
+```
+{
+  "id" : "myIndex",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+  },
+ "mapFields" : {
+    "myIndex_0" : {
+      "N1" : "ONLINE",
+      "N2" : "ONLINE"
+    },
+    "myIndex_1" : {
+      "N2" : "ONLINE",
+      "N3" : "ONLINE"
+    },
+    "myIndex_2" : {
+      "N3" : "ONLINE",
+      "N1" : "ONLINE"
+    }
+  }
+}
+```
+
+### State 
+
+Now let\'s take a slightly more complicated scenario where a task represents a database.  Unlike an index which is in general read-only, a database supports both reads and writes. Keeping the data consistent among the replicas is crucial in distributed data stores. One commonly applied technique is to assign one replica as the MASTER and remaining replicas as SLAVEs. All writes go to the MASTER and are then replicated to the SLAVE replicas.
+
+Helix allows one to assign different states to each replica. Let\'s say you have two MySQL instances N1 and N2, where one will serve as MASTER and the other as SLAVE. The IdealState can be changed to:
+
+```
+{
+  "id" : "myDB",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "1",
+    "REPLICAS" : "2",
+  },
+  "mapFields" : {
+    "myDB" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    }
+  }
+}
+
+```
+
+
+### State Machine and Transitions
+
+IdealState allows one to exactly specify the desired state of the cluster. Given an IdealState, Helix takes up the responsibility of ensuring that the cluster reaches the IdealState.  The Helix _controller_ reads the IdealState and then commands each Participant to take appropriate actions to move from one state to another until it matches the IdealState.  These actions are referred to as _transitions_ in Helix.
+
+The next logical question is:  how does the _controller_ compute the transitions required to get to IdealState?  This is where the finite state machine concept comes in. Helix allows applications to plug in a finite state machine.  A state machine consists of the following:
+
+* State: Describes the role of a replica
+* Transition: An action that allows a replica to move from one state to another, thus changing its role.
+
+Here is an example of the MasterSlave state machine:
+
+```
+          OFFLINE  | SLAVE  |  MASTER  
+         _____________________________
+        |          |        |         |
+OFFLINE |   N/A    | SLAVE  | SLAVE   |
+        |__________|________|_________|
+        |          |        |         |
+SLAVE   |  OFFLINE |   N/A  | MASTER  |
+        |__________|________|_________|
+        |          |        |         |
+MASTER  | SLAVE    | SLAVE  |   N/A   |
+        |__________|________|_________|
+
+```
+
+Helix allows each resource to be associated with one state machine. This means you can have one resource as an index and another as a database in the same cluster. One can associate each resource with a state machine as follows:
+
+```
+{
+  "id" : "myDB",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "1",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "myDB" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    }
+  }
+}
+
+```
+
+### Current State
+
+CurrentState of a resource simply represents its actual state at a Participant. In the below example:
+
+* INSTANCE_NAME: Unique name representing the process
+* SESSION_ID: ID that is automatically assigned every time a process joins the cluster
+
+```
+{
+  "id":"MyResource"
+  ,"simpleFields":{
+    ,"SESSION_ID":"13d0e34675e0002"
+    ,"INSTANCE_NAME":"node1"
+    ,"STATE_MODEL_DEF":"MasterSlave"
+  }
+  ,"mapFields":{
+    "MyResource_0":{
+      "CURRENT_STATE":"SLAVE"
+    }
+    ,"MyResource_1":{
+      "CURRENT_STATE":"MASTER"
+    }
+    ,"MyResource_2":{
+      "CURRENT_STATE":"MASTER"
+    }
+  }
+}
+```
+Each node in the cluster has its own CurrentState.
+
+### External View
+
+In order to communicate with the Participants, external clients need to know the current state of each of the Participants. The external clients are referred to as Spectators. To make the life of a Spectator simple, Helix provides an ExternalView, an aggregated view of the current state across all nodes. The ExternalView has a format similar to that of IdealState.
+
+```
+{
+  "id":"MyResource",
+  "mapFields":{
+    "MyResource_0":{
+      "N1":"SLAVE",
+      "N2":"MASTER",
+      "N3":"OFFLINE"
+    },
+    "MyResource_1":{
+      "N1":"MASTER",
+      "N2":"SLAVE",
+      "N3":"ERROR"
+    },
+    "MyResource_2":{
+      "N1":"MASTER",
+      "N2":"SLAVE",
+      "N3":"SLAVE"
+    }
+  }
+}
+```
+
+### Rebalancer
+
+The core component of Helix is the Controller, which runs the Rebalancer algorithm on every cluster event. Cluster events can be one of the following:
+
+* Nodes start/stop and soft/hard failures
+* New nodes are added/removed
+* Ideal state changes
+
+There are a few more, such as configuration changes. The key takeaway: there are many ways to trigger the rebalancer.
+
+When the rebalancer runs, it simply does the following:
+
+* Compares the IdealState and the CurrentState
+* Computes the transitions required to reach the IdealState
+* Issues the transitions to each Participant
+
+The above steps happen for every change in the system. Once the CurrentState matches the IdealState, the system is considered stable, which implies \'IdealState = CurrentState = ExternalView\'.
+
+### Dynamic IdealState
+
+One of the things that makes Helix powerful is that IdealState can be changed dynamically. This means one can listen to cluster events like node failures and dynamically change the ideal state. Helix will then take care of triggering the respective transitions in the system.
+
+Helix comes with a few algorithms to automatically compute the IdealState based on the constraints. For example, if you have a resource of 3 partitions and 2 replicas, Helix can automatically compute the IdealState based on the nodes that are currently active. See the [tutorial](./tutorial_rebalance.html) to find out more about various execution modes of Helix like FULL_AUTO, SEMI_AUTO and CUSTOMIZED. 
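+
+A sketch of changing the IdealState dynamically through HelixAdmin (the cluster, resource, and mapping below are assumptions; Helix then issues the transitions needed to converge):
+
+```
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.manager.zk.ZKHelixAdmin;
+import org.apache.helix.model.IdealState;
+
+HelixAdmin admin = new ZKHelixAdmin("localhost:2181");
+IdealState idealState = admin.getResourceIdealState("MYCLUSTER", "myDB");
+// Move the MASTER for myDB_0 from N1 to N2 by rewriting its map field
+// (assuming the map field for myDB_0 already exists)
+idealState.getRecord().getMapField("myDB_0").put("N1", "SLAVE");
+idealState.getRecord().getMapField("myDB_0").put("N2", "MASTER");
+admin.setResourceIdealState("MYCLUSTER", "myDB", idealState);
+```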
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/Features.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Features.md b/site-releases/0.6.2-incubating/src/site/markdown/Features.md
new file mode 100644
index 0000000..ba9d0e7
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/Features.md
@@ -0,0 +1,313 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Features</title>
+</head>
+
+Features
+----------------------------
+
+
+### CONFIGURING IDEALSTATE
+
+
+Read the Concepts page for the definition of IdealState.
+
+The placement of partitions in a DDS is critical for the reliability and scalability of the system.
+For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can guarantee this.
+By default, Helix comes with a variant of consistent hashing based on the RUSH algorithm.
+
+This means that, given a number of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
+
+* Each node has the same number of partitions, and replicas of the same partition do not stay on the same node.
+* When a node fails, its partitions are equally distributed among the remaining nodes.
+* When new nodes are added, the number of partitions moved is minimized while still satisfying the above two criteria.
+
+
+Helix provides multiple ways to control the placement and state of a replica. 
+
+```
+
+            |AUTO REBALANCE|   AUTO     |   CUSTOM  |       
+            -----------------------------------------
+   LOCATION | HELIX        |  APP       |  APP      |
+            -----------------------------------------
+      STATE | HELIX        |  HELIX     |  APP      |
+            -----------------------------------------
+```
+
+#### HELIX EXECUTION MODE 
+
+
+Idealstate is defined as the state of the DDS when all nodes are up, running, and healthy.
+Helix uses this as the target state of the system and computes the appropriate transitions needed in the system to bring it to a stable state. 
+
+Helix supports 3 different execution modes, which allow the application to explicitly control the placement and state of the replicas.
+
+##### AUTO_REBALANCE
+
+When the idealstate mode is set to AUTO_REBALANCE, Helix controls both the location and the state of each replica. This option is useful for applications where creating a replica is not expensive. Example:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO_REBALANCE",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [],
+    "MyResource_1" : [],
+    "MyResource_2" : []
+  },
+  "mapFields" : {
+  }
+}
+```
+
+If there are 3 nodes in the cluster, then Helix will internally compute the ideal state as 
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Another typical example is evenly distributing a group of tasks among the currently alive processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node. 
+When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node.
+
+##### AUTO
+
+When the idealstate mode is set to AUTO, Helix only controls the STATE of the replicas, whereas the location of each partition is controlled by the application. Example: the below idealstate indicates that 'MyResource_0' must be only on node1 and node2, but gives Helix control of assigning the STATE.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [node1, node2],
+    "MyResource_1" : [node2, node3],
+    "MyResource_2" : [node3, node1]
+  },
+  "mapFields" : {
+  }
+}
+```
+In this mode, when node1 fails, unlike in AUTO_REBALANCE mode the partition is not moved from node1 to other nodes in the cluster. Instead, Helix will decide to change the state of MyResource_0 on N2 based on the system constraints. For example, if a system constraint specifies that there should be 1 Master and the Master fails, then node2 will be made the new Master.
+
+##### CUSTOM
+
+Helix offers a third mode called CUSTOM, in which the application can completely control the placement and state of each replica. The application has to implement an interface that Helix invokes when the cluster state changes.
+Within this callback, the application can recompute the idealstate. Helix will then issue appropriate transitions such that the Idealstate and Currentstate converge.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+      "IDEAL_STATE_MODE" : "CUSTOM",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+For example, the current state of the system might be 'MyResource_0' -> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE, N2:MASTER}. Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters.
+Helix will first issue MASTER-->SLAVE to N1 and, after it is completed, issue SLAVE-->MASTER to N2.
+ 
+
+### State Machine Configuration
+
+Helix comes with 3 default state models that are most commonly used. It is possible to have multiple state models in a cluster.
+Every resource that is added should have a reference to the state model.
+
+* MASTER-SLAVE: Has 3 states: OFFLINE, SLAVE, MASTER. The maximum number of masters is 1. The number of slaves is based on the replication factor, which can be specified while adding the resource.
+* ONLINE-OFFLINE: Has 2 states: OFFLINE and ONLINE. A very simple state model; most applications start off with it.
+* LEADER-STANDBY: 1 leader and many standbys. In general, the standbys are idle.
+
+Apart from providing the state machine configuration, one can specify constraints on states and transitions.
+
+For example, one can say:
+
+* MASTER: 1. The maximum number of replicas in the MASTER state at any time is 1.
+* OFFLINE-SLAVE: 5. The maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5.
+
+STATE PRIORITY
+Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 master and 2 slaves but only 1 node is active, Helix must promote it to master. This behavior is achieved by providing the state priority list as MASTER,SLAVE.
+
+STATE TRANSITION PRIORITY
+Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. 
+One can control this by overriding the priority order.
+ 
+### Config management
+
+Helix allows applications to store application specific properties. The configuration can have different scopes.
+
+* Cluster
+* Node specific
+* Resource specific
+* Partition specific
+
+Helix also provides notifications when any configs are changed. This allows applications to support dynamic configuration changes.
+
+See HelixManager.getConfigAccessor for more info
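+
+A sketch of reading and writing a cluster-scoped property through ConfigAccessor (the scope builder follows the 0.6.x API; the cluster name and keys are assumptions):
+
+```
+import org.apache.helix.ConfigAccessor;
+import org.apache.helix.model.HelixConfigScope;
+import org.apache.helix.model.HelixConfigScope.ConfigScopeProperty;
+import org.apache.helix.model.builder.HelixConfigScopeBuilder;
+
+ConfigAccessor configAccessor = manager.getConfigAccessor();
+HelixConfigScope scope =
+    new HelixConfigScopeBuilder(ConfigScopeProperty.CLUSTER).forCluster("MYCLUSTER").build();
+// Store and read back an application-specific property at cluster scope
+configAccessor.set(scope, "backup.enabled", "true");
+String value = configAccessor.get(scope, "backup.enabled");
+```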
+
+### Intra-cluster messaging API
+
+This is an interesting feature which is quite useful in practice. Often, nodes in a DDS require a mechanism to interact with each other. One such requirement is the process of bootstrapping a replica.
+
+Consider a search system use case where an index replica starts up and it does not have the index. A commonly used solution is to get the index from a common location or to copy it from another replica.
+Helix provides a messaging API that can be used to talk to other nodes in the system. The value Helix adds here is that the message recipients can be specified in terms of resource,
+partition, and state, and Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of P1.
+Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
+
+This is a very generic api and can also be used to schedule various periodic tasks in the cluster like data backups etc. 
+System Admins can also perform adhoc tasks like on demand backup or execute a system command(like rm -rf ;-)) across all nodes.
+
+```
+      ClusterMessagingService messagingService = manager.getMessagingService();
+      // Construct the message
+      Message requestBackupUriRequest = new Message(
+          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+      requestBackupUriRequest
+          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
+      requestBackupUriRequest.setMsgState(MessageState.NEW);
+      // Set the recipient criteria: all nodes that satisfy the criteria will receive the message
+      Criteria recipientCriteria = new Criteria();
+      recipientCriteria.setInstanceName("%");
+      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+      recipientCriteria.setResource("MyDB");
+      recipientCriteria.setPartition("");
+      // Should be processed only by processes that are active at the time of sending the message.
+      // This means if the recipient is restarted after the message is sent, it will not be processed.
+      recipientCriteria.setSessionSpecific(true);
+      // wait for 30 seconds
+      int timeout = 30000;
+      //The handler that will be invoked when any recipient responds to the message.
+      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
+      //This will return only after all recipients respond or after timeout.
+      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
+          requestBackupUriRequest, responseHandler, timeout);
+```
+
+See HelixManager.getMessagingService for more info.
+
+
+### Application specific property storage
+
+There are several use cases where applications need support for distributed data structures. Helix uses Zookeeper to store the application data and hence provides notifications when the data changes. 
+One value-add Helix provides is the ability to cache the data and use a write-through cache. This is more efficient than reading from ZK on every access.
+
+See HelixManager.getHelixPropertyStore for more info.
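+
+A minimal sketch of using the property store from Java (the path and record contents below are illustrative):
+
+```
+ZkHelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
+
+// write an application-defined record under an illustrative path
+ZNRecord record = new ZNRecord("backupInfo");
+record.setSimpleField("lastBackupUri", "hdfs://backups/MyDB/latest");
+store.set("/BACKUPS/MyDB", record, AccessOption.PERSISTENT);
+
+// read it back; reads are served from the write-through cache when possible
+ZNRecord readBack = store.get("/BACKUPS/MyDB", null, AccessOption.PERSISTENT);
+```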
+
+### Throttling
+
+Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that happen in parallel. Some transitions may be lightweight, but others might involve moving data around, which is quite expensive.
+Helix allows applications to set thresholds on transitions. The threshold can be set at multiple scopes:
+
+* MessageType, e.g. STATE_TRANSITION
+* TransitionType, e.g. SLAVE-MASTER
+* Resource, e.g. database
+* Node, i.e. the per-node maximum transitions in parallel
+
+See HelixManager.getHelixAdmin.addMessageConstraint() for more info.
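+
+For illustration, a message constraint could be set with the admin CLI described later in this document (the constraint ID and attribute values below are made-up examples):
+
+```
+./helix-admin.sh --zkSvr localhost:2181 --setConstraint MyCluster MESSAGE_CONSTRAINT constraint1 "MESSAGE_TYPE=STATE_TRANSITION,CONSTRAINT_VALUE=10"
+```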
+
+### Health monitoring and alerting
+
+Note: this is currently in development and not yet ready for production.
+
+Helix provides the ability for each node in the system to report health metrics on a periodic basis. 
+Helix supports multiple ways to aggregate these metrics, such as SUM, AVG, EXPONENTIAL DECAY, and WINDOW. Helix persists only the aggregated value.
+Applications can define thresholds on the aggregate values according to their SLAs, and when an SLA is violated Helix will fire an alert. 
+Currently Helix only fires an alert, but eventually we plan to use these metrics to either mark a node dead or load balance the partitions. 
+This feature will be valuable for distributed systems that support multi-tenancy and have huge variation in workload patterns. It can also be used to detect skewed partitions and rebalance the cluster.
+
+This feature is not yet stable and is not recommended for production use.
+
+
+### Controller deployment modes
+
+Read the Architecture wiki for more details on the role of a controller. In short, the controller manages the participants in the cluster by issuing state transitions.
+
+Helix provides multiple options to deploy the controller.
+
+#### STANDALONE
+
+The controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since one controller can be a single point of failure, multiple controller processes are required for reliability.
+Even if multiple controllers are running, only one will be actively managing the cluster at any time, as decided by a leader election process. If the leader fails, another controller will take over managing the cluster.
+
+Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the CONTROLLER AS A SERVICE option below.
+
+#### EMBEDDED
+
+If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants. 
+
+#### CONTROLLER AS A SERVICE
+
+One of the cool features we added in Helix is the ability to use a set of controllers to manage a large number of clusters. 
+For example, if you have X clusters to be managed, instead of deploying X*3 controllers (3 controllers per cluster for fault tolerance), one can deploy just 3 controllers. Each controller then manages X/3 clusters. 
+If any controller fails, the remaining two will manage X/2 clusters each. At LinkedIn, we always deploy controllers in this mode. 
+
+
+
+
+
+
+
+ 


[07/16] [HELIX-270] Include documentation for previous version on the website

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_accessors.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_accessors.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_accessors.md
new file mode 100644
index 0000000..b431710
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_accessors.md
@@ -0,0 +1,125 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Logical Accessors</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Logical Accessors
+
+Helix constructs follow a logical hierarchy. A cluster contains participants and serves logical resources. Each resource can be divided into partitions, which themselves can be replicated. Helix now supports configuring and modifying clusters programmatically in a hierarchical way using logical accessors.
+
+[Click here](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/api/accessor/package-summary.html) for the Javadocs of the accessors.
+
+### An Example
+
+#### Configure a Participant
+
+A participant is a combination of a host, port, and a UserConfig. A UserConfig is an arbitrary set of properties a Helix user can attach to any participant.
+
+```
+ParticipantId participantId = ParticipantId.from("localhost_12345");
+ParticipantConfig participantConfig = new ParticipantConfig.Builder(participantId)
+    .hostName("localhost").port(12345).build();
+```
+
+#### Configure a Resource
+
+##### RebalancerContext
+A Resource is essentially a combination of a RebalancerContext and a UserConfig. A [RebalancerContext](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/context/RebalancerContext.html) consists of all the key properties required to rebalance a resource, including how it is partitioned and replicated, and what state model it follows. Most Helix resources will make use of a [PartitionedRebalancerContext](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/context/PartitionedRebalancerContext.html), which is a RebalancerContext for resources that are partitioned.
+
+Recall that there are four [rebalancing modes](./tutorial_rebalance.html) that Helix provides, and so Helix also provides the following subclasses for PartitionedRebalancerContext:
+
+* [FullAutoRebalancerContext](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/context/FullAutoRebalancerContext.html) for FULL_AUTO mode.
+* [SemiAutoRebalancerContext](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/context/SemiAutoRebalancerContext.html) for SEMI_AUTO mode. This class allows a user to specify "preference lists" to indicate where each partition should ideally be served.
+* [CustomRebalancerContext](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/context/CustomRebalancerContext.html) for CUSTOMIZED mode. This class allows a user to specify "preference maps" to indicate the location and state for each partition replica.
+
+Helix also supports arbitrary subclasses of PartitionedRebalancerContext and even arbitrary implementations of RebalancerContext for applications that need a user-defined approach for rebalancing. For more, see [User-Defined Rebalancing](./tutorial_user_def_rebalancer.html).
+
+##### In Action
+
+Here is an example of a configured resource with a rebalancer context for FULL_AUTO mode and two partitions:
+
+```
+ResourceId resourceId = ResourceId.from("sampleResource");
+StateModelDefinition stateModelDef = getStateModelDef();
+Partition partition1 = new Partition(PartitionId.from(resourceId, "1"));
+Partition partition2 = new Partition(PartitionId.from(resourceId, "2"));
+FullAutoRebalancerContext rebalanceContext =
+    new FullAutoRebalancerContext.Builder(resourceId).replicaCount(1).addPartition(partition1)
+        .addPartition(partition2).stateModelDefId(stateModelDef.getStateModelDefId()).build();
+ResourceConfig resourceConfig =
+    new ResourceConfig.Builder(resourceId).rebalancerContext(rebalanceContext).build();
+```
+
+#### Add the Cluster
+
+Now we can take the participant and resource configured above, add them to a cluster configuration, and then persist the entire cluster at once using a ClusterAccessor:
+
+```
+// configure the cluster
+ClusterId clusterId = ClusterId.from("sampleCluster");
+ClusterConfig clusterConfig = new ClusterConfig.Builder(clusterId).addParticipant(participantConfig)
+    .addResource(resourceConfig).addStateModelDefinition(stateModelDef).build();
+
+// create the cluster using a ClusterAccessor
+HelixConnection connection = new ZkHelixConnection(zkAddr);
+connection.connect();
+ClusterAccessor clusterAccessor = connection.createClusterAccessor(clusterId);
+clusterAccessor.createCluster(clusterConfig);
+```
+
+### Create, Read, Update, and Delete
+
+Note that you don't have to specify the entire cluster beforehand! Helix provides a ClusterAccessor, ParticipantAccessor, and ResourceAccessor to allow changing as much or as little of the cluster as needed on the fly. You can add a resource or participant to a cluster, reconfigure a resource, participant, or cluster, remove components from the cluster, and more. See the [Javadocs](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/api/accessor/package-summary.html) to see all that the accessor classes can do.
+
+#### Delta Classes
+
+Updating a cluster, participant, or resource should involve selecting the element to change, and then letting Helix change only that component. To do this, Helix has included Delta classes for ClusterConfig, ParticipantConfig, and ResourceConfig.
+
+#### Example: Updating a Participant
+
+Tags are used for Helix deployments where only certain participants are allowed to serve certain resources. To do this, Helix only assigns resource replicas to participants who have a tag that the resource specifies. In this example, we will use ParticipantConfig.Delta to remove a participant tag and add another as part of a reconfiguration.
+
+```
+// specify the change to the participant
+ParticipantConfig.Delta delta = new ParticipantConfig.Delta(participantId).addTag("newTag").removeTag("oldTag");
+
+// update the participant configuration
+ParticipantAccessor participantAccessor = connection.createParticipantAccessor(clusterId);
+participantAccessor.updateParticipant(participantId, delta);
+```
+
+#### Example: Dropping a Resource
+Removing a resource from the cluster is quite simple:
+
+```
+clusterAccessor.dropResourceFromCluster(resourceId);
+```
+
+#### Example: Reading the Cluster
+Reading a full snapshot of the cluster is also a one-liner:
+
+```
+Cluster cluster = clusterAccessor.readCluster();
+```
+
+### Atomic Accessors
+
+Helix also includes versions of ClusterAccessor, ParticipantAccessor, and ResourceAccessor that can complete operations atomically relative to one another. The specific semantics of the atomic operations are included in the Javadocs. These atomic classes should be used sparingly and only in cases where contention can adversely affect the correctness of a Helix-based cluster. For most deployments, this is not the case, and using these classes will cause a degradation in performance. However, the interface for all atomic accessors mirrors that of the non-atomic accessors.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_admin.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_admin.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_admin.md
new file mode 100644
index 0000000..3285ad9
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_admin.md
@@ -0,0 +1,407 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Admin Operations</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Admin Operations
+
+Helix provides a set of admin APIs for cluster management operations. They are supported via:
+
+* _Java API_
+* _Commandline interface_
+* _REST interface via helix-admin-webapp_
+
+### Java API
+See interface [_org.apache.helix.HelixAdmin_](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/HelixAdmin.html)
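+
+As a brief sketch of the Java API (the Zookeeper address, cluster, node, and resource names below are illustrative):
+
+```
+HelixAdmin admin = new ZKHelixAdmin("localhost:2181");
+
+// create a cluster and add a node to it
+admin.addCluster("MyCluster");
+admin.addInstance("MyCluster", new InstanceConfig("localhost_12913"));
+
+// add a resource with 8 partitions using the MasterSlave state model,
+// then rebalance it with 3 replicas per partition
+admin.addResource("MyCluster", "MyDB", 8, "MasterSlave");
+admin.rebalance("MyCluster", "MyDB", 3);
+```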
+
+### Command-line interface
+The command-line tool comes with the helix-core package.
+
+Get the command-line tool:
+
+``` 
+  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+  - cd incubator-helix
+  - ./build
+  - cd helix-core/target/helix-core-pkg/bin
+  - chmod +x *.sh
+```
+
+Get help:
+
+```
+  - ./helix-admin.sh --help
+```
+
+All other commands have this form:
+
+```
+  ./helix-admin.sh --zkSvr <ZookeeperServerAddress> <command> <parameters>
+```
+
+Admin commands and brief description:
+
+| Command syntax | Description |
+| -------------- | ----------- |
+| _\-\-activateCluster \<clusterName controllerCluster true/false\>_ | Enable/disable a cluster in distributed controller mode |
+| _\-\-addCluster \<clusterName\>_ | Add a new cluster |
+| _\-\-addIdealState \<clusterName resourceName fileName.json\>_ | Add an ideal state to a cluster |
+| _\-\-addInstanceTag \<clusterName instanceName tag\>_ | Add a tag to an instance |
+| _\-\-addNode \<clusterName instanceId\>_ | Add an instance to a cluster |
+| _\-\-addResource \<clusterName resourceName partitionNumber stateModelName\>_ | Add a new resource to a cluster |
+| _\-\-addResourceProperty \<clusterName resourceName propertyName propertyValue\>_ | Add a resource property |
+| _\-\-addStateModelDef \<clusterName fileName.json\>_ | Add a State model definition to a cluster |
+| _\-\-dropCluster \<clusterName\>_ | Delete a cluster |
+| _\-\-dropNode \<clusterName instanceId\>_ | Remove a node from a cluster |
+| _\-\-dropResource \<clusterName resourceName\>_ | Remove an existing resource from a cluster |
+| _\-\-enableCluster \<clusterName true/false\>_ | Enable/disable a cluster |
+| _\-\-enableInstance \<clusterName instanceId true/false\>_ | Enable/disable an instance |
+| _\-\-enablePartition \<true/false clusterName nodeId resourceName partitionName\>_ | Enable/disable a partition |
+| _\-\-getConfig \<configScope configScopeArgs configKeys\>_ | Get user configs |
+| _\-\-getConstraints \<clusterName constraintType\>_ | Get constraints |
+| _\-\-help_ | print help information |
+| _\-\-instanceGroupTag \<instanceTag\>_ | Specify instance group tag, used with rebalance command |
+| _\-\-listClusterInfo \<clusterName\>_ | Show information of a cluster |
+| _\-\-listClusters_ | List all clusters |
+| _\-\-listInstanceInfo \<clusterName instanceId\>_ | Show information of an instance |
+| _\-\-listInstances \<clusterName\>_ | List all instances in a cluster |
+| _\-\-listPartitionInfo \<clusterName resourceName partitionName\>_ | Show information of a partition |
+| _\-\-listResourceInfo \<clusterName resourceName\>_ | Show information of a resource |
+| _\-\-listResources \<clusterName\>_ | List all resources in a cluster |
+| _\-\-listStateModel \<clusterName stateModelName\>_ | Show information of a state model |
+| _\-\-listStateModels \<clusterName\>_ | List all state models in a cluster |
+| _\-\-maxPartitionsPerNode \<maxPartitionsPerNode\>_ | Specify the max partitions per instance, used with addResourceGroup command |
+| _\-\-rebalance \<clusterName resourceName replicas\>_ | Rebalance a resource |
+| _\-\-removeConfig \<configScope configScopeArgs configKeys\>_ | Remove user configs |
+| _\-\-removeConstraint \<clusterName constraintType constraintId\>_ | Remove a constraint |
+| _\-\-removeInstanceTag \<clusterName instanceId tag\>_ | Remove a tag from an instance |
+| _\-\-removeResourceProperty \<clusterName resourceName propertyName\>_ | Remove a resource property |
+| _\-\-resetInstance \<clusterName instanceId\>_ | Reset all erroneous partitions on an instance |
+| _\-\-resetPartition \<clusterName instanceId resourceName partitionName\>_ | Reset an erroneous partition |
+| _\-\-resetResource \<clusterName resourceName\>_ | Reset all erroneous partitions of a resource |
+| _\-\-setConfig \<configScope configScopeArgs configKeyValueMap\>_ | Set user configs |
+| _\-\-setConstraint \<clusterName constraintType constraintId constraintKeyValueMap\>_ | Set a constraint |
+| _\-\-swapInstance \<clusterName oldInstance newInstance\>_ | Swap an old instance with a new instance |
+| _\-\-zkSvr \<ZookeeperServerAddress\>_ | Provide zookeeper address |
+
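+For example, a typical sequence of admin commands might look like this (the Zookeeper address, cluster, node, and resource names are illustrative):
+
+```
+  ./helix-admin.sh --zkSvr localhost:2181 --addCluster MyCluster
+  ./helix-admin.sh --zkSvr localhost:2181 --addNode MyCluster localhost_12913
+  ./helix-admin.sh --zkSvr localhost:2181 --addResource MyCluster MyDB 8 MasterSlave
+  ./helix-admin.sh --zkSvr localhost:2181 --rebalance MyCluster MyDB 3
+```
+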
+### REST interface
+
+The REST interface comes with the helix-admin-webapp package:
+
+``` 
+  - git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+  - cd incubator-helix 
+  - ./build
+  - cd helix-admin-webapp/target/helix-admin-webapp-pkg/bin
+  - chmod +x *.sh
+  - ./run-rest-admin.sh --zkSvr <zookeeperAddress> --port <port> // make sure zookeeper is running
+```
+
+#### URLs and supported methods
+
+* _/clusters_
+    * List all clusters
+
+    ```
+      curl http://localhost:8100/clusters
+    ```
+
+    * Add a cluster
+    
+    ```
+      curl -d 'jsonParameters={"command":"addCluster","clusterName":"MyCluster"}' -H "Content-Type: application/json" http://localhost:8100/clusters
+    ```
+
+* _/clusters/{clusterName}_
+    * List cluster information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster
+    ```
+
+    * Enable/disable a cluster in distributed controller mode
+    
+    ```
+      curl -d 'jsonParameters={"command":"activateCluster","grandCluster":"MyControllerCluster","enabled":"true"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster
+    ```
+
+    * Remove a cluster
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster
+    ```
+    
+* _/clusters/{clusterName}/resourceGroups_
+    * List all resources in a cluster
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/resourceGroups
+    ```
+    
+    * Add a resource to a cluster
+    
+    ```
+      curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
+    ```
+
+* _/clusters/{clusterName}/resourceGroups/{resourceName}_
+    * List resource information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    ```
+    
+    * Drop a resource
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    ```
+
+    * Reset all erroneous partitions of a resource
+    
+    ```
+      curl -d 'jsonParameters={"command":"resetResource"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    ```
+
+* _/clusters/{clusterName}/resourceGroups/{resourceName}/idealState_
+    * Rebalance a resource
+    
+    ```
+      curl -d 'jsonParameters={"command":"rebalance","replicas":"3"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    ```
+
+    * Add an ideal state
+    
+    ```
+    echo jsonParameters={
+    "command":"addIdealState"
+       }&newIdealState={
+      "id" : "MyDB",
+      "simpleFields" : {
+        "IDEAL_STATE_MODE" : "AUTO",
+        "NUM_PARTITIONS" : "8",
+        "REBALANCE_MODE" : "SEMI_AUTO",
+        "REPLICAS" : "0",
+        "STATE_MODEL_DEF_REF" : "MasterSlave",
+        "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+      },
+      "listFields" : {
+      },
+      "mapFields" : {
+        "MyDB_0" : {
+          "localhost_1001" : "MASTER",
+          "localhost_1002" : "SLAVE"
+        }
+      }
+    }
+    > newIdealState.json
+    curl -d @'./newIdealState.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    ```
+    
+    * Add a resource property
+    
+    ```
+      curl -d 'jsonParameters={"command":"addResourceProperty","REBALANCE_TIMER_PERIOD":"500"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    ```
+    
+* _/clusters/{clusterName}/resourceGroups/{resourceName}/externalView_
+    * Show resource external view
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/externalView
+    ```
+* _/clusters/{clusterName}/instances_
+    * List all instances
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/instances
+    ```
+
+    * Add an instance
+    
+    ```
+    curl -d 'jsonParameters={"command":"addInstance","instanceNames":"localhost_1001"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
+    ```
+    
+    * Swap an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"swapInstance","oldInstance":"localhost_1001", "newInstance":"localhost_1002"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
+    ```
+* _/clusters/{clusterName}/instances/{instanceName}_
+    * Show instance information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+    
+    * Enable/disable an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"enableInstance","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+
+    * Drop an instance
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+    
+    * Disable/enable partitions on an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"enablePartition","resource": "MyDB","partition":"MyDB_0",  "enabled" : "false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+    
+    * Reset an erroneous partition on an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"resetPartition","resource": "MyDB","partition":"MyDB_0"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+
+    * Reset all erroneous partitions on an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"resetInstance"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+
+* _/clusters/{clusterName}/configs_
+    * Get user cluster level config
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/configs/cluster
+    ```
+    
+    * Set user cluster level config
+    
+    ```
+      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
+    ```
+
+    * Remove user cluster level config
+    
+    ```
+    curl -d 'jsonParameters={"command":"removeConfig","configs":"key1,key2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
+    ```
+    
+    * Get/set/remove user participant level config
+    
+    ```
+      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
+    ```
+    
+    * Get/set/remove resource level config
+    
+    ```
+    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/resource/MyDB
+    ```
+
+* _/clusters/{clusterName}/controller_
+    * Show controller information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/Controller
+    ```
+    
+    * Enable/disable cluster
+    
+    ```
+      curl -d 'jsonParameters={"command":"enableCluster","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/Controller
+    ```
+
+* _/zkPath/{path}_
+    * Get information for zookeeper path
+    
+    ```
+      curl http://localhost:8100/zkPath/MyCluster
+    ```
+
+* _/clusters/{clusterName}/StateModelDefs_
+    * Show all state model definitions
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/StateModelDefs
+    ```
+
+    * Add a state model definition
+    
+    ```
+      echo jsonParameters={
+        "command":"addStateModelDef"
+       }&newStateModelDef={
+          "id" : "OnlineOffline",
+          "simpleFields" : {
+            "INITIAL_STATE" : "OFFLINE"
+          },
+          "listFields" : {
+            "STATE_PRIORITY_LIST" : [ "ONLINE", "OFFLINE", "DROPPED" ],
+            "STATE_TRANSITION_PRIORITYLIST" : [ "OFFLINE-ONLINE", "ONLINE-OFFLINE", "OFFLINE-DROPPED" ]
+          },
+          "mapFields" : {
+            "DROPPED.meta" : {
+              "count" : "-1"
+            },
+            "OFFLINE.meta" : {
+              "count" : "-1"
+            },
+            "OFFLINE.next" : {
+              "DROPPED" : "DROPPED",
+              "ONLINE" : "ONLINE"
+            },
+            "ONLINE.meta" : {
+              "count" : "R"
+            },
+            "ONLINE.next" : {
+              "DROPPED" : "OFFLINE",
+              "OFFLINE" : "OFFLINE"
+            }
+          }
+        }
+        > newStateModelDef.json
+        curl -d @'./newStateModelDef.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/StateModelDefs
+    ```
+
+* _/clusters/{clusterName}/StateModelDefs/{stateModelDefName}_
+    * Show a state model definition
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/StateModelDefs/OnlineOffline
+    ```
+
+* _/clusters/{clusterName}/constraints/{constraintType}_
+    * Show all constraints
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT
+    ```
+
+    * Set a constraint
+    
+    ```
+       curl -d 'jsonParameters={"constraintAttributes":"RESOURCE=MyDB,CONSTRAINT_VALUE=1"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    ```
+    
+    * Remove a constraint
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    ```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_controller.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_controller.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_controller.md
new file mode 100644
index 0000000..1a4cc45
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_controller.md
@@ -0,0 +1,79 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Controller</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Controller
+
+Next, let\'s implement the controller.  This is the brain of the cluster.  Helix makes sure there is exactly one active controller running the cluster.
+
+### Start the Helix Agent
+
+
+It requires the following parameters:
+ 
+* clusterId: A logical ID to represent the group of nodes
+* controllerId: A logical ID of the process creating the controller instance. Generally this is host:port.
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+
+```
+HelixConnection connection = new ZKHelixConnection(zkConnectString);
+HelixController controller = connection.createController(clusterId, controllerId);
+```
+
+### Controller Code
+
+The Controller needs to know about all changes in the cluster. Helix takes care of this with the default implementation.
+If you need additional functionality, see GenericHelixController and ZKHelixController for how to configure the pipeline.
+
+```
+HelixConnection connection = new ZKHelixConnection(zkConnectString);
+HelixController controller = connection.createController(clusterId, controllerId);
+controller.startAsync();
+```
+The snippet above shows how the controller is started. You can also start the controller using the command-line interface.
+  
+```
+cd helix/helix-core/target/helix-core-pkg/bin
+./run-helix-controller.sh --zkSvr <Zookeeper ServerAddress (Required)>  --cluster <Cluster name (Required)>
+```
+
+### Controller deployment modes
+
+Helix provides multiple options to deploy the controller.
+
+#### STANDALONE
+
+The Controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since one controller can be a single point of failure, multiple controller processes are required for reliability.  Even if multiple controllers are running, only one will be actively managing the cluster at any time and is decided by a leader-election process. If the leader fails, another leader will take over managing the cluster.
+
+Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the CONTROLLER AS A SERVICE option below.
+
+#### EMBEDDED
+
+If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants.
+
+#### CONTROLLER AS A SERVICE
+
+One of the cool features we added in Helix is to use a set of controllers to manage a large number of clusters. 
+
+For example if you have X clusters to be managed, instead of deploying X*3 (3 controllers for fault tolerance) controllers for each cluster, one can deploy just 3 controllers.  Each controller can manage X/3 clusters.  If any controller fails, the remaining two will manage X/2 clusters.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_health.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_health.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_health.md
new file mode 100644
index 0000000..e1a7f3c
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_health.md
@@ -0,0 +1,46 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Customizing Heath Checks</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Customizing Health Checks
+
+In this chapter, we\'ll learn how to customize the health check, based on metrics of your distributed system.  
+
+### Health Checks
+
+Note: _this in currently in development mode, not yet ready for production._
+
+Helix provides the ability for each node in the system to report health metrics on a periodic basis. 
+
+Helix supports multiple ways to aggregate these metrics:
+
+* SUM
+* AVG
+* EXPONENTIAL DECAY
+* WINDOW
+
+Helix persists the aggregated value only.
+
+Applications can define a threshold on the aggregate values according to the SLAs, and when the SLA is violated Helix will fire an alert. 
+Currently Helix only fires an alert, but in a future release we plan to use these metrics to either mark the node dead or load balance the partitions.
+This feature will be valuable for distributed systems that support multi-tenancy and have a large variation in work load patterns.  In addition, this can be used to detect skewed partitions (hotspots) and rebalance the cluster.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_messaging.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_messaging.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_messaging.md
new file mode 100644
index 0000000..f65ce7c
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_messaging.md
@@ -0,0 +1,71 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Messaging</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Messaging
+
+In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster.  This is an interesting feature which is quite useful in practice. It is common that nodes in a distributed system require a mechanism to interact with each other.  
+
+### Example: Bootstrapping a Replica
+
+Consider a search system where an index replica starts up without an index. A typical solution is to get the index from a common location, or to copy it from another replica.
+
+Helix provides a messaging API for intra-cluster communication between nodes in the system. The message recipient can be specified in terms of resource, partition, and state rather than hostnames, and Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
+Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
+
+This is a very generic API and can also be used to schedule various periodic tasks in the cluster, such as data backups, log cleanup, etc.
+System admins can also perform ad-hoc tasks, such as on-demand backups or a system command (such as rm -rf ;) across all nodes of the cluster.
+
+```
+      ClusterMessagingService messagingService = manager.getMessagingService();
+
+      // Construct the Message
+      Message requestBackupUriRequest = new Message(
+          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+      requestBackupUriRequest
+          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
+      requestBackupUriRequest.setMsgState(MessageState.NEW);
+
+      // Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
+      Criteria recipientCriteria = new Criteria();
+      recipientCriteria.setInstanceName("%");
+      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+      recipientCriteria.setResource("MyDB");
+      recipientCriteria.setPartition("");
+
+      // Should be processed only by process(es) that are active at the time of sending the message
+      //   This means if the recipient is restarted after the message is sent, it will not be processed.
+      recipientCriteria.setSessionSpecific(true);
+
+      // wait for 30 seconds
+      int timeout = 30000;
+
+      // the handler that will be invoked when any recipient responds to the message.
+      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
+
+      // this will return only after all recipients respond or after timeout
+      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
+          requestBackupUriRequest, responseHandler, timeout);
+```
+
+See HelixManager.DefaultMessagingService in [Javadocs](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more info.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_participant.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_participant.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_participant.md
new file mode 100644
index 0000000..da55cbd
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_participant.md
@@ -0,0 +1,97 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Participant</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Participant
+
+In this chapter, we\'ll learn how to implement a Participant, which is a primary functional component of a distributed system.
+
+
+### Start the Helix Agent
+
+The Helix agent is a common component that connects each system component with the controller.
+
+It requires the following parameters:
+ 
+* clusterId: A logical ID to represent the group of nodes
+* participantId: A logical ID of the process creating the manager instance. Generally this is host:port.
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+
+After the Helix participant instance is created, the only thing that needs to be registered is the state model factory. 
+The methods of the State Model will be called when controller sends transitions to the Participant.  In this example, we'll use the OnlineOffline factory.  Other options include:
+
+* MasterSlaveStateModelFactory
+* LeaderStandbyStateModelFactory
+* BootstrapHandler
+* _An application defined state model factory_
+
+
+```
+HelixConnection connection = new ZKHelixConnection(zkConnectString);
+HelixParticipant participant = connection.createParticipant(clusterId, participantId);
+StateMachineEngine stateMach = participant.getStateMachineEngine();
+
+// create a stateModelFactory that returns a statemodel object for each partition. 
+StateModelFactory<StateModel> stateModelFactory = new OnlineOfflineStateModelFactory();     
+stateMach.registerStateModelFactory(stateModelType, stateModelFactory);
+participant.startAsync();
+```
+
+Helix doesn\'t know what it means to change from OFFLINE\-\-\>ONLINE or ONLINE\-\-\>OFFLINE.  The following code snippet shows where you insert your system logic for these two state transitions.
+
+```
+public class OnlineOfflineStateModelFactory extends StateModelFactory<StateModel> {
+  @Override
+  public StateModel createNewStateModel(String stateUnitKey) {
+    OnlineOfflineStateModel stateModel = new OnlineOfflineStateModel();
+    return stateModel;
+  }
+  @StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFLINE")
+  public static class OnlineOfflineStateModel extends StateModel {
+
+    @Transition(from = "OFFLINE", to = "ONLINE")
+    public void onBecomeOnlineFromOffline(Message message,
+        NotificationContext context) {
+
+      System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
+
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+      // Application logic to handle transition                                                     //
+      // For example, you might start a service, run initialization, etc                            //
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+    }
+
+    @Transition(from = "ONLINE", to = "OFFLINE")
+    public void onBecomeOfflineFromOnline(Message message,
+        NotificationContext context) {
+
+      System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
+
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+      // Application logic to handle transition                                                     //
+      // For example, you might shutdown a service, log this event, or change monitoring settings   //
+      ////////////////////////////////////////////////////////////////////////////////////////////////
+    }
+  }
+}
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_propstore.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_propstore.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_propstore.md
new file mode 100644
index 0000000..41bcc69
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_propstore.md
@@ -0,0 +1,34 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Application Property Store</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Application Property Store
+
+In this chapter, we\'ll learn how to use the application property store.
+
+### Property Store
+
+It is common that an application needs support for distributed, shared data structures.  Helix uses Zookeeper to store the application data and hence provides notifications when the data changes.
+
+While you could use Zookeeper directly, Helix supports caching the data and a write-through cache. This is far more efficient than reading from Zookeeper for every access.
+
+See [HelixManager.getHelixPropertyStore](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/store/package-summary.html) for details.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_rebalance.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_rebalance.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_rebalance.md
new file mode 100644
index 0000000..8f42a5a
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_rebalance.md
@@ -0,0 +1,181 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Rebalancing Algorithms</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Rebalancing Algorithms
+
+The placement of partitions in a distributed system is essential for the reliability and scalability of the system.  For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can satisfy this guarantee.  Helix provides a variant of consistent hashing based on the RUSH algorithm, among others.
+
+This means that, given the number of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
+
+* Each node has the same number of partitions
+* Replicas of the same partition do not stay on the same node
+* When a node fails, the partitions will be equally distributed among the remaining nodes
+* When new nodes are added, the number of partitions moved will be minimized along with satisfying the above criteria
+
+Helix employs a rebalancing algorithm to compute the _ideal state_ of the system.  When the _current state_ differs from the _ideal state_, Helix uses the _ideal state_ as the target state of the system and computes the appropriate transitions needed to bring the system to it.
+
+Helix makes it easy to perform this operation, while giving you control over the algorithm.  In this section, we\'ll see how to implement the desired behavior.
+
+Helix has four options for rebalancing, in increasing order of customization by the system builder:
+
+* FULL_AUTO
+* SEMI_AUTO
+* CUSTOMIZED
+* USER_DEFINED
+
+```
+            |FULL_AUTO     |  SEMI_AUTO | CUSTOMIZED|  USER_DEFINED  |
+            ---------------------------------------------------------|
+   LOCATION | HELIX        |  APP       |  APP      |      APP       |
+            ---------------------------------------------------------|
+      STATE | HELIX        |  HELIX     |  APP      |      APP       |
+            ----------------------------------------------------------
+```
+
+
+### FULL_AUTO
+
+When the rebalance mode is set to FULL_AUTO, Helix controls both the location and the state of each replica. This option is useful for applications where creating a replica is not expensive. 
+
+For example, consider this system that uses a MasterSlave state model, with 3 partitions and 2 replicas in the ideal state.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "FULL_AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [],
+    "MyResource_1" : [],
+    "MyResource_2" : []
+  },
+  "mapFields" : {
+  }
+}
+```
+
+If there are 3 nodes in the cluster, then Helix will balance the masters and slaves equally.  The ideal state is therefore:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Another typical example is evenly distributing a group of tasks among the currently healthy processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node. 
+When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes, resulting in a balanced 20 tasks per node. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node, resulting in a balanced distribution of 12 tasks per node. 
+
+#### SEMI_AUTO
+
+When the application needs to control the placement of the replicas, use the SEMI_AUTO rebalance mode.
+
+Example: In the ideal state below, the partition \'MyResource_0\' is constrained to be placed only on node1 or node2.  The choice of _state_ is still controlled by Helix.  That means MyResource_0.MASTER could be on node1 and MyResource_0.SLAVE on node2, or vice-versa but neither would be placed on node3.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [node1, node2],
+    "MyResource_1" : [node2, node3],
+    "MyResource_2" : [node3, node1]
+  },
+  "mapFields" : {
+  }
+}
+```
+
+The MasterSlave state model requires that a partition has exactly one MASTER at all times, and the other replicas should be SLAVEs.  In this simple example with 2 replicas per partition, there would be one MASTER and one SLAVE.  Upon failover, a SLAVE has to assume mastership, and a new SLAVE will be generated.
+
+In this mode, when node1 fails, the partition is _not_ moved from node1 to node3 as it would be in FULL_AUTO mode. Instead, Helix will decide to change the state of MyResource_0 on node2 from SLAVE to MASTER, based on the system constraints. 
+
+#### CUSTOMIZED
+
+Helix offers a third mode called CUSTOMIZED, in which the application controls the placement _and_ state of each replica. The application needs to implement a callback interface that Helix invokes when the cluster state changes. 
+Within this callback, the application can recompute the ideal state. Helix will then issue the appropriate transitions such that the _ideal state_ and _current state_ converge.
+
+Here\'s an example, again with 3 partitions, 2 replicas per partition, and the MasterSlave state model:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "CUSTOMIZED",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Suppose the current state of the system is 'MyResource_0' -> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE,N2:MASTER}. While the application decides which node is MASTER and which is SLAVE, Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters, which violates the MasterSlave constraint that there is exactly one MASTER at a time.  Helix will first issue MASTER-->SLAVE to N1 and after it is completed, it will issue SLAVE-->MASTER to N2. 
+
+#### USER_DEFINED
+
+For maximum flexibility, Helix exposes an interface that can allow applications to plug in custom rebalancing logic. By providing the name of a class that implements the Rebalancer interface, Helix will automatically call the contained method whenever there is a change to the live participants in the cluster. For more, see [User-Defined Rebalancer](./tutorial_user_def_rebalancer.html).
+
+#### Backwards Compatibility
+
+In previous versions, FULL_AUTO was called AUTO_REBALANCE and SEMI_AUTO was called AUTO. Furthermore, they were presented as the IDEAL_STATE_MODE. Helix supports both IDEAL_STATE_MODE and REBALANCE_MODE, but IDEAL_STATE_MODE is now deprecated and may be phased out in future versions.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_spectator.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_spectator.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_spectator.md
new file mode 100644
index 0000000..24c1cf4
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_spectator.md
@@ -0,0 +1,76 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Spectator</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Spectator
+
+Next, we\'ll learn how to implement a Spectator.  Typically, a spectator needs to react to changes within the distributed system.  Examples: a client that needs to know where to send a request, or a topic consumer in a consumer group.  The spectator is automatically informed of changes in the _external state_ of the cluster, and does not need any extra code to keep track of the other components in the system.
+
+### Start the Helix agent
+
+Same as for a Participant, the Helix agent is the common component that connects each system component with the controller.
+
+It requires the following parameters:
+
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
+* instanceType: Type of the process. This can be one of the following types; in this case, use SPECTATOR:
+    * CONTROLLER: Process that controls the cluster. Any number of controllers can be started, but only one will be active at any given time.
+    * PARTICIPANT: Process that performs the actual task in the distributed system.
+    * SPECTATOR: Process that observes the changes in the cluster.
+    * ADMIN: To carry out system admin actions.
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3.
+
+After the Helix manager instance is created, the only thing that needs to be registered is the listener.  When the ExternalView changes, the listener is notified.
+
+### Spectator Code
+
+A spectator observes the cluster and is notified when the state of the system changes. Helix consolidates the state of the entire cluster in one Znode called ExternalView.
+Helix provides a default implementation, RoutingTableProvider, which caches the cluster state and updates it when there is a change in the cluster.
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.SPECTATOR,
+                                                zkConnectString);
+manager.connect();
+RoutingTableProvider routingTableProvider = new RoutingTableProvider();
+manager.addExternalViewChangeListener(routingTableProvider);
+```
+
+In the following code snippet, the application sends a request to a valid instance by interrogating the external view.  Suppose the data needed for this request is in the partition myDB_1.
+
+```
+// instances = routingTableProvider.getInstances("RESOURCE_NAME", "PARTITION_NAME", "PARTITION_STATE");
+instances = routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
+
+////////////////////////////////////////////////////////////////////////////////////////////////
+// Application-specific code to send a request to one of the instances                        //
+////////////////////////////////////////////////////////////////////////////////////////////////
+
+theInstance = instances.get(0);  // should choose an instance and throw an exception if none are available
+result = theInstance.sendRequest(yourApplicationRequest, responseObject);
+
+```
+
+When the external view changes, the application needs to react by sending requests to a different instance.  
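+
+As a minimal sketch (assuming hypothetical, application-defined Request/Response types and a sendRequest helper), the lookup can simply be repeated before each attempt so that routing changes are picked up automatically:
+
+```
+// Re-query the routing table on each attempt so that external view changes
+// are picked up automatically. Request, Response, and sendRequest are
+// hypothetical application-defined code.
+List<InstanceConfig> candidates = routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
+if (candidates.isEmpty()) {
+  throw new IllegalStateException("No ONLINE instance currently serves myDB_1");
+}
+Response response = sendRequest(candidates.get(0), request);
+```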
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_state.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_state.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_state.md
new file mode 100644
index 0000000..4f7b1b5
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_state.md
@@ -0,0 +1,131 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - State Machine Configuration</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): State Machine Configuration
+
+In this chapter, we\'ll learn about the state models provided by Helix, and how to create your own custom state model.
+
+## State Models
+
+Helix comes with 3 default state models that are commonly used.  It is possible to have multiple state models in a cluster.
+Every resource that is added should be configured to use a state model that governs its _ideal state_.
+
+### MASTER-SLAVE
+
+* 3 states: OFFLINE, SLAVE, MASTER
+* Maximum number of masters: 1
+* The number of slaves is based on the replication factor, which can be specified while adding the resource.
+
+
+### ONLINE-OFFLINE
+
+* Has 2 states: OFFLINE and ONLINE.  This simple state model is a good starting point for most applications.
+
+### LEADER-STANDBY
+
+* 1 Leader and multiple stand-bys.  The idea is that exactly one leader accomplishes a designated task, while the stand-bys are ready to take over if the leader fails.
+
+## Constraints
+
+In addition to the state machine configuration, one can specify the constraints of states and transitions.
+
+For example, one can say:
+
+* MASTER:1
+<br/>Maximum number of replicas in MASTER state at any time is 1
+
+* OFFLINE-SLAVE:5
+<br/>The maximum number of OFFLINE-SLAVE transitions that can happen concurrently across the system is 5.
+
+### Dynamic State Constraints
+
+We also support two dynamic upper bounds for the number of replicas in each state:
+
+* N: The number of replicas in the state is at most the number of live participants in the cluster
+* R: The number of replicas in the state is at most the specified replica count for the partition
+
+### State Priority
+
+Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 MASTER and 2 SLAVES, but only 1 node is active, Helix must promote it to MASTER. This behavior is achieved by providing the state priority list as \[MASTER, SLAVE\].
+
+### State Transition Priority
+
+Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. You can control this by overriding the priority order.
+
+## Special States
+
+### DROPPED
+
+The DROPPED state is used to signify a replica that was served by a given participant, but is no longer served. This allows Helix and its participants to effectively clean up. There are two requirements that every new state model should follow with respect to the DROPPED state:
+
+* The DROPPED state must be defined
+* There must be a path to DROPPED for every state in the model
+
+### ERROR
+
+The ERROR state is used whenever the participant serving a partition encounters an error and cannot continue to serve the partition. HelixAdmin has \"reset\" functionality to allow participants to recover from the ERROR state.
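+
+As a sketch, a partition stuck in ERROR can be reset through HelixAdmin (the cluster, instance, resource, and partition names below are placeholders):
+
+```
+// Reset a partition that is in the ERROR state on a given participant.
+// All names here are illustrative placeholders.
+HelixAdmin admin = new ZKHelixAdmin(zkConnectString);
+admin.resetPartition("MYCLUSTER", "localhost_12913", "MyResource",
+    Arrays.asList("MyResource_0"));
+```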
+
+## Annotated Example
+
+Below is a complete definition of a Master-Slave state model. Notice the fields marked REQUIRED; these are essential for any state model definition.
+
+```
+StateModelDefinition stateModel = new StateModelDefinition.Builder("MasterSlave")
+  // OFFLINE is the state that the system starts in (initial state is REQUIRED)
+  .initialState("OFFLINE")
+
+  // Lowest number here indicates highest priority, no value indicates lowest priority
+  .addState("MASTER", 1)
+  .addState("SLAVE", 2)
+  .addState("OFFLINE")
+
+  // Note the special inclusion of the DROPPED state (REQUIRED)
+  .addState(HelixDefinedState.DROPPED.toString())
+
+  // No more than one master allowed
+  .upperBound("MASTER", 1)
+
+  // R indicates an upper bound of number of replicas for each partition
+  .dynamicUpperBound("SLAVE", "R")
+
+  // Add some high-priority transitions
+  .addTransition("SLAVE", "MASTER", 1)
+  .addTransition("OFFLINE", "SLAVE", 2)
+
+  // Using the same priority value indicates that these transitions can fire in any order
+  .addTransition("MASTER", "SLAVE", 3)
+  .addTransition("SLAVE", "OFFLINE", 3)
+
+  // Not specifying a value defaults to lowest priority
+  // Notice the inclusion of the OFFLINE to DROPPED transition
+  // Since every state has a path to OFFLINE, they each now have a path to DROPPED (REQUIRED)
+  .addTransition("OFFLINE", HelixDefinedState.DROPPED.toString())
+
+  // Create the StateModelDefinition instance
+  .build();
+
+// Use the isValid() function to make sure the StateModelDefinition will work without issues
+Assert.assertTrue(stateModel.isValid());
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_throttling.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_throttling.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_throttling.md
new file mode 100644
index 0000000..7417979
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_throttling.md
@@ -0,0 +1,38 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Throttling</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Throttling
+
+In this chapter, we\'ll learn how to control the parallel execution of cluster tasks.  Only a centralized cluster manager with global knowledge is capable of coordinating this decision.
+
+### Throttling
+
+Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some of the transitions may be lightweight, but some might involve moving data, which is quite expensive from a network and IOPS perspective.
+
+Helix allows applications to set a threshold on transitions. The threshold can be set at multiple scopes (a sketch of setting one follows the list):
+
+* MessageType e.g. STATE_TRANSITION
+* TransitionType e.g. SLAVE-MASTER
+* Resource e.g. database
+* Node i.e. per-node maximum transitions in parallel
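+
+As a minimal sketch (assuming a connected HelixManager named manager; the constraint id "MaxTransitions" and the value "50" are placeholders), a cluster-wide cap on concurrent state transition messages might be set like this:
+
+```
+// Cap the number of concurrent STATE_TRANSITION messages cluster-wide.
+// "MaxTransitions" and "50" are illustrative placeholders.
+ConstraintItemBuilder builder = new ConstraintItemBuilder();
+builder.addConstraintAttribute(ConstraintAttribute.MESSAGE_TYPE.toString(), "STATE_TRANSITION")
+       .addConstraintAttribute(ConstraintAttribute.CONSTRAINT_VALUE.toString(), "50");
+manager.getClusterManagmentTool().setConstraint(clusterName,
+    ConstraintType.MESSAGE_CONSTRAINT, "MaxTransitions", builder.build());
+```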
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_user_def_rebalancer.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_user_def_rebalancer.md
new file mode 100644
index 0000000..f30aafc
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_user_def_rebalancer.md
@@ -0,0 +1,227 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - User-Defined Rebalancing</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): User-Defined Rebalancing
+
+Even though Helix can compute both the location and the state of replicas internally using a default fully-automatic rebalancer, specific applications may require rebalancing strategies that optimize for different requirements. Helix therefore allows applications to plug in arbitrary rebalancer algorithms that implement a provided interface. One of the main design goals of Helix is to provide maximum flexibility to any distributed application, so it allows applications to fully implement the rebalancer, which is the core constraint solver in the system, if the application developer so chooses.
+
+Whenever the state of the cluster changes, as is the case when participants join or leave the cluster, Helix automatically calls the rebalancer to compute a new mapping of all the replicas in the resource. When using a pluggable rebalancer, the only required step is to register it with Helix. Subsequently, no additional bootstrapping steps are necessary. Helix uses reflection to look up and load the class dynamically at runtime. As a result, it is also technically possible to change the rebalancing strategy used at any time.
+
+The [HelixRebalancer](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html) interface is as follows:
+
+```
+public void init(HelixManager helixManager);
+
+public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConfig, Cluster cluster,
+    ResourceCurrentState currentState);
+```
+The first parameter is a configuration of the resource to rebalance, the second is a full cache of all of the cluster data available to Helix, and the third is a snapshot of the actual current placements and state assignments. From the cluster variable, it is also possible to access the ResourceAssignment last generated by this rebalancer. Internally, Helix implements the same interface for its own rebalancing routines, so a user-defined rebalancer will be cognizant of the same information about the cluster as an internal implementation. Helix strives to provide applications the ability to implement algorithms that may require a large portion of the entire state of the cluster to make the best placement and state assignment decisions possible.
+
+A ResourceAssignment is a full representation of the location and the state of each replica of each partition of a given resource. This is a simple representation of the placement that the algorithm believes is the best possible. If the placement meets all defined constraints, this is what will become the actual state of the distributed system.
+
+### Rebalancer Context
+
+Helix provides an interface called [RebalancerContext](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/context/RebalancerContext.html). To support the four main [rebalancing modes](./tutorial_rebalance.html), there is a base class called [PartitionedRebalancerContext](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/context/PartitionedRebalancerContext.html), which contains all of the basic properties required for a partitioned resource. Helix provides three classes derived from PartitionedRebalancerContext: FullAutoRebalancerContext, SemiAutoRebalancerContext, and CustomizedRebalancerContext. If none of these work for your application, you can create your own class that derives from PartitionedRebalancerContext (or even one that only implements RebalancerContext).
+
+### Specifying a Rebalancer
+
+#### Using Logical Accessors
+To specify the rebalancer, one can use ```PartitionedRebalancerContext#setRebalancerRef(RebalancerRef)``` to point to the specific rebalancer implementation. For example, here\'s a basic PartitionedRebalancerContext constructed with a user-specified class:
+
+```
+RebalancerRef rebalancerRef = RebalancerRef.from(className);
+PartitionedRebalancerContext rebalanceContext =
+    new PartitionedRebalancerContext.Builder(resourceId).replicaCount(1).addPartition(partition1)
+        .addPartition(partition2).stateModelDefId(stateModelDef.getStateModelDefId())
+        .rebalancerRef(rebalancerRef).build();
+```
+
+The class name is a fully-qualified class name consisting of its package and its name, and the class should implement the HelixRebalancer interface. Now, the context can be added to a ResourceConfig through ```ResourceConfig.Builder#rebalancerContext(RebalancerContext)```, and the context will automatically be made available to the rebalancer for all subsequent executions.
+
+#### Using HelixAdmin
+For implementations that set up the cluster through existing code, the following HelixAdmin calls will update the Rebalancer class:
+
+```
+IdealState idealState = helixAdmin.getResourceIdealState(clusterName, resourceName);
+idealState.setRebalanceMode(RebalanceMode.USER_DEFINED);
+idealState.setRebalancerClassName(className);
+helixAdmin.setResourceIdealState(clusterName, resourceName, idealState);
+```
+There are two key fields to set to specify that a pluggable rebalancer should be used. First, the rebalance mode should be set to USER_DEFINED; second, the rebalancer class name should be set to a class that implements HelixRebalancer and is available at runtime. The class name is a fully-qualified class name consisting of its package and its name.
+
+#### Using YAML
+Alternatively, the rebalancer class name can be specified in a YAML file representing the cluster configuration. The requirements are the same, but the representation is more compact. Below are the first few lines of an example YAML file. To see a full YAML specification, see the [YAML tutorial](./tutorial_yaml.html).
+
+```
+clusterName: lock-manager-custom-rebalancer # unique name for the cluster
+resources:
+  - name: lock-group # unique resource name
+    rebalancer: # we will provide our own rebalancer
+      mode: USER_DEFINED
+      class: domain.project.helix.rebalancer.UserDefinedRebalancerClass
+...
+```
+
+### Example
+We demonstrate plugging in a simple user-defined rebalancer as part of a revisit of the [distributed lock manager](./recipes/user_def_rebalancer.html) example. It includes a functional Rebalancer implementation, as well as the entire YAML file used to define the cluster.
+
+Consider the case where partitions are locks in a lock manager: 6 locks are to be distributed evenly across a set of participants, and only one participant can hold each lock. We can define a rebalancing algorithm that simply takes the modulus of the lock number and the number of participants to evenly distribute the locks across participants. Helix allows capping the number of partitions a participant can accept, but since locks are lightweight, we do not need to define a restriction in this case. The following is a succinct implementation of this algorithm.
+
+```
+@Override
+public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConfig, Cluster cluster,
+    ResourceCurrentState currentState) {
+  // Get the rebalancer context (a basic partitioned one)
+  PartitionedRebalancerContext context = rebalancerConfig.getRebalancerContext(
+      PartitionedRebalancerContext.class);
+
+  // Initialize an empty mapping of locks to participants
+  ResourceAssignment assignment = new ResourceAssignment(context.getResourceId());
+
+  // Get the list of live participants in the cluster
+  List<ParticipantId> liveParticipants = new ArrayList<ParticipantId>(
+      cluster.getLiveParticipantMap().keySet());
+
+  // Get the state model (should be a simple lock/unlock model) and the highest-priority state
+  StateModelDefId stateModelDefId = context.getStateModelDefId();
+  StateModelDefinition stateModelDef = cluster.getStateModelMap().get(stateModelDefId);
+  if (stateModelDef.getStatesPriorityList().size() < 1) {
+    LOG.error("Invalid state model definition. There should be at least one state.");
+    return assignment;
+  }
+  State lockState = stateModelDef.getTypedStatesPriorityList().get(0);
+
+  // Count the number of participants allowed to lock each lock
+  String stateCount = stateModelDef.getNumParticipantsPerState(lockState);
+  int lockHolders = 0;
+  try {
+    // a numeric value is a custom-specified number of participants allowed to lock the lock
+    lockHolders = Integer.parseInt(stateCount);
+  } catch (NumberFormatException e) {
+    LOG.error("Invalid state model definition. The lock state does not have a valid count");
+    return assignment;
+  }
+
+  // Fairly assign the lock state to the participants using a simple mod-based sequential
+  // assignment. For instance, if each lock can be held by 3 participants, lock 0 would be held
+  // by participants (0, 1, 2), lock 1 would be held by (1, 2, 3), and so on, wrapping around the
+  // number of participants as necessary.
+  // This assumes a simple lock-unlock model where the only state of interest is which nodes have
+  // acquired each lock.
+  int i = 0;
+  for (PartitionId partition : context.getPartitionSet()) {
+    Map<ParticipantId, State> replicaMap = new HashMap<ParticipantId, State>();
+    for (int j = i; j < i + lockHolders; j++) {
+      int participantIndex = j % liveParticipants.size();
+      ParticipantId participant = liveParticipants.get(participantIndex);
+      // enforce that a participant can only have one instance of a given lock
+      if (!replicaMap.containsKey(participant)) {
+        replicaMap.put(participant, lockState);
+      }
+    }
+    assignment.addReplicaMap(partition, replicaMap);
+    i++;
+  }
+  return assignment;
+}
+```
+
+Here is the ResourceAssignment emitted by the user-defined rebalancer for a 3-participant system whenever there is a change to the set of participants.
+
+* Participant_A joins
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_A": "LOCKED"},
+  "lock_2": { "Participant_A": "LOCKED"},
+  "lock_3": { "Participant_A": "LOCKED"},
+  "lock_4": { "Participant_A": "LOCKED"},
+  "lock_5": { "Participant_A": "LOCKED"},
+}
+```
+
+A ResourceAssignment is, for each resource, a full mapping from each partition to the participants serving its replicas and the state of each replica. The state model is a simple LOCKED/RELEASED model, so participant A holds all lock partitions in the LOCKED state.
+
+* Participant_B joins
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_B": "LOCKED"},
+  "lock_2": { "Participant_A": "LOCKED"},
+  "lock_3": { "Participant_B": "LOCKED"},
+  "lock_4": { "Participant_A": "LOCKED"},
+  "lock_5": { "Participant_B": "LOCKED"},
+}
+```
+
+Now that there are two participants, the simple mod-based function assigns every other lock to the second participant. On any system change, the rebalancer is invoked so that the application can define how to redistribute its resources.
+
+* Participant_C joins (steady state)
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_B": "LOCKED"},
+  "lock_2": { "Participant_C": "LOCKED"},
+  "lock_3": { "Participant_A": "LOCKED"},
+  "lock_4": { "Participant_B": "LOCKED"},
+  "lock_5": { "Participant_C": "LOCKED"},
+}
+```
+
+This is the steady state of the system. Notice that four of the six locks now have a different owner. That is because of the naïve modulus-based assignment approach used by the user-defined rebalancer. However, the interface is flexible enough to allow you to employ consistent hashing or any other scheme if minimal movement is a system requirement.
+
+* Participant_B fails
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_C": "LOCKED"},
+  "lock_2": { "Participant_A": "LOCKED"},
+  "lock_3": { "Participant_C": "LOCKED"},
+  "lock_4": { "Participant_A": "LOCKED"},
+  "lock_5": { "Participant_C": "LOCKED"},
+}
+```
+
+On any node failure, as in the case of node addition, the rebalancer is invoked automatically so that it can generate a new mapping as a response to the change. Helix ensures that the Rebalancer has the opportunity to reassign locks as required by the application.
+
+* Participant_B (or the replacement for the original Participant_B) rejoins
+
+```
+{
+  "lock_0": { "Participant_A": "LOCKED"},
+  "lock_1": { "Participant_B": "LOCKED"},
+  "lock_2": { "Participant_C": "LOCKED"},
+  "lock_3": { "Participant_A": "LOCKED"},
+  "lock_4": { "Participant_B": "LOCKED"},
+  "lock_5": { "Participant_C": "LOCKED"},
+}
+```
+
+The rebalancer was invoked once again and the resulting ResourceAssignment reflects the steady state.
+
+### Caveats
+- The rebalancer class must be available at runtime, or else Helix will not attempt to rebalance at all.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/tutorial_yaml.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/tutorial_yaml.md b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_yaml.md
new file mode 100644
index 0000000..0f8e0cc
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/tutorial_yaml.md
@@ -0,0 +1,102 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - YAML Cluster Setup</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): YAML Cluster Setup
+
+As an alternative to using Helix Admin to set up the cluster, its resources, constraints, and the state model, Helix supports bootstrapping a cluster configuration based on a YAML file. Below is an annotated example of such a file for a simple distributed lock manager where a lock can only be LOCKED or RELEASED, and each lock only allows a single participant to hold it in the LOCKED state.
+
+```
+clusterName: lock-manager-custom-rebalancer # unique name for the cluster (required)
+resources:
+  - name: lock-group # unique resource name (required)
+    rebalancer: # required
+      mode: USER_DEFINED # required - USER_DEFINED means we will provide our own rebalancer
+      class: org.apache.helix.userdefinedrebalancer.LockManagerRebalancer # required for USER_DEFINED
+    partitions:
+      count: 12 # number of partitions for the resource (default is 1)
+      replicas: 1 # number of replicas per partition (default is 1)
+    stateModel:
+      name: lock-unlock # model name (required)
+      states: [LOCKED, RELEASED, DROPPED] # the list of possible states (required if model not built-in)
+      transitions: # the list of possible transitions (required if model not built-in)
+        - name: Unlock
+          from: LOCKED
+          to: RELEASED
+        - name: Lock
+          from: RELEASED
+          to: LOCKED
+        - name: DropLock
+          from: LOCKED
+          to: DROPPED
+        - name: DropUnlock
+          from: RELEASED
+          to: DROPPED
+        - name: Undrop
+          from: DROPPED
+          to: RELEASED
+      initialState: RELEASED # (required if model not built-in)
+    constraints:
+      state:
+        counts: # maximum number of replicas of a partition that can be in each state (required if model not built-in)
+          - name: LOCKED
+            count: "1"
+          - name: RELEASED
+            count: "-1"
+          - name: DROPPED
+            count: "-1"
+        priorityList: [LOCKED, RELEASED, DROPPED] # states in order of priority (all priorities equal if not specified)
+      transition: # transitions priority to enforce order that transitions occur
+        priorityList: [Unlock, Lock, Undrop, DropUnlock, DropLock] # all priorities equal if not specified
+participants: # list of nodes that can serve replicas (optional if dynamic joining is active, required otherwise)
+  - name: localhost_12001
+    host: localhost
+    port: 12001
+  - name: localhost_12002
+    host: localhost
+    port: 12002
+  - name: localhost_12003
+    host: localhost
+    port: 12003
+```
+
+Using a file like the one above, the cluster can be set up either with the command line:
+
+```
+incubator-helix/helix-core/target/helix-core/pkg/bin/YAMLClusterSetup.sh localhost:2199 lock-manager-config.yaml
+```
+
+or with code:
+
+```
+YAMLClusterSetup setup = new YAMLClusterSetup(zkAddress);
+InputStream input =
+    Thread.currentThread().getContextClassLoader()
+        .getResourceAsStream("lock-manager-config.yaml");
+YAMLClusterSetup.YAMLClusterConfig config = setup.setupCluster(input);
+```
+
+Some notes:
+
+- A rebalancer class is only required for the USER_DEFINED mode. It is ignored otherwise.
+
+- Built-in state models, like OnlineOffline, LeaderStandby, and MasterSlave, or state models that have already been added, only require a name for stateModel. If partition and/or replica counts are not provided, a value of 1 is assumed.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/.htaccess
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/.htaccess b/site-releases/0.7.0-incubating/src/site/resources/.htaccess
new file mode 100644
index 0000000..d5c7bf3
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/resources/.htaccess
@@ -0,0 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+Redirect /download.html /download.cgi

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/download.cgi
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/download.cgi b/site-releases/0.7.0-incubating/src/site/resources/download.cgi
new file mode 100644
index 0000000..f9a0e30
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/resources/download.cgi
@@ -0,0 +1,22 @@
+#!/bin/sh
+# Just call the standard mirrors.cgi script. It will use download.html
+# as the input template.
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+exec /www/www.apache.org/dyn/mirrors/mirrors.cgi $*

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/images/HELIX-components.png
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/images/HELIX-components.png b/site-releases/0.7.0-incubating/src/site/resources/images/HELIX-components.png
new file mode 100644
index 0000000..c0c35ae
Binary files /dev/null and b/site-releases/0.7.0-incubating/src/site/resources/images/HELIX-components.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/images/PFS-Generic.png
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/images/PFS-Generic.png b/site-releases/0.7.0-incubating/src/site/resources/images/PFS-Generic.png
new file mode 100644
index 0000000..7eea3a0
Binary files /dev/null and b/site-releases/0.7.0-incubating/src/site/resources/images/PFS-Generic.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/images/RSYNC_BASED_PFS.png
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/images/RSYNC_BASED_PFS.png b/site-releases/0.7.0-incubating/src/site/resources/images/RSYNC_BASED_PFS.png
new file mode 100644
index 0000000..0cc55ae
Binary files /dev/null and b/site-releases/0.7.0-incubating/src/site/resources/images/RSYNC_BASED_PFS.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/images/bootstrap_statemodel.gif
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/images/bootstrap_statemodel.gif b/site-releases/0.7.0-incubating/src/site/resources/images/bootstrap_statemodel.gif
new file mode 100644
index 0000000..b8f8a42
Binary files /dev/null and b/site-releases/0.7.0-incubating/src/site/resources/images/bootstrap_statemodel.gif differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/images/helix-architecture.png
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/images/helix-architecture.png b/site-releases/0.7.0-incubating/src/site/resources/images/helix-architecture.png
new file mode 100644
index 0000000..6f69a2d
Binary files /dev/null and b/site-releases/0.7.0-incubating/src/site/resources/images/helix-architecture.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/images/helix-logo.jpg
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/images/helix-logo.jpg b/site-releases/0.7.0-incubating/src/site/resources/images/helix-logo.jpg
new file mode 100644
index 0000000..d6428f6
Binary files /dev/null and b/site-releases/0.7.0-incubating/src/site/resources/images/helix-logo.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/images/helix-znode-layout.png
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/images/helix-znode-layout.png b/site-releases/0.7.0-incubating/src/site/resources/images/helix-znode-layout.png
new file mode 100644
index 0000000..5bafc45
Binary files /dev/null and b/site-releases/0.7.0-incubating/src/site/resources/images/helix-znode-layout.png differ


[09/16] [HELIX-270] Include documentation for previous version on the website

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/xdoc/download.xml.vm
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/xdoc/download.xml.vm b/site-releases/0.6.2-incubating/src/site/xdoc/download.xml.vm
new file mode 100644
index 0000000..9a96a7d
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/xdoc/download.xml.vm
@@ -0,0 +1,193 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+-->
+#set( $releaseName = "0.6.2-incubating" )
+#set( $releaseDate = "10/31/2013" )
+<document xmlns="http://maven.apache.org/XDOC/2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+          xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+
+  <properties>
+    <title>Apache Incubator Helix Downloads</title>
+    <author email="dev@helix.incubator.apache.org">Apache Helix Documentation Team</author>
+  </properties>
+
+  <body>
+    <div class="toc_container">
+      <macro name="toc">
+        <param name="class" value="toc"/>
+      </macro>
+    </div>
+    
+    <section name="Introduction">
+      <p>Apache Helix artifacts are distributed in source and binary form under the terms of the
+        <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache License, Version 2.0</a>.
+        See the <tt>LICENSE</tt> and <tt>NOTICE</tt> files included in each artifact for additional license
+        information.
+      </p>
+      <p>Use the links below to download a source distribution of Apache Helix.
+      It is good practice to <a href="#Verifying_Releases">verify the integrity</a> of the distribution files.</p>
+    </section>
+
+    <section name="Release">
+      <p>Release date: ${releaseDate} </p>
+      <p><a href="releasenotes/release-${releaseName}.html">${releaseName} Release notes</a></p>
+      <a name="mirror"/>
+      <subsection name="Mirror">
+
+        <p>
+          [if-any logo]
+          <a href="[link]">
+            <img align="right" src="[logo]" border="0"
+                 alt="logo"/>
+          </a>
+          [end]
+          The currently selected mirror is
+          <b>[preferred]</b>.
+          If you encounter a problem with this mirror,
+          please select another mirror.
+          If all mirrors are failing, there are
+          <i>backup</i>
+          mirrors
+          (at the end of the mirrors list) that should be available.
+        </p>
+
+        <form action="[location]" method="get" id="SelectMirror" class="form-inline">
+          Other mirrors:
+          <select name="Preferred" class="input-xlarge">
+            [if-any http]
+            [for http]
+            <option value="[http]">[http]</option>
+            [end]
+            [end]
+            [if-any ftp]
+            [for ftp]
+            <option value="[ftp]">[ftp]</option>
+            [end]
+            [end]
+            [if-any backup]
+            [for backup]
+            <option value="[backup]">[backup] (backup)</option>
+            [end]
+            [end]
+          </select>
+          <input type="submit" value="Change" class="btn"/>
+        </form>
+
+        <p>
+          You may also consult the
+          <a href="http://www.apache.org/mirrors/">complete list of mirrors.</a>
+        </p>
+
+      </subsection>
+      <subsection name="${releaseName} Sources">
+        <table>
+          <thead>
+            <tr>
+              <th>Artifact</th>
+              <th>Signatures</th>
+            </tr>
+          </thead>
+          <tbody>
+            <tr>
+              <td>
+                <a href="[preferred]incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip">helix-${releaseName}-src.zip</a>
+              </td>
+              <td>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip.asc">asc</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip.md5">md5</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip.sha1">sha1</a>
+              </td>
+            </tr>
+          </tbody>
+        </table>
+      </subsection>
+      <subsection name="${releaseName} Binaries">
+        <table>
+          <thead>
+            <tr>
+              <th>Artifact</th>
+              <th>Signatures</th>
+            </tr>
+          </thead>
+          <tbody>
+            <tr>
+              <td>
+                <a href="[preferred]incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar">helix-core-${releaseName}-pkg.tar</a>
+              </td>
+              <td>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar.asc">asc</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar.md5">md5</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar.sha1">sha1</a>
+              </td>
+            </tr>
+            <tr>
+              <td>
+                <a href="[preferred]incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar">helix-admin-webapp-${releaseName}-pkg.tar</a>
+              </td>
+              <td>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar.asc">asc</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar.md5">md5</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar.sha1">sha1</a>
+              </td>
+            </tr>
+          </tbody>
+        </table>
+      </subsection>
+    </section>
+
+<!--    <section name="Older Releases">
+    </section>-->
+
+    <section name="Verifying Releases">
+      <p>We strongly recommend you verify the integrity of the downloaded files with both PGP and MD5.</p>
+      
+      <p>The PGP signatures can be verified using <a href="http://www.pgpi.org/">PGP</a> or 
+      <a href="http://www.gnupg.org/">GPG</a>. 
+      First download the <a href="http://www.apache.org/dist/incubator/helix/KEYS">KEYS</a> as well as the
+      <tt>*.asc</tt> signature file for the particular distribution. Make sure you get these files from the main 
+      distribution directory, rather than from a mirror. Then verify the signatures using one of the following sets of
+      commands:
+
+        <source>$ pgp -ka KEYS
+$ pgp helix-*.zip.asc</source>
+      
+        <source>$ gpg --import KEYS
+$ gpg --verify helix-*.zip.asc</source>
+       </p>
+    <p>Alternatively, you can verify the MD5 checksums of the files. A Unix/Linux program called
+      <code>md5</code> or 
+      <code>md5sum</code> is included in most distributions.  It is also available as part of
+      <a href="http://www.gnu.org/software/textutils/textutils.html">GNU Textutils</a>.
+      Windows users can get binary md5 programs from these (and likely other) places:
+      <ul>
+        <li>
+          <a href="http://www.md5summer.org/">http://www.md5summer.org/</a>
+        </li>
+        <li>
+          <a href="http://www.fourmilab.ch/md5/">http://www.fourmilab.ch/md5/</a>
+        </li>
+        <li>
+          <a href="http://www.pc-tools.net/win32/md5sums/">http://www.pc-tools.net/win32/md5sums/</a>
+        </li>
+      </ul>
+    </p>
+    </section>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/test/conf/testng.xml
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/test/conf/testng.xml b/site-releases/0.6.2-incubating/src/test/conf/testng.xml
new file mode 100644
index 0000000..58f0803
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/test/conf/testng.xml
@@ -0,0 +1,27 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
+<suite name="Suite" parallel="none">
+  <test name="Test" preserve-order="false">
+    <packages>
+      <package name="org.apache.helix"/>
+    </packages>
+  </test>
+</suite>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/pom.xml
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/pom.xml b/site-releases/0.7.0-incubating/pom.xml
new file mode 100644
index 0000000..b7406a7
--- /dev/null
+++ b/site-releases/0.7.0-incubating/pom.xml
@@ -0,0 +1,51 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <parent>
+    <groupId>org.apache.helix</groupId>
+    <artifactId>site-releases</artifactId>
+    <version>0.6.2-incubating-SNAPSHOT</version>
+  </parent>
+
+  <artifactId>0.7.0-incubating-site</artifactId>
+  <packaging>bundle</packaging>
+  <name>Apache Helix :: Site :: 0.7.0-incubating</name>
+
+  <properties>
+  </properties>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.testng</groupId>
+      <artifactId>testng</artifactId>
+      <version>6.0.1</version>
+    </dependency>
+  </dependencies>
+  <build>
+    <pluginManagement>
+      <plugins>
+      </plugins>
+    </pluginManagement>
+    <plugins>
+    </plugins>
+  </build>
+</project>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/apt/privacy-policy.apt
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/apt/privacy-policy.apt b/site-releases/0.7.0-incubating/src/site/apt/privacy-policy.apt
new file mode 100644
index 0000000..ada9363
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/apt/privacy-policy.apt
@@ -0,0 +1,52 @@
+ ----
+ Privacy Policy
+ -----
+ Olivier Lamy
+ -----
+ 2013-02-04
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one
+~~ or more contributor license agreements.  See the NOTICE file
+~~ distributed with this work for additional information
+~~ regarding copyright ownership.  The ASF licenses this file
+~~ to you under the Apache License, Version 2.0 (the
+~~ "License"); you may not use this file except in compliance
+~~ with the License.  You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing,
+~~ software distributed under the License is distributed on an
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+~~ KIND, either express or implied.  See the License for the
+~~ specific language governing permissions and limitations
+~~ under the License.
+
+Privacy Policy
+
+  Information about your use of this website is collected using server access logs and a tracking cookie. The 
+  collected information consists of the following:
+
+  [[1]] The IP address from which you access the website;
+  
+  [[2]] The type of browser and operating system you use to access our site;
+  
+  [[3]] The date and time you access our site;
+  
+  [[4]] The pages you visit; and
+  
+  [[5]] The addresses of pages from where you followed a link to our site.
+
+  []
+
+  Part of this information is gathered using a tracking cookie set by the 
+  {{{http://www.google.com/analytics/}Google Analytics}} service and handled by Google as described in their 
+  {{{http://www.google.com/privacy.html}privacy policy}}. See your browser documentation for instructions on how to 
+  disable the cookie if you prefer not to share this data with Google.
+
+  We use the gathered information to help us make our site more useful to visitors and to better understand how and 
+  when our site is used. We do not track or collect personally identifiable information or associate gathered data 
+  with any personally identifying information from other sources.
+
+  By using this website, you consent to the collection of this data in the manner and for the purpose described above.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/apt/releasenotes/release-0.7.0-incubating.apt
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/apt/releasenotes/release-0.7.0-incubating.apt b/site-releases/0.7.0-incubating/src/site/apt/releasenotes/release-0.7.0-incubating.apt
new file mode 100644
index 0000000..7661df0
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/apt/releasenotes/release-0.7.0-incubating.apt
@@ -0,0 +1,174 @@
+ -----
+ Release Notes for Apache Helix 0.7.0-incubating
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one                      
+~~ or more contributor license agreements.  See the NOTICE file                    
+~~ distributed with this work for additional information                           
+~~ regarding copyright ownership.  The ASF licenses this file                      
+~~ to you under the Apache License, Version 2.0 (the                               
+~~ "License"); you may not use this file except in compliance                      
+~~ with the License.  You may obtain a copy of the License at                      
+~~                                                                                 
+~~   http://www.apache.org/licenses/LICENSE-2.0                                    
+~~                                                                                 
+~~ Unless required by applicable law or agreed to in writing,                      
+~~ software distributed under the License is distributed on an                     
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY                          
+~~ KIND, either express or implied.  See the License for the                       
+~~ specific language governing permissions and limitations                         
+~~ under the License.
+
+~~ NOTE: For help with the syntax of this file, see:
+~~ http://maven.apache.org/guides/mini/guide-apt-format.html
+
+Release Notes for Apache Helix 0.7.0-incubating
+
+  The Apache Helix team would like to announce the release of Apache Helix 0.7.0-incubating
+
+  This is the fourth release and second major release under the Apache umbrella.
+
+  Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes. Helix provides the following features:
+
+  * Automatic assignment of resource/partition to nodes
+
+  * Node failure detection and recovery
+
+  * Dynamic addition of Resources
+
+  * Dynamic addition of nodes to the cluster
+
+  * Pluggable distributed state machine to manage the state of a resource via state transitions
+
+  * Automatic load balancing and throttling of transitions
+
+  * Configurable, pluggable rebalancing
+
+  []
+
+* Changes
+
+** Sub-task
+
+    * [HELIX-18] - Unify cluster setup and helixadmin
+
+    * [HELIX-79] - consecutive GC may mess up helix session ids
+
+    * [HELIX-83] - Add typed classes to denote helix ids
+
+    * [HELIX-90] - Clean up Api's
+
+    * [HELIX-98] - clean up setting constraint api
+
+    * [HELIX-100] - Improve the helix config api
+
+    * [HELIX-102] - Add new wrapper classes for Participant, Controller, Spectator, Administrator
+
+    * [HELIX-104] - Add support to reuse zkclient
+
+    * [HELIX-123] - ZkHelixManager.isLeader() should check session id in addition to instance name
+
+    * [HELIX-139] - Need to double check the logic to prevent 2 controllers to control the same cluster
+
+    * [HELIX-168] - separate HelixManager implementation for participant, controller, and distributed controller
+
+    * [HELIX-176] - Need a list of tests that must pass to certify a release
+
+    * [HELIX-224] - Move helix examples to separate module
+
+    * [HELIX-233] - Ensure that website and wiki fully capture the updated changes in 0.7.0
+
+    * [HELIX-234] - Create concrete id classes for constructs, replacing strings
+
+    * [HELIX-235] - Create a hierarchical logical model for the cluster
+
+    * [HELIX-236] - Create a hierarchical cluster snapshot to replace ClusterDataCache
+
+    * [HELIX-237] - Create helix-internal config classes for the hierarchical model
+
+    * [HELIX-238] - Create accessors for the logical model
+
+    * [HELIX-239] - List use cases for the logical model
+
+    * [HELIX-240] - Write an example of the key use cases for the logical model
+
+    * [HELIX-241] - Write the controller pipeline with the logical model
+
+    * [HELIX-242] - Re-integrate the scheduler rebalancing into the new controller pipeline
+
+    * [HELIX-243] - Fix failing tests related to helix model overhaul
+
+    * [HELIX-244] - Redesign rebalancers using rebalancer-specific configs
+
+    * [HELIX-246] - Refactor scheduler task config to comply with new rebalancer config and fix related scheduler task tests
+
+    * [HELIX-248] - Resource logical model should be general enough to handle various resource types
+
+    * [HELIX-268] - Atomic API
+
+    * [HELIX-297] - Make 0.7.0 backward compatible for user-defined rebalancing
+
+
+** Bug
+
+    * [HELIX-40] - fix zkclient subscribe path leaking and zk callback-handler leaking in case of session expiry
+
+    * [HELIX-46] - Add REST/cli admin command for message selection constraints
+
+    * [HELIX-47] - when drop resource, remove resource-level config also
+
+    * [HELIX-48] - use resource instead of db in output messages
+
+    * [HELIX-50] - Ensure num replicas and preference list size in idealstate matches
+
+    * [HELIX-59] - controller not cleaning dead external view generated from old sessions
+
+    * [HELIX-136] - Write IdealState back to ZK when computed by custom Rebalancer
+
+    * [HELIX-200] - helix controller send ERROR->DROPPED transition infinitely
+
+    * [HELIX-214] - User-defined rebalancer should never use SEMI_AUTO code paths
+
+    * [HELIX-225] - fix helix-example package build error
+
+    * [HELIX-271] - ZkHelixAdmin#addResource() backward compatible problem
+
+    * [HELIX-292] - ZNRecordStreamingSerializer should not assume id comes first
+
+    * [HELIX-296] - HelixConnection in 0.7.0 does not remove LiveInstance znode
+
+    * [HELIX-300] - Some files in 0.7.0 are missing license headers
+
+    * [HELIX-302] - fix helix version compare bug
+
+** Improvement
+
+    * [HELIX-37] - Cleanup CallbackHandler
+
+    * [HELIX-202] - Ideal state should be a full mapping, not just a set of instance preferences
+
+** Task
+
+    * [HELIX-109] - Review Helix model package
+
+    * [HELIX-174] - Clean up ideal state calculators, move them to the controller rebalancer package
+
+    * [HELIX-212] - Rebalancer interface should have 1 function to compute the entire ideal state
+
+    * [HELIX-232] - Validation of 0.7.0
+
+    * [HELIX-290] - Ensure 0.7.0 can respond correctly to ideal state changes
+
+    * [HELIX-295] - Upgrade or remove xstream dependency
+
+    * [HELIX-301] - Update integration test utils for 0.7.0
+
+** Test
+
+    * [HELIX-286] - add a test for redefine state model definition
+
+  []
+
+  Cheers,
+  --
+  The Apache Helix Team

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/apt/releasing.apt
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/apt/releasing.apt b/site-releases/0.7.0-incubating/src/site/apt/releasing.apt
new file mode 100644
index 0000000..11d0cd9
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/apt/releasing.apt
@@ -0,0 +1,107 @@
+ -----
+ Helix release process
+ -----
+ -----
+ 2012-12-15
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one
+~~ or more contributor license agreements.  See the NOTICE file
+~~ distributed with this work for additional information
+~~ regarding copyright ownership.  The ASF licenses this file
+~~ to you under the Apache License, Version 2.0 (the
+~~ "License"); you may not use this file except in compliance
+~~ with the License.  You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing,
+~~ software distributed under the License is distributed on an
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+~~ KIND, either express or implied.  See the License for the
+~~ specific language governing permissions and limitations
+~~ under the License.
+
+~~ NOTE: For help with the syntax of this file, see:
+~~ http://maven.apache.org/guides/mini/guide-apt-format.html
+
+Helix release process
+
+ [[1]] Post to the dev list a few days before you plan to do a Helix release.
+
+ [[2]] Your Maven settings must contain the following entry to be able to deploy.
+
+ ~/.m2/settings.xml
+
++-------------
+   <server>
+     <id>apache.releases.https</id>
+     <username></username>
+     <password></password>
+   </server>
++-------------
+
+ [[3]] Apache DAV passwords
+
+ Add the following info into your ~/.netrc:
+
++-------------
+ machine git-wip-us.apache.org login <apache username> <password>
++-------------
+ [[4]] Release Helix
+    You should have a GPG agent running in the session in which you will run the Maven release commands (preferred). Confirm it works by running "gpg -ab" (type some text and press Ctrl-D).
+    If you do not have a GPG agent running, make sure that you have the "apache-release" profile set in your settings.xml as shown below.
+
+   Run the release
+
++-------------
+mvn release:prepare release:perform -B
++-------------
+
+  GPG configuration in maven settings xml:
+
++-------------
+<profile>
+  <id>apache-release</id>
+  <properties>
+    <gpg.passphrase>[GPG_PASSWORD]</gpg.passphrase>
+  </properties>
+</profile>
++-------------
+
+ [[5]] Go to https://repository.apache.org and close your staged repository. Note the repository URL (of the form https://repository.apache.org/content/repositories/orgapachehelix-019/org/apache/helix/helix/0.6-incubating/)
+
++-------------
+svn co https://dist.apache.org/repos/dist/dev/incubator/helix helix-dev-release
+cd helix-dev-release
+sh ./release-script-svn.sh version stagingRepoUrl
+svn add <new directory created with new version as name>
+svn ci
++-------------
+
+ [[6]] Validating the release
+
++-------------
+  * Download sources, extract, build, and run tests - mvn clean package
+  * Verify license headers - mvn -Prat -DskipTests
+  * Download binaries and .asc files
+  * Download the release manager's public key - From the KEYS file, get the release manager's public key fingerprint and run  gpg --keyserver pgpkeys.mit.edu --recv-key <key>
+  * Validate authenticity of the key - run  gpg --fingerprint <key>
+  * Check signatures of all the binaries using gpg <binary>
++-------------
+
+ [[7]] Call for a vote on the dev list and wait 72 hrs. for the vote results. 3 binding votes are necessary for the release to be finalized.
+  After the vote has passed, move the files from dist dev to dist release: svn mv https://dist.apache.org/repos/dist/dev/incubator/helix/<version> https://dist.apache.org/repos/dist/release/incubator/helix/
+
+ [[8]] Prepare the release notes. Add a page in src/site/apt/releasenotes/ and change the value of \<currentRelease> in the parent pom.
+
+
+ [[9]] Send out an announcement of the release to:
+
+  * users@helix.incubator.apache.org
+
+  * dev@helix.incubator.apache.org
+
+ [[10]] Celebrate!
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/Architecture.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/Architecture.md b/site-releases/0.7.0-incubating/src/site/markdown/Architecture.md
new file mode 100644
index 0000000..933e917
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/Architecture.md
@@ -0,0 +1,252 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Architecture</title>
+</head>
+
+Architecture
+----------------------------
+Helix aims to provide the following abilities to a distributed system:
+
+* Automatic management of a cluster hosting partitioned, replicated resources.
+* Soft and hard failure detection and handling.
+* Automatic load balancing via smart placement of resources on servers (nodes) based on server capacity and resource profile (size of partition, access patterns, etc.).
+* Centralized config management and self discovery. Eliminates the need to modify config on each node.
+* Fault tolerance and optimized rebalancing during cluster expansion.
+* Manages the entire operational lifecycle of a node: addition, start, stop, enable/disable without downtime.
+* Monitor cluster health and provide alerts on SLA violation.
+* Service discovery mechanism to route requests.
+
+To build such a system, we need a mechanism to coordinate between the different nodes and other components in the system. This mechanism can be achieved with software that reacts to any change in the cluster and comes up with the set of tasks needed to bring the cluster to a stable state. The set of tasks will be assigned to one or more nodes in the cluster. Helix serves this purpose of managing the various components in the cluster.
+
+![Helix Design](images/system.png)
+
+Distributed System Components
+-----------------------------
+
+In general, any distributed system cluster will have the following components and properties:
+
+* A set of nodes, also referred to as instances.
+* A set of resources, which can be databases, Lucene indexes, or tasks.
+* Each resource is partitioned into one or more partitions.
+* Each partition may have one or more copies, called replicas.
+* Each replica can have a state associated with it. For example: Master, Slave, Leader, Standby, Online, Offline, etc.
+
+Roles
+-----
+
+![Helix Design](images/HELIX-components.png)
+
+Not all nodes in a distributed system will perform similar functionalities. For example, a few nodes might be serving requests and a few nodes might be sending requests, and some nodes might be controlling the nodes in the cluster. Thus, Helix categorizes nodes by their specific roles in the system.
+
+We have divided Helix nodes into 3 logical components based on their responsibilities:
+
+1. Participant: The nodes that actually host the distributed resources.
+2. Spectator: The nodes that simply observe the Participant state and route the request accordingly. Routers, for example, need to know the instance on which a partition is hosted and its state in order to route the request to the appropriate end point.
+3. Controller: The controller observes and controls the Participant nodes. It is responsible for coordinating all transitions in the cluster and ensuring that state constraints are satisfied and cluster stability is maintained. 
+
+These are simply logical components and can be deployed as per the system requirements. For example:
+
+1. The controller can be deployed as a separate service
+2. The controller can be deployed along with a Participant but only one Controller will be active at any given time.
+
+Both have pros and cons, which will be discussed later; one can choose the mode of deployment as per system needs.
+
+
+## Cluster state metadata store
+
+We need a distributed store to maintain the state of the cluster and a notification system to notify if there is any change in the cluster state. Helix uses Zookeeper to achieve this functionality.
+
+Zookeeper provides:
+
+* A way to represent PERSISTENT state, which remains until it is deleted.
+* A way to represent TRANSIENT/EPHEMERAL state, which vanishes when the process that created the state dies.
+* A notification mechanism for changes in PERSISTENT and EPHEMERAL state.
+
+The namespace provided by ZooKeeper is much like that of a standard file system. A name is a sequence of path elements separated by a slash (/). Every node [ZNode] in ZooKeeper\'s namespace is identified by a path.
+
+More info on Zookeeper can be found at http://zookeeper.apache.org
+
+## State machine and constraints
+
+Even though the concepts of Resources, Partitions, and Replicas are common to most distributed systems, one thing that differentiates one distributed system from another is the way each partition is assigned a state and the constraints on each state.
+
+For example:
+
+1. If a system is serving read-only data, then all of a partition\'s replicas are equal and they can either be ONLINE or OFFLINE.
+2. If a system takes _both_ reads and writes but ensures that writes go through only one partition, the states will be MASTER, SLAVE, and OFFLINE. Writes go through the MASTER and replicate to the SLAVEs. Optionally, reads can go through SLAVEs.
+
+Apart from defining the state for each partition, the transition path to each state can be application specific. For example, in order to become MASTER it might be a requirement to first become a SLAVE. This ensures that if the SLAVE does not have the data, it can bootstrap the data from other nodes in the system as part of the OFFLINE-SLAVE transition.
+
+Helix provides a way to configure an application specific state machine along with constraints on each state. Along with constraints on STATE, Helix also provides a way to specify constraints on transitions.  (More on this later.)
+
+```
+          OFFLINE  | SLAVE  |  MASTER  
+         _____________________________
+        |          |        |         |
+OFFLINE |   N/A    | SLAVE  | SLAVE   |
+        |__________|________|_________|
+        |          |        |         |
+SLAVE   |  OFFLINE |   N/A  | MASTER  |
+        |__________|________|_________|
+        |          |        |         |
+MASTER  | SLAVE    | SLAVE  |   N/A   |
+        |__________|________|_________|
+
+```
+
+![Helix Design](images/statemachine.png)
+
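+As a hedged sketch of what such a configuration can look like in code, the snippet below defines the MasterSlave-style machine above using Helix\'s StateModelDefinition builder. The state model name and the priority values are illustrative, not prescribed.
+
+```
+import org.apache.helix.model.StateModelDefinition;
+
+StateModelDefinition.Builder builder = new StateModelDefinition.Builder("MyMasterSlave");
+
+// States, in priority order: Helix satisfies higher-priority states first
+builder.addState("MASTER", 1);
+builder.addState("SLAVE", 2);
+builder.addState("OFFLINE", 3);
+builder.initialState("OFFLINE");
+
+// Legal transitions, mirroring the table above
+builder.addTransition("OFFLINE", "SLAVE");
+builder.addTransition("SLAVE", "MASTER");
+builder.addTransition("MASTER", "SLAVE");
+builder.addTransition("SLAVE", "OFFLINE");
+
+// Constraints on states: at most 1 MASTER; the SLAVE count follows the replica count "R"
+builder.upperBound("MASTER", 1);
+builder.dynamicUpperBound("SLAVE", "R");
+
+StateModelDefinition stateModelDef = builder.build();
+```
+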
+## Concepts
+
+The following terminologies are used in Helix to model a state machine.
+
+* IdealState: The state in which we need the cluster to be in if all nodes are up and running. In other words, all state constraints are satisfied.
+* CurrentState: Represents the actual current state of each node in the cluster 
+* ExternalView: Represents the combined view of CurrentState of all nodes.  
+
+The goal of Helix is always to make the CurrentState of the system the same as the IdealState. Some scenarios where this may not be true are:
+
+* When all nodes are down
+* When one or more nodes fail
+* When new nodes are added and the partitions need to be reassigned
+
+### IdealState
+
+Helix lets the application define the IdealState for each resource, which consists of:
+
+* List of partitions. Example: 64
+* Number of replicas for each partition. Example: 3
+* Node and State for each replica.
+
+Example:
+
+* Partition-1, replica-1, Master, Node-1
+* Partition-1, replica-2, Slave, Node-2
+* Partition-1, replica-3, Slave, Node-3
+* .....
+* .....
+* Partition-p, replica-3, Slave, Node-n
+
+Helix comes with various algorithms to automatically assign the partitions to nodes. The default algorithm minimizes the number of shuffles that happen when new nodes are added to the system.
+
+### CurrentState
+
+Every instance in the cluster hosts one or more partitions of a resource. Each of the partitions has a state associated with it.
+
+Example Node-1
+
+* Partition-1, Master
+* Partition-2, Slave
+* ....
+* ....
+* Partition-p, Slave
+
+### ExternalView
+
+External clients need to know the state of each partition in the cluster and the node hosting that partition. Helix provides one view of the system to Spectators as _ExternalView_. ExternalView is simply an aggregate of all node CurrentStates.
+
+* Partition-1, replica-1, Master, Node-1
+* Partition-1, replica-2, Slave, Node-2
+* Partition-1, replica-3, Slave, Node-3
+* .....
+* .....
+* Partition-p, replica-3, Slave, Node-n
+
+## Process Workflow
+
+Mode of operation in a cluster
+
+A node process can be one of the following:
+
+* Participant: The process registers itself in the cluster and acts on the messages received in its queue and updates the current state.  Example: a storage node in a distributed database
+* Spectator: The process is simply interested in the changes in the ExternalView.
+* Controller: This process actively controls the cluster by reacting to changes in cluster state and sending messages to Participants.
+
+
+### Participant Node Process
+
+* When a node starts up, it registers itself under _LiveInstances_
+* After registering, it waits for new _Messages_ in the message queue
+* When it receives a message, it will perform the required task as indicated in the message
+* After the task is completed, depending on the task outcome it updates the CurrentState
+
+### Controller Process
+
+* Watches IdealState
+* Notified when a node goes down/comes up or node is added/removed. Watches LiveInstances and CurrentState of each node in the cluster
+* Triggers appropriate state transitions by sending message to Participants
+
+### Spectator Process
+
+* When the process starts, it asks the Helix agent to be notified of changes in ExternalView
+* Whenever it receives a notification, it reads the ExternalView and performs the required duties, as shown in the sketch below
+
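+For example, a Spectator can use Helix\'s RoutingTableProvider helper, which keeps an up-to-date routing table from ExternalView changes. The snippet below is a hedged sketch; the cluster, instance, resource, and ZooKeeper address are illustrative.
+
+```
+import java.util.List;
+import org.apache.helix.HelixManager;
+import org.apache.helix.HelixManagerFactory;
+import org.apache.helix.InstanceType;
+import org.apache.helix.model.InstanceConfig;
+import org.apache.helix.spectator.RoutingTableProvider;
+
+// Connect as a SPECTATOR and register for ExternalView changes
+HelixManager manager = HelixManagerFactory.getZKHelixManager("test-cluster", "router-1",
+    InstanceType.SPECTATOR, "localhost:2199");
+manager.connect();
+
+RoutingTableProvider routingTable = new RoutingTableProvider();
+manager.addExternalViewChangeListener(routingTable);
+
+// Route a request: find the instance(s) hosting MyResource_0 in the MASTER state
+List<InstanceConfig> masters = routingTable.getInstances("MyResource", "MyResource_0", "MASTER");
+```
+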
+#### Interaction between controller, participant and spectator
+
+The following picture shows how controllers, participants and spectators interact with each other.
+
+![Helix Architecture](images/helix-architecture.png)
+
+## Core algorithm
+
+* Controller gets the IdealState and the CurrentState of active storage nodes from Zookeeper
+* Compute the delta between IdealState and CurrentState for each partition across all participant nodes
+* For each partition compute the tasks based on the state machine table. It\'s possible to configure a priority on the state transitions. For example, in case of Master-Slave:
+    * Attempt mastership transfer if possible without violating constraints.
+    * Partition addition
+    * Drop partition
+* Add the tasks, in parallel if possible, to the respective queue for each storage node (if the tasks added are mutually independent)
+* If a task is dependent on another task being completed, do not add that task
+* After any task is completed by a Participant, the controller gets notified of the change and the state transition algorithm is re-run until the CurrentState is the same as the IdealState.
+
+## Helix ZNode layout
+
+Helix organizes znodes under the cluster name in multiple levels.
+
+The top level (under the cluster name) ZNodes are all Helix-defined and in upper case:
+
+* PROPERTYSTORE: application property store
+* STATEMODELDEFS: state model definitions
+* INSTANCES: instance runtime information including current state and messages
+* CONFIGS: configurations
+* IDEALSTATES: ideal states
+* EXTERNALVIEW: external views
+* LIVEINSTANCES: live instances
+* CONTROLLER: cluster controller runtime information
+
+Under INSTANCES, there are runtime ZNodes for each instance. An instance organizes ZNodes as follows:
+
+* CURRENTSTATES
+    * sessionId
+    * resourceName
+* ERRORS
+* STATUSUPDATES
+* MESSAGES
+* HEALTHREPORT
+
+Under CONFIGS, there are different scopes of configurations:
+
+* RESOURCE: contains resource scope configurations
+* CLUSTER: contains cluster scope configurations
+* PARTICIPANT: contains participant scope configurations
+
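+One way to inspect this layout directly is to list the children of the cluster\'s root znode with the ZkClient library bundled with Helix. This is a hedged sketch; the ZooKeeper address and cluster name are illustrative.
+
+```
+import java.util.List;
+import org.I0Itec.zkclient.ZkClient;
+
+// List the top-level Helix znodes for a cluster
+ZkClient zkClient = new ZkClient("localhost:2199");
+List<String> topLevel = zkClient.getChildren("/test-cluster");
+System.out.println(topLevel); // e.g. [PROPERTYSTORE, STATEMODELDEFS, INSTANCES, ...]
+zkClient.close();
+```
+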
+The following image shows an example of Helix znodes layout for a cluster named "test-cluster":
+
+![Helix znode layout](images/helix-znode-layout.png)

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/Building.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/Building.md b/site-releases/0.7.0-incubating/src/site/markdown/Building.md
new file mode 100644
index 0000000..06046d5
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/Building.md
@@ -0,0 +1,46 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Build Instructions
+------------------
+
+Requirements: JDK 1.6+, Maven 2.0.8+
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+git checkout tags/helix-0.7.0-incubating
+mvn install package -DskipTests
+```
+
+Maven dependency
+
+```
+<dependency>
+  <groupId>org.apache.helix</groupId>
+  <artifactId>helix-core</artifactId>
+  <version>0.7.0-incubating</version>
+</dependency>
+```
+
+Download
+--------
+
+[0.7.0-incubating](./download.html)
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/Concepts.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/Concepts.md b/site-releases/0.7.0-incubating/src/site/markdown/Concepts.md
new file mode 100644
index 0000000..fa5d0ba
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/Concepts.md
@@ -0,0 +1,275 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Concepts</title>
+</head>
+
+Concepts
+----------------------------
+
+Helix is based on the idea that a given task has the following attributes associated with it:
+
+* _Location of the task_. For example, it runs on node N1
+* _State_. For example, it is running, stopped, etc.
+
+In Helix terminology, a task is referred to as a _resource_.
+
+### IdealState
+
+IdealState simply allows one to map tasks to location and state. A standard way of expressing this in Helix:
+
+```
+  "TASK_NAME" : {
+    "LOCATION" : "STATE"
+  }
+
+```
+Consider a simple case where you want to launch a task \'myTask\' on node \'N1\'. The IdealState for this can be expressed as follows:
+
+```
+{
+  "id" : "MyTask",
+  "mapFields" : {
+    "myTask" : {
+      "N1" : "ONLINE"
+    }
+  }
+}
+```
+### Partition
+
+If this task gets too big to fit on one box, you might want to divide it into subtasks. Each subtask is referred to as a _partition_ in Helix. If you want to divide the task into 3 subtasks/partitions, the IdealState can be changed as shown below.
+
+\'myTask_0\', \'myTask_1\', \'myTask_2\' are logical names representing the partitions of myTask. Each partition runs on N1, N2, and N3, respectively.
+
+```
+{
+  "id" : "myTask",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3"
+  },
+  "mapFields" : {
+    "myTask_0" : {
+      "N1" : "ONLINE"
+    },
+    "myTask_1" : {
+      "N2" : "ONLINE"
+    },
+    "myTask_2" : {
+      "N3" : "ONLINE"
+    }
+  }
+}
+```
+
+### Replica
+
+Partitioning allows one to split the data/task into multiple subparts. But let\'s say the request rate for each partition increases. The common solution is to have multiple copies for each partition. Helix refers to the copy of a partition as a _replica_.  Adding a replica also increases the availability of the system during failures. One can see this methodology employed often in search systems. The index is divided into shards, and each shard has multiple copies.
+
+Let\'s say you want to add one additional replica for each task. The IdealState can simply be changed as shown below. 
+
+To increase the availability of the system, it\'s better to place the replicas of a given partition on different nodes.
+
+```
+{
+  "id" : "myIndex",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2"
+  },
+  "mapFields" : {
+    "myIndex_0" : {
+      "N1" : "ONLINE",
+      "N2" : "ONLINE"
+    },
+    "myIndex_1" : {
+      "N2" : "ONLINE",
+      "N3" : "ONLINE"
+    },
+    "myIndex_2" : {
+      "N3" : "ONLINE",
+      "N1" : "ONLINE"
+    }
+  }
+}
+```
+
+### State 
+
+Now let\'s take a slightly more complicated scenario where a task represents a database.  Unlike an index which is in general read-only, a database supports both reads and writes. Keeping the data consistent among the replicas is crucial in distributed data stores. One commonly applied technique is to assign one replica as the MASTER and remaining replicas as SLAVEs. All writes go to the MASTER and are then replicated to the SLAVE replicas.
+
+Helix allows one to assign different states to each replica. Let\'s say you have two MySQL instances, N1 and N2, where one will serve as MASTER and the other as SLAVE. The IdealState can be changed to:
+
+```
+{
+  "id" : "myDB",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "1",
+    "REPLICAS" : "2"
+  },
+  "mapFields" : {
+    "myDB" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE"
+    }
+  }
+}
+
+```
+
+
+### State Machine and Transitions
+
+IdealState allows one to exactly specify the desired state of the cluster. Given an IdealState, Helix takes up the responsibility of ensuring that the cluster reaches the IdealState.  The Helix _controller_ reads the IdealState and then commands each Participant to take appropriate actions to move from one state to another until it matches the IdealState.  These actions are referred to as _transitions_ in Helix.
+
+The next logical question is:  how does the _controller_ compute the transitions required to get to IdealState?  This is where the finite state machine concept comes in. Helix allows applications to plug in a finite state machine.  A state machine consists of the following:
+
+* State: Describes the role of a replica
+* Transition: An action that allows a replica to move from one state to another, thus changing its role.
+
+Here is an example of a MasterSlave state machine:
+
+```
+          OFFLINE  | SLAVE  |  MASTER  
+         _____________________________
+        |          |        |         |
+OFFLINE |   N/A    | SLAVE  | SLAVE   |
+        |__________|________|_________|
+        |          |        |         |
+SLAVE   |  OFFLINE |   N/A  | MASTER  |
+        |__________|________|_________|
+        |          |        |         |
+MASTER  | SLAVE    | SLAVE  |   N/A   |
+        |__________|________|_________|
+
+```
+
+Helix allows each resource to be associated with one state machine. This means you can have one resource as an index and another as a database in the same cluster. One can associate each resource with a state machine as follows:
+
+```
+{
+  "id" : "myDB",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "1",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave"
+  },
+  "mapFields" : {
+    "myDB" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE"
+    }
+  }
+}
+
+```
+
+### Current State
+
+CurrentState of a resource simply represents its actual state at a Participant. In the below example:
+
+* INSTANCE_NAME: Unique name representing the process
+* SESSION_ID: ID that is automatically assigned every time a process joins the cluster
+
+```
+{
+  "id":"MyResource"
+  ,"simpleFields":{
+    "SESSION_ID":"13d0e34675e0002"
+    ,"INSTANCE_NAME":"node1"
+    ,"STATE_MODEL_DEF":"MasterSlave"
+  }
+  ,"mapFields":{
+    "MyResource_0":{
+      "CURRENT_STATE":"SLAVE"
+    }
+    ,"MyResource_1":{
+      "CURRENT_STATE":"MASTER"
+    }
+    ,"MyResource_2":{
+      "CURRENT_STATE":"MASTER"
+    }
+  }
+}
+```
+Each node in the cluster has its own CurrentState.
+
+### External View
+
+In order to communicate with the Participants, external clients need to know the current state of each of the Participants. The external clients are referred to as Spectators. In order to make the life of a Spectator simple, Helix provides an ExternalView, an aggregated view of the current state across all nodes. The ExternalView has a format similar to IdealState.
+
+```
+{
+  "id":"MyResource",
+  "mapFields":{
+    "MyResource_0":{
+      "N1":"SLAVE",
+      "N2":"MASTER",
+      "N3":"OFFLINE"
+    },
+    "MyResource_1":{
+      "N1":"MASTER",
+      "N2":"SLAVE",
+      "N3":"ERROR"
+    },
+    "MyResource_2":{
+      "N1":"MASTER",
+      "N2":"SLAVE",
+      "N3":"SLAVE"
+    }
+  }
+}
+```
+
+### Rebalancer
+
+The core component of Helix is the Controller, which runs the Rebalancer algorithm on every cluster event. Cluster events can be one of the following:
+
+* Nodes start/stop and soft/hard failures
+* New nodes are added/removed
+* Ideal state changes
+
+There are a few more triggers, such as configuration changes. The key takeaway: there are many ways to trigger the rebalancer.
+
+When a rebalancer is run, it simply does the following:
+
+* Compares the IdealState and current state
+* Computes the transitions required to reach the IdealState
+* Issues the transitions to each Participant
+
+The above steps happen for every change in the system. Once the current state matches the IdealState, the system is considered stable, which implies \'IdealState = CurrentState = ExternalView\'.
+
+### Dynamic IdealState
+
+One of the things that makes Helix powerful is that IdealState can be changed dynamically. This means one can listen to cluster events like node failures and dynamically change the ideal state. Helix will then take care of triggering the respective transitions in the system.
+
+Helix comes with a few algorithms to automatically compute the IdealState based on the constraints. For example, if you have a resource of 3 partitions and 2 replicas, Helix can automatically compute the IdealState based on the nodes that are currently active. See the [tutorial](./tutorial_rebalance.html) to find out more about various execution modes of Helix like FULL_AUTO, SEMI_AUTO and CUSTOMIZED. 
+
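+For instance, an operator or an automated script can trigger this recomputation through the admin API. The following is a hedged sketch; the ZooKeeper address, cluster, and resource names are illustrative.
+
+```
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.manager.zk.ZKHelixAdmin;
+
+// Recompute the IdealState for a resource across the currently known instances
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+admin.rebalance("test-cluster", "myIndex", 2); // 2 replicas per partition
+```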
+
+
+
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/Features.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/Features.md b/site-releases/0.7.0-incubating/src/site/markdown/Features.md
new file mode 100644
index 0000000..ba9d0e7
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/Features.md
@@ -0,0 +1,313 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Features</title>
+</head>
+
+Features
+----------------------------
+
+
+### CONFIGURING IDEALSTATE
+
+
+Read the Concepts page for the definition of IdealState.
+
+The placement of partitions in a distributed data store (DDS) is critical for the reliability and scalability of the system.
+For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can guarantee this.
+Helix by default comes with a variant of consistent hashing based on the RUSH algorithm.
+
+This means that given the number of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
+
+* Each node has the same number of partitions, and replicas of the same partition do not stay on the same node.
+* When a node fails, the partitions will be equally distributed among the remaining nodes
+* When new nodes are added, the number of partitions moved will be minimized while satisfying the above two criteria.
+
+
+Helix provides multiple ways to control the placement and state of a replica. 
+
+```
+
+            |AUTO REBALANCE|   AUTO     |   CUSTOM  |       
+            -----------------------------------------
+   LOCATION | HELIX        |  APP       |  APP      |
+            -----------------------------------------
+      STATE | HELIX        |  HELIX     |  APP      |
+            -----------------------------------------
+```
+
+#### HELIX EXECUTION MODE 
+
+
+Idealstate is defined as the state of the DDS when all nodes are up and running and healthy. 
+Helix uses this as the target state of the system and computes the appropriate transitions needed in the system to bring it to a stable state. 
+
+Helix supports 3 different execution modes, which allow the application to explicitly control the placement and state of the replicas.
+
+##### AUTO_REBALANCE
+
+When the idealstate mode is set to AUTO_REBALANCE, Helix controls both the location and the state of the replicas. This option is useful for applications where creation of a replica is not expensive. For example:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO_REBALANCE",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave"
+  },
+  "listFields" : {
+    "MyResource_0" : [],
+    "MyResource_1" : [],
+    "MyResource_2" : []
+  },
+  "mapFields" : {
+  }
+}
+```
+
+If there are 3 nodes in the cluster, then Helix will internally compute the ideal state as 
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave"
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE"
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE"
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE"
+    }
+  }
+}
+```
+
+Another typical example is evenly distributing a group of tasks among the currently alive processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node.
+When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node. A programmatic sketch of this mode follows below.
+
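+As a hedged illustration, a resource can be created in this mode through the admin API. This sketch assumes the addResource overload that takes the mode as a string; the address and names are illustrative.
+
+```
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.manager.zk.ZKHelixAdmin;
+
+// Create a resource whose placement and state are both computed by Helix,
+// then compute the assignment for 2 replicas per partition
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+admin.addResource("test-cluster", "MyResource", 3, "MasterSlave", "AUTO_REBALANCE");
+admin.rebalance("test-cluster", "MyResource", 2);
+```
+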
+##### AUTO
+
+When the idealstate mode is set to AUTO, Helix only controls the STATE of the replicas, whereas the location of each partition is controlled by the application. Example: the idealstate below indicates that 'MyResource_0' must be only on node1 and node2, but gives Helix the control of assigning the STATE.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave"
+  },
+  "listFields" : {
+    "MyResource_0" : ["node1", "node2"],
+    "MyResource_1" : ["node2", "node3"],
+    "MyResource_2" : ["node3", "node1"]
+  },
+  "mapFields" : {
+  }
+}
+```
+In this mode, when node1 fails, unlike in AUTO_REBALANCE mode the partition is not moved from node1 to other nodes in the cluster. Instead, Helix will decide to change the state of MyResource_0 on node2 based on the system constraints. For example, if a system constraint specifies that there should be 1 master, and the master fails, then node2 will be made the new master.
+
+##### CUSTOM
+
+Helix offers a third mode called CUSTOM, in which the application can completely control the placement and state of each replica. Applications will have to implement an interface that Helix will invoke when the cluster state changes.
+Within this callback, the application can recompute the idealstate. Helix will then issue appropriate transitions such that IdealState and CurrentState converge.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "IDEAL_STATE_MODE" : "CUSTOM",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave"
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE"
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE"
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE"
+    }
+  }
+}
+```
+
+For example, the current state of the system might be 'MyResource_0' -> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE, N2:MASTER}. Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters.
+Helix will first issue MASTER-->SLAVE to N1, and after it is completed, it will issue SLAVE-->MASTER to N2.
+ 
+
+### State Machine Configuration
+
+Helix comes with 3 default state models that are most commonly used. It\'s possible to have multiple state models in a cluster.
+Every resource that is added should have a reference to the state model.
+
+* MASTER-SLAVE: Has 3 states: OFFLINE, SLAVE, MASTER. The maximum number of masters is 1. The number of slaves is based on the replication factor, which can be specified while adding the resource.
+* ONLINE-OFFLINE: Has 2 states: OFFLINE and ONLINE. A very simple state model; most applications start off with it.
+* LEADER-STANDBY: 1 leader and many standbys. In general, the standbys are idle.
+
+Apart from providing the state machine configuration, one can specify the constraints of states and transitions.
+
+For example, one can say:
+
+* MASTER: 1 - the maximum number of replicas in the MASTER state at any time is 1
+* OFFLINE-SLAVE: 5 - the maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5
+
+#### STATE PRIORITY
+
+Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 master and 2 slaves but only 1 node is active, Helix must promote it to master. This behavior is achieved by providing the state priority list as MASTER, SLAVE.
+
+#### STATE TRANSITION PRIORITY
+
+Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints.
+One can control this by overriding the priority order.
+ 
+### Config management
+
+Helix allows applications to store application specific properties. The configuration can have different scopes.
+
+* Cluster
+* Node specific
+* Resource specific
+* Partition specific
+
+Helix also provides notifications when any configs are changed. This allows applications to support dynamic configuration changes.
+
+See HelixManager.getConfigAccessor for more info
+
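+As a hedged sketch of reading and writing such a property at cluster scope (assuming the ConfigAccessor and HelixConfigScope APIs; the cluster name and key are illustrative, and "manager" is a connected HelixManager):
+
+```
+import org.apache.helix.ConfigAccessor;
+import org.apache.helix.model.HelixConfigScope;
+import org.apache.helix.model.HelixConfigScope.ConfigScopeProperty;
+import org.apache.helix.model.builder.HelixConfigScopeBuilder;
+
+// Store and read an application-specific property at cluster scope
+ConfigAccessor configAccessor = manager.getConfigAccessor();
+HelixConfigScope scope =
+    new HelixConfigScopeBuilder(ConfigScopeProperty.CLUSTER).forCluster("test-cluster").build();
+configAccessor.set(scope, "myAppProperty", "someValue");
+String value = configAccessor.get(scope, "myAppProperty");
+```
+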
+### Intra cluster messaging api
+
+This is an interesting feature that is quite useful in practice. Oftentimes, nodes in a DDS require a mechanism to interact with each other. One such requirement is the process of bootstrapping a replica.
+
+Consider a search system use case where an index replica starts up without an index. A commonly used solution is to get the index from a common location or to copy the index from another replica.
+Helix provides a messaging API that can be used to talk to other nodes in the system. The value Helix adds here is that the message recipients can be specified in terms of resource,
+partition, and state, and Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of P1.
+Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
+
+This is a very generic API and can also be used to schedule various periodic tasks in the cluster, like data backups.
+System admins can also perform ad hoc tasks, like an on-demand backup or executing a system command (like rm -rf ;-)) across all nodes.
+
+```
+      ClusterMessagingService messagingService = manager.getMessagingService();
+      //CONSTRUCT THE MESSAGE
+      Message requestBackupUriRequest = new Message(
+          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+      requestBackupUriRequest
+          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
+      requestBackupUriRequest.setMsgState(MessageState.NEW);
+      //SET THE RECIPIENT CRITERIA, All nodes that satisfy the criteria will receive the message
+      Criteria recipientCriteria = new Criteria();
+      recipientCriteria.setInstanceName("%");
+      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+      recipientCriteria.setResource("MyDB");
+      recipientCriteria.setPartition("");
+      //The message should be processed only by participants that were live when it was sent.
+      //This means if the recipient is restarted after the message is sent, it will not be processed.
+      recipientCriteria.setSessionSpecific(true);
+      // wait for 30 seconds
+      int timeout = 30000;
+      //The handler that will be invoked when any recipient responds to the message.
+      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
+      //This will return only after all recipients respond or after timeout.
+      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
+          requestBackupUriRequest, responseHandler, timeout);
+```
+
+See HelixManager.getMessagingService for more info.
+
+
+### Application specific property storage
+
+There are several use cases where applications need support for distributed data structures. Helix uses Zookeeper to store the application data and hence provides notifications when the data changes.
+One value add Helix provides is the ability to cache the data and write through the cache, which is more efficient than reading from ZK every time.
+
+See HelixManager.getHelixPropertyStore
+
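+A hedged sketch of writing and reading a record through the property store ("manager" is a connected HelixManager; the path and field names are illustrative):
+
+```
+import org.apache.helix.AccessOption;
+import org.apache.helix.ZNRecord;
+import org.apache.helix.store.HelixPropertyStore;
+
+// Write and read an application record through the property store
+HelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
+ZNRecord record = new ZNRecord("myAppData");
+record.setSimpleField("lastBackup", "2013-11-16");
+store.set("/myApp/backupInfo", record, AccessOption.PERSISTENT);
+ZNRecord readBack = store.get("/myApp/backupInfo", null, AccessOption.PERSISTENT);
+```
+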
+### Throttling
+
+Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some transitions may be lightweight, but some might involve moving data around, which is quite expensive.
+Helix allows applications to set thresholds on transitions. The threshold can be set at multiple scopes:
+
+* MessageType, e.g., STATE_TRANSITION
+* TransitionType, e.g., SLAVE-MASTER
+* Resource, e.g., database
+* Node, i.e., max transitions in parallel per node
+
+See HelixManager.getHelixAdmin.addMessageConstraint() 
+
+### Health monitoring and alerting
+
+This is currently in development and not yet production-ready.
+
+Helix provides the ability for each node in the system to report health metrics on a periodic basis.
+Helix supports multiple ways to aggregate these metrics, like simple SUM, AVG, EXPONENTIAL DECAY, and WINDOW. Helix will only persist the aggregated value.
+Applications can define thresholds on the aggregate values according to their SLAs, and when an SLA is violated Helix will fire an alert.
+Currently Helix only fires an alert, but eventually we plan to use these metrics to either mark a node dead or load balance the partitions.
+This feature will be valuable for distributed systems that support multi-tenancy and have huge variation in workload patterns. Another place this can be used is to detect skewed partitions and rebalance the cluster.
+
+This feature is not yet stable and is not recommended for use in production.
+
+
+### Controller deployment modes
+
+Read the Architecture wiki for more details on the role of a controller. In simple words, it controls the participants in the cluster by issuing transitions.
+
+Helix provides multiple options to deploy the controller.
+
+#### STANDALONE
+
+The controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since one controller can be a single point of failure, multiple controller processes are required for reliability.
+Even if multiple controllers are running, only one will be actively managing the cluster at any time, as decided by a leader election process. If the leader fails, another controller will resume managing the cluster.
+
+Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the Controller As a Service option below.
+
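+As a hedged sketch, a standalone controller can also be started programmatically via HelixControllerMain (the ZooKeeper address, cluster, and controller names are illustrative):
+
+```
+import org.apache.helix.controller.HelixControllerMain;
+
+// Start a standalone controller for one cluster; if this process loses
+// leadership, another controller instance takes over
+HelixControllerMain.startHelixController("localhost:2199", "test-cluster",
+    "controller-1", HelixControllerMain.STANDALONE);
+```
+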
+#### EMBEDDED
+
+If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants.
+
+#### CONTROLLER AS A SERVICE
+
+One of the cool features we added in Helix is the ability to use a set of controllers to manage a large number of clusters.
+For example, if you have X clusters to be managed, instead of deploying X*3 controllers (3 controllers per cluster for fault tolerance), one can deploy only 3 controllers. Each controller can manage X/3 clusters.
+If any controller fails, the remaining two will manage X/2 clusters each. At LinkedIn, we always deploy controllers in this mode.
+
+
+
+
+
+
+
+ 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/markdown/Quickstart.md
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/markdown/Quickstart.md b/site-releases/0.7.0-incubating/src/site/markdown/Quickstart.md
new file mode 100644
index 0000000..b4f095b
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/markdown/Quickstart.md
@@ -0,0 +1,626 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Quickstart</title>
+</head>
+
+Get Helix
+---------
+
+First, let\'s get Helix: either build it or download a release.
+
+### Build
+
+    git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+    cd incubator-helix
+    git checkout tags/helix-0.7.0-incubating
+    ./build
+    cd helix-core/target/helix-core-pkg/bin # This folder contains all the scripts used in the following sections
+    chmod +x *
+
+### Download
+
+Download the 0.7.0-incubating release package [here](./download.html) 
+
+Overview
+--------
+
+In this Quickstart, we\'ll set up a master-slave replicated, partitioned system.  Then we\'ll demonstrate how to add a node, rebalance the partitions, and show how Helix manages failover.
+
+
+Let\'s Do It
+------------
+
+Helix provides command line interfaces to set up the cluster and view the cluster state. The best way to understand how Helix views a cluster is to build a cluster.
+
+#### First, get to the tools directory
+
+If you built the code
+
+```
+cd incubator-helix/helix-core/target/helix-core-pkg/bin
+```
+
+If you downloaded the release package, extract it.
+
+
+Short Version
+-------------
+You can observe the components working together in this demo, which does the following:
+
+* Create a cluster
+* Add 2 nodes (participants) to the cluster
+* Set up a resource with 6 partitions and 2 replicas: 1 Master, and 1 Slave per partition
+* Show the cluster state after Helix balances the partitions
+* Add a third node
+* Show the cluster state.  Note that the third node has taken mastership of 2 partitions.
+* Kill the third node (Helix takes care of failover)
+* Show the cluster state.  Note that the two surviving nodes take over mastership of the partitions from the failed node
+
+##### Run the demo
+
+```
+cd incubator-helix/helix-core/target/helix-core-pkg/bin
+./quickstart.sh
+```
+
+##### 2 nodes are set up and the partitions rebalanced
+
+The cluster state is as follows:
+
+```
+CLUSTER STATE: After starting 2 nodes
+	                     localhost_12000	localhost_12001	
+	       MyResource_0	M			S		
+	       MyResource_1	S			M		
+	       MyResource_2	M			S		
+	       MyResource_3	M			S		
+	       MyResource_4	S			M  
+	       MyResource_5	S			M  
+```
+
+Note there is one master and one slave per partition.
+
+##### A third node is added and the cluster rebalanced
+
+The cluster state changes to:
+
+```
+CLUSTER STATE: After adding a third node
+                 	       localhost_12000	    localhost_12001	localhost_12002	
+	       MyResource_0	    S			  M		      S		
+	       MyResource_1	    S			  S		      M	 
+	       MyResource_2	    M			  S	              S  
+	       MyResource_3	    S			  S                   M  
+	       MyResource_4	    M			  S	              S  
+	       MyResource_5	    S			  M                   S  
+```
+
+Note there is one master and _two_ slaves per partition.  This is expected because there are three nodes.
+
+##### Finally, a node is killed to simulate a failure
+
+Helix makes sure each partition has a master.  The cluster state changes to:
+
+```
+CLUSTER STATE: After the 3rd node stops/crashes
+                	       localhost_12000	  localhost_12001	localhost_12002	
+	       MyResource_0	    S			M		      -		
+	       MyResource_1	    S			M		      -	 
+	       MyResource_2	    M			S	              -  
+	       MyResource_3	    M			S                     -  
+	       MyResource_4	    M			S	              -  
+	       MyResource_5	    S			M                     -  
+```
+
+
+Long Version
+------------
+Now you can run the same steps by hand.  In the detailed version, we\'ll do the following:
+
+* Define a cluster
+* Add two nodes to the cluster
+* Add a 6-partition resource with 1 master and 2 slave replicas per partition
+* Verify that the cluster is healthy and inspect the Helix view
+* Expand the cluster: add a few nodes and rebalance the partitions
+* Failover: stop a node and verify the mastership transfer
+
+### Install and Start Zookeeper
+
+Zookeeper can be started in standalone mode or replicated mode.
+
+More info is available at 
+
+* http://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html
+* http://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html#sc_zkMulitServerSetup
+
+In this example, let\'s start zookeeper in local mode.
+
+##### start zookeeper locally on port 2199
+
+    ./start-standalone-zookeeper.sh 2199 &
+
+### Define the Cluster
+
+The helix-admin tool is used for cluster administration tasks. In the Quickstart, we\'ll use the command line interface. Helix supports a REST interface as well.
+
+zookeeper_address is of the format host:port, e.g., localhost:2199 for standalone, or host1:port,host2:port for multi-node.
+
+Next, we\'ll set up a cluster named MYCLUSTER with these attributes:
+
+* 3 instances running on localhost at ports 12913,12914,12915 
+* One database named myDB with 6 partitions 
+* Each partition will have 3 replicas with 1 master, 2 slaves
+* zookeeper running locally at localhost:2199
+
+##### Create the cluster MYCLUSTER
+    ## helix-admin.sh --zkSvr <zk_address> --addCluster <clustername> 
+    ./helix-admin.sh --zkSvr localhost:2199 --addCluster MYCLUSTER 
+
+##### Add nodes to the cluster
+
+In this case we\'ll add three nodes: localhost:12913, localhost:12914, localhost:12915
+
+    ## helix-admin.sh --zkSvr <zk_address>  --addNode <clustername> <host:port>
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12913
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12914
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12915
+
+#### Define the resource and partitioning
+
+In this example, the resource is a database, partitioned 6 ways.  (In a production system, it\'s common to over-partition for better load balancing.  Helix has been used in production to manage hundreds of databases each with 10s or 100s of partitions running on 10s of physical nodes.)
+
+##### Create a database with 6 partitions using the MasterSlave state model. 
+
+Helix ensures there will be exactly one master for each partition.
+
+    ## helix-admin.sh --zkSvr <zk_address> --addResource <clustername> <resourceName> <numPartitions> <StateModelName>
+    ./helix-admin.sh --zkSvr localhost:2199 --addResource MYCLUSTER myDB 6 MasterSlave
+   
+##### Now we can let Helix assign partitions to nodes. 
+
+This command will distribute the partitions amongst all the nodes in the cluster. In this example, each partition has 3 replicas.
+
+    ## helix-admin.sh --zkSvr <zk_address> --rebalance <clustername> <resourceName> <replication factor>
+    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+
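+All of the setup above can also be done programmatically through the admin API. This is a hedged sketch mirroring the commands above, assuming the string-based InstanceConfig constructor; only one node addition is shown.
+
+```
+import org.apache.helix.HelixAdmin;
+import org.apache.helix.manager.zk.ZKHelixAdmin;
+import org.apache.helix.model.InstanceConfig;
+
+// Programmatic equivalent of the helix-admin.sh commands above
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+admin.addCluster("MYCLUSTER");
+
+// Add one node; repeat for ports 12914 and 12915
+InstanceConfig instanceConfig = new InstanceConfig("localhost_12913");
+instanceConfig.setHostName("localhost");
+instanceConfig.setPort("12913");
+admin.addInstance("MYCLUSTER", instanceConfig);
+
+// Add the resource and compute the initial assignment (3 replicas)
+admin.addResource("MYCLUSTER", "myDB", 6, "MasterSlave");
+admin.rebalance("MYCLUSTER", "myDB", 3);
+```
+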
+Now the cluster is defined in Zookeeper: the nodes (localhost:12913, localhost:12914, localhost:12915) and the resource (myDB, with 6 partitions using the MasterSlave model). The _ideal state_ has been calculated, assuming a replication factor of 3.
+
+##### Start the Helix Controller
+
+Now that the cluster is defined in Zookeeper, the Helix controller can manage the cluster.
+
+    ## Start the cluster manager, which will manage MYCLUSTER
+    ./run-helix-controller.sh --zkSvr localhost:2199 --cluster MYCLUSTER 2>&1 > /tmp/controller.log &
+
+##### Start up the cluster to be managed
+
+We\'ve started up Zookeeper, defined the cluster, the resources, the partitioning, and started up the Helix controller.  Next, we\'ll start up the nodes of the system to be managed.  Each node is a Participant, which is an instance of the system component to be managed.  Helix assigns work to Participants, keeps track of their roles and health, and takes action when a node fails.
+
+    # start up each instance.  These are mock implementations that are actively managed by Helix
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12913 --stateModelType MasterSlave 2>&1 > /tmp/participant_12913.log 
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12914 --stateModelType MasterSlave 2>&1 > /tmp/participant_12914.log
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12915 --stateModelType MasterSlave 2>&1 > /tmp/participant_12915.log
+
+
+#### Inspect the Cluster
+
+Now, let\'s see the Helix view of our cluster.  We\'ll work our way down as follows:
+
+```
+Clusters -> MYCLUSTER -> instances -> instance detail
+                      -> resources -> resource detail
+                      -> partitions
+```
+
+A single Helix controller can manage multiple clusters, though so far, we\'ve only defined one cluster.  Let\'s see:
+
+```
+## List existing clusters
+./helix-admin.sh --zkSvr localhost:2199 --listClusters        
+
+Existing clusters:
+MYCLUSTER
+```
+                                       
+Now, let\'s see the Helix view of MYCLUSTER
+
+```
+## helix-admin.sh --zkSvr <zk_address> --listClusterInfo <clusterName> 
+./helix-admin.sh --zkSvr localhost:2199 --listClusterInfo MYCLUSTER
+
+Existing resources in cluster MYCLUSTER:
+myDB
+Instances in cluster MYCLUSTER:
+localhost_12915
+localhost_12914
+localhost_12913
+```
+
+
+Let\'s look at the details of an instance
+
+```
+## ./helix-admin.sh --zkSvr <zk_address> --listInstanceInfo <clusterName> <InstanceName>    
+./helix-admin.sh --zkSvr localhost:2199 --listInstanceInfo MYCLUSTER localhost_12913
+
+InstanceConfig: {
+  "id" : "localhost_12913",
+  "mapFields" : {
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "HELIX_ENABLED" : "true",
+    "HELIX_HOST" : "localhost",
+    "HELIX_PORT" : "12913"
+  }
+}
+```
+
+    
+##### Query info of a resource
+
+```
+## helix-admin.sh --zkSvr <zk_address> --listResourceInfo <clusterName> <resourceName>
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
+
+IdealState for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_1" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_4" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+    "myDB_0" : [ "localhost_12914", "localhost_12913", "localhost_12915" ],
+    "myDB_1" : [ "localhost_12915", "localhost_12913", "localhost_12914" ],
+    "myDB_2" : [ "localhost_12913", "localhost_12915", "localhost_12914" ],
+    "myDB_3" : [ "localhost_12915", "localhost_12913", "localhost_12914" ],
+    "myDB_4" : [ "localhost_12913", "localhost_12914", "localhost_12915" ],
+    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
+  },
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "6",
+    "REPLICAS" : "3",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+  }
+}
+
+ExternalView for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_1" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_4" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "BUCKET_SIZE" : "0"
+  }
+}
+```
+
+Now, let\'s look at one of the partitions:
+
+    ## helix-admin.sh --zkSvr <zk_address> --listPartitionInfo <clusterName> <resource> <partition> 
+    ./helix-admin.sh --zkSvr localhost:2199 --listPartitionInfo MYCLUSTER myDB myDB_0
+
+#### Expand the Cluster
+
+Next, we\'ll show how Helix does the work that you\'d otherwise have to build into your system.  When you add capacity to your cluster, you want the work to be evenly distributed.  In this example, we started with 3 nodes and 6 partitions.  The partitions were evenly balanced, with 2 masters and 4 slaves per node. Let\'s add 3 more nodes: localhost:12916, localhost:12917, localhost:12918
+
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12916
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12917
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12918
+
+And start up these instances:
+
+    # start up each instance.  These are mock implementations that are actively managed by Helix
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12916 --stateModelType MasterSlave 2>&1 > /tmp/participant_12916.log
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12917 --stateModelType MasterSlave 2>&1 > /tmp/participant_12917.log
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12918 --stateModelType MasterSlave 2>&1 > /tmp/participant_12918.log
+
+
+And now, let Helix do the work for you.  To shift the work, simply rebalance.  After the rebalance, each node will have one master and two slaves.
+
+    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+
+#### View the cluster
+
+OK, let\'s see how it looks:
+
+
+```
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
+
+IdealState for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+    "myDB_0" : [ "localhost_12917", "localhost_12913", "localhost_12914" ],
+    "myDB_1" : [ "localhost_12918", "localhost_12917", "localhost_12916" ],
+    "myDB_2" : [ "localhost_12913", "localhost_12917", "localhost_12918" ],
+    "myDB_3" : [ "localhost_12915", "localhost_12917", "localhost_12918" ],
+    "myDB_4" : [ "localhost_12916", "localhost_12917", "localhost_12918" ],
+    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
+  },
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "6",
+    "REPLICAS" : "3",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+  }
+}
+
+ExternalView for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "BUCKET_SIZE" : "0"
+  }
+}
+```
+
+Mission accomplished.  The partitions are nicely balanced.
+
+#### How about Failover?
+
+Building a fault tolerant system isn\'t trivial, but with Helix, it\'s easy.  Helix detects a failed instance, and triggers mastership transfer automatically.
+
+First, let\'s fail an instance.  In this example, we\'ll kill localhost:12918 to simulate a failure.
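+
+One way to do this, assuming a Unix-like shell (this is just a sketch; any way of stopping the process works):
+
+    # find and kill the participant process that was started with --port 12918
+    kill `ps ax | grep 'port 12918' | grep -v grep | awk '{print $1}'`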
+
+We lost localhost:12918, so myDB_1 lost its MASTER.  Helix can fix that: it will transfer mastership to a healthy node that is currently a SLAVE, in this case localhost:12917.  Helix balances the load as best it can, given there are 6 partitions on 5 nodes.  Let\'s see:
+
+
+```
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
+
+IdealState for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+    "myDB_0" : [ "localhost_12917", "localhost_12913", "localhost_12914" ],
+    "myDB_1" : [ "localhost_12918", "localhost_12917", "localhost_12916" ],
+    "myDB_2" : [ "localhost_12913", "localhost_12918", "localhost_12917" ],
+    "myDB_3" : [ "localhost_12915", "localhost_12918", "localhost_12917" ],
+    "myDB_4" : [ "localhost_12916", "localhost_12917", "localhost_12918" ],
+    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
+  },
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "6",
+    "REPLICAS" : "3",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+  }
+}
+
+ExternalView for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "BUCKET_SIZE" : "0"
+  }
+}
+```
+
+As we\'ve seen in this Quickstart, Helix takes care of partitioning, load balancing, elasticity, failure detection and recovery.
+
+##### ZooInspector
+
+You can view all of the underlying data by going directly to Zookeeper.  Use the ZooInspector tool that comes with Zookeeper to browse the data. It is a Java GUI application, so make sure you have an X Window environment available.
+
+To start ZooInspector, run the following command from <zk_install_directory>/contrib/ZooInspector:
+      
+    java -cp zookeeper-3.3.3-ZooInspector.jar:lib/jtoaster-1.0.4.jar:../../lib/log4j-1.2.15.jar:../../zookeeper-3.3.3.jar org.apache.zookeeper.inspector.ZooInspector
+
+#### Next
+
+Now that you understand the idea of Helix, read the [tutorial](./tutorial.html) to learn how to choose the right state model and constraints for your system, and how to implement it.  In many cases, the built-in features meet your requirements.  And best of all, Helix is a customizable framework, so you can plug in your own behavior, while retaining the automation provided by Helix.
+


[11/16] [HELIX-270] Include documentation for previous version on the website

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/Quickstart.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Quickstart.md b/site-releases/0.6.2-incubating/src/site/markdown/Quickstart.md
new file mode 100644
index 0000000..533a48c
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/Quickstart.md
@@ -0,0 +1,626 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Quickstart</title>
+</head>
+
+Get Helix
+---------
+
+First, let\'s get Helix: either build it from source, or download a release.
+
+### Build
+
+    git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+    cd incubator-helix
+    git checkout tags/helix-0.6.2-incubating
+    ./build
+    cd helix-core/target/helix-core-pkg/bin # this folder contains all the scripts used in the following sections
+    chmod +x *
+
+### Download
+
+Download the 0.6.2-incubating release package [here](./download.html) 
+
+Overview
+--------
+
+In this Quickstart, we\'ll set up a master-slave replicated, partitioned system.  Then we\'ll demonstrate how to add a node, rebalance the partitions, and show how Helix manages failover.
+
+
+Let\'s Do It
+------------
+
+Helix provides command line interfaces to set up the cluster and view the cluster state. The best way to understand how Helix views a cluster is to build a cluster.
+
+#### First, get to the tools directory
+
+If you built the code
+
+```
+cd incubator-helix/helix-core/target/helix-core-pkg/bin
+```
+
+If you downloaded the release package, extract it.
+
+
+Short Version
+-------------
+You can observe the components working together in this demo, which does the following:
+
+* Create a cluster
+* Add 2 nodes (participants) to the cluster
+* Set up a resource with 6 partitions and 2 replicas: 1 Master, and 1 Slave per partition
+* Show the cluster state after Helix balances the partitions
+* Add a third node
+* Show the cluster state.  Note that the third node has taken mastership of 2 partitions.
+* Kill the third node (Helix takes care of failover)
+* Show the cluster state.  Note that the two surviving nodes take over mastership of the partitions from the failed node
+
+##### Run the demo
+
+```
+cd incubator-helix/helix-core/target/helix-core-pkg/bin
+./quickstart.sh
+```
+
+##### 2 nodes are set up and the partitions rebalanced
+
+The cluster state is as follows:
+
+```
+CLUSTER STATE: After starting 2 nodes
+	                     localhost_12000	localhost_12001	
+	       MyResource_0	M			S		
+	       MyResource_1	S			M		
+	       MyResource_2	M			S		
+	       MyResource_3	M			S		
+	       MyResource_4	S			M  
+	       MyResource_5	S			M  
+```
+
+Note there is one master and one slave per partition.
+
+##### A third node is added and the cluster rebalanced
+
+The cluster state changes to:
+
+```
+CLUSTER STATE: After adding a third node
+                 	       localhost_12000	    localhost_12001	localhost_12002	
+	       MyResource_0	    S			  M		      S		
+	       MyResource_1	    S			  S		      M	 
+	       MyResource_2	    M			  S	              S  
+	       MyResource_3	    S			  S                   M  
+	       MyResource_4	    M			  S	              S  
+	       MyResource_5	    S			  M                   S  
+```
+
+Note there is one master and _two_ slaves per partition.  This is expected because there are three nodes.
+
+##### Finally, a node is killed to simulate a failure
+
+Helix makes sure each partition has a master.  The cluster state changes to:
+
+```
+CLUSTER STATE: After the 3rd node stops/crashes
+                	       localhost_12000	  localhost_12001	localhost_12002	
+	       MyResource_0	    S			M		      -		
+	       MyResource_1	    S			M		      -	 
+	       MyResource_2	    M			S	              -  
+	       MyResource_3	    M			S                     -  
+	       MyResource_4	    M			S	              -  
+	       MyResource_5	    S			M                     -  
+```
+
+
+Long Version
+------------
+Now you can run the same steps by hand.  In the detailed version, we\'ll do the following:
+
+* Define a cluster
+* Add two nodes to the cluster
+* Add a 6-partition resource with 1 master and 2 slave replicas per partition
+* Verify that the cluster is healthy and inspect the Helix view
+* Expand the cluster: add a few nodes and rebalance the partitions
+* Failover: stop a node and verify the mastership transfer
+
+### Install and Start Zookeeper
+
+Zookeeper can be started in standalone mode or replicated mode.
+
+More info is available at 
+
+* http://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html
+* http://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html#sc_zkMulitServerSetup
+
+In this example, let\'s start zookeeper in local mode.
+
+##### start zookeeper locally on port 2199
+
+    ./start-standalone-zookeeper.sh 2199 &
+
+### Define the Cluster
+
+The helix-admin tool is used for cluster administration tasks. In the Quickstart, we\'ll use the command line interface. Helix supports a REST interface as well.
+
+zookeeper_address is of the format host:port, e.g. localhost:2199 for standalone or host1:port,host2:port for a multi-node Zookeeper ensemble.
+
+Next, we\'ll set up a cluster named MYCLUSTER with these attributes:
+
+* 3 instances running on localhost at ports 12913,12914,12915 
+* One database named myDB with 6 partitions 
+* Each partition will have 3 replicas: 1 master and 2 slaves
+* zookeeper running locally at localhost:2199
+
+##### Create the cluster MYCLUSTER
+    ## helix-admin.sh --zkSvr <zk_address> --addCluster <clustername> 
+    ./helix-admin.sh --zkSvr localhost:2199 --addCluster MYCLUSTER 
+
+##### Add nodes to the cluster
+
+In this case we\'ll add three nodes: localhost:12913, localhost:12914, localhost:12915
+
+    ## helix-admin.sh --zkSvr <zk_address>  --addNode <clustername> <host:port>
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12913
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12914
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12915
+
+#### Define the resource and partitioning
+
+In this example, the resource is a database, partitioned 6 ways.  (In a production system, it\'s common to over-partition for better load balancing.  Helix has been used in production to manage hundreds of databases each with 10s or 100s of partitions running on 10s of physical nodes.)
+
+##### Create a database with 6 partitions using the MasterSlave state model. 
+
+Helix ensures there will be exactly one master for each partition.
+
+    ## helix-admin.sh --zkSvr <zk_address> --addResource <clustername> <resourceName> <numPartitions> <StateModelName>
+    ./helix-admin.sh --zkSvr localhost:2199 --addResource MYCLUSTER myDB 6 MasterSlave
+   
+##### Now we can let Helix assign partitions to nodes. 
+
+This command will distribute the partitions amongst all the nodes in the cluster. In this example, each partition has 3 replicas.
+
+    ## helix-admin.sh --zkSvr <zk_address> --rebalance <clustername> <resourceName> <replication factor>
+    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+
+Now the cluster is defined in Zookeeper, with its nodes (localhost:12913, localhost:12914, localhost:12915) and its resource (myDB, with 6 partitions using the MasterSlave model).  The _ideal state_ has also been calculated, assuming a replication factor of 3.
+
+##### Start the Helix Controller
+
+Now that the cluster is defined in Zookeeper, the Helix controller can manage the cluster.
+
+    ## Start the cluster manager, which will manage MYCLUSTER
+    ./run-helix-controller.sh --zkSvr localhost:2199 --cluster MYCLUSTER 2>&1 > /tmp/controller.log &
+
+##### Start up the cluster to be managed
+
+We\'ve started up Zookeeper, defined the cluster, the resources, the partitioning, and started up the Helix controller.  Next, we\'ll start up the nodes of the system to be managed.  Each node is a Participant, which is an instance of the system component to be managed.  Helix assigns work to Participants, keeps track of their roles and health, and takes action when a node fails.
+
+    # start up each instance.  These are mock implementations that are actively managed by Helix
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12913 --stateModelType MasterSlave 2>&1 > /tmp/participant_12913.log 
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12914 --stateModelType MasterSlave 2>&1 > /tmp/participant_12914.log
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12915 --stateModelType MasterSlave 2>&1 > /tmp/participant_12915.log
+
+
+#### Inspect the Cluster
+
+Now, let\'s see the Helix view of our cluster.  We\'ll work our way down as follows:
+
+```
+Clusters -> MYCLUSTER -> instances -> instance detail
+                      -> resources -> resource detail
+                      -> partitions
+```
+
+A single Helix controller can manage multiple clusters, though so far, we\'ve only defined one cluster.  Let\'s see:
+
+```
+## List existing clusters
+./helix-admin.sh --zkSvr localhost:2199 --listClusters        
+
+Existing clusters:
+MYCLUSTER
+```
+                                       
+Now, let\'s see the Helix view of MYCLUSTER
+
+```
+## helix-admin.sh --zkSvr <zk_address> --listClusterInfo <clusterName> 
+./helix-admin.sh --zkSvr localhost:2199 --listClusterInfo MYCLUSTER
+
+Existing resources in cluster MYCLUSTER:
+myDB
+Instances in cluster MYCLUSTER:
+localhost_12915
+localhost_12914
+localhost_12913
+```
+
+
+Let\'s look at the details of an instance
+
+```
+## ./helix-admin.sh --zkSvr <zk_address> --listInstanceInfo <clusterName> <InstanceName>    
+./helix-admin.sh --zkSvr localhost:2199 --listInstanceInfo MYCLUSTER localhost_12913
+
+InstanceConfig: {
+  "id" : "localhost_12913",
+  "mapFields" : {
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "HELIX_ENABLED" : "true",
+    "HELIX_HOST" : "localhost",
+    "HELIX_PORT" : "12913"
+  }
+}
+```
+
+    
+##### Query info of a resource
+
+```
+## helix-admin.sh --zkSvr <zk_address> --listResourceInfo <clusterName> <resourceName>
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
+
+IdealState for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_1" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_4" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+    "myDB_0" : [ "localhost_12914", "localhost_12913", "localhost_12915" ],
+    "myDB_1" : [ "localhost_12915", "localhost_12913", "localhost_12914" ],
+    "myDB_2" : [ "localhost_12913", "localhost_12915", "localhost_12914" ],
+    "myDB_3" : [ "localhost_12915", "localhost_12913", "localhost_12914" ],
+    "myDB_4" : [ "localhost_12913", "localhost_12914", "localhost_12915" ],
+    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
+  },
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "6",
+    "REPLICAS" : "3",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+  }
+}
+
+ExternalView for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_1" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_4" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "BUCKET_SIZE" : "0"
+  }
+}
+```
+
+Now, let\'s look at one of the partitions:
+
+    ## helix-admin.sh --zkSvr <zk_address> --listPartitionInfo <clusterName> <resource> <partition> 
+    ./helix-admin.sh --zkSvr localhost:2199 --listPartitionInfo MYCLUSTER myDB myDB_0
+
+#### Expand the Cluster
+
+Next, we\'ll show how Helix does the work that you\'d otherwise have to build into your system.  When you add capacity to your cluster, you want the work to be evenly distributed.  In this example, we started with 3 nodes and 6 partitions.  The partitions were evenly balanced, with 2 masters and 4 slaves per node. Let\'s add 3 more nodes: localhost:12916, localhost:12917, localhost:12918
+
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12916
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12917
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12918
+
+And start up these instances:
+
+    # start up each instance.  These are mock implementations that are actively managed by Helix
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12916 --stateModelType MasterSlave 2>&1 > /tmp/participant_12916.log
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12917 --stateModelType MasterSlave 2>&1 > /tmp/participant_12917.log
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12918 --stateModelType MasterSlave 2>&1 > /tmp/participant_12918.log
+
+
+And now, let Helix do the work for you.  To shift the work, simply rebalance.  After the rebalance, each node will have one master and two slaves.
+
+    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+
+#### View the cluster
+
+OK, let\'s see how it looks:
+
+
+```
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
+
+IdealState for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+    "myDB_0" : [ "localhost_12917", "localhost_12913", "localhost_12914" ],
+    "myDB_1" : [ "localhost_12918", "localhost_12917", "localhost_12916" ],
+    "myDB_2" : [ "localhost_12913", "localhost_12917", "localhost_12918" ],
+    "myDB_3" : [ "localhost_12915", "localhost_12917", "localhost_12918" ],
+    "myDB_4" : [ "localhost_12916", "localhost_12917", "localhost_12918" ],
+    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
+  },
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "6",
+    "REPLICAS" : "3",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+  }
+}
+
+ExternalView for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "BUCKET_SIZE" : "0"
+  }
+}
+```
+
+Mission accomplished.  The partitions are nicely balanced.
+
+#### How about Failover?
+
+Building a fault tolerant system isn\'t trivial, but with Helix, it\'s easy.  Helix detects a failed instance, and triggers mastership transfer automatically.
+
+First, let\'s fail an instance.  In this example, we\'ll kill localhost:12918 to simulate a failure.
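+
+One way to do this, assuming a Unix-like shell (this is just a sketch; any way of stopping the process works):
+
+    # find and kill the participant process that was started with --port 12918
+    kill `ps ax | grep 'port 12918' | grep -v grep | awk '{print $1}'`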
+
+We lost localhost:12918, so myDB_1 lost its MASTER.  Helix can fix that: it will transfer mastership to a healthy node that is currently a SLAVE, in this case localhost:12917.  Helix balances the load as best it can, given there are 6 partitions on 5 nodes.  Let\'s see:
+
+
+```
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
+
+IdealState for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+    "myDB_0" : [ "localhost_12917", "localhost_12913", "localhost_12914" ],
+    "myDB_1" : [ "localhost_12918", "localhost_12917", "localhost_12916" ],
+    "myDB_2" : [ "localhost_12913", "localhost_12918", "localhost_12917" ],
+    "myDB_3" : [ "localhost_12915", "localhost_12918", "localhost_12917" ],
+    "myDB_4" : [ "localhost_12916", "localhost_12917", "localhost_12918" ],
+    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
+  },
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "6",
+    "REPLICAS" : "3",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+  }
+}
+
+ExternalView for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "BUCKET_SIZE" : "0"
+  }
+}
+```
+
+As we\'ve seen in this Quickstart, Helix takes care of partitioning, load balancing, elasticity, failure detection and recovery.
+
+##### ZooInspector
+
+You can view all of the underlying data by going directly to Zookeeper.  Use the ZooInspector tool that comes with Zookeeper to browse the data. It is a Java GUI application, so make sure you have an X Window environment available.
+
+To start ZooInspector, run the following command from <zk_install_directory>/contrib/ZooInspector:
+      
+    java -cp zookeeper-3.3.3-ZooInspector.jar:lib/jtoaster-1.0.4.jar:../../lib/log4j-1.2.15.jar:../../zookeeper-3.3.3.jar org.apache.zookeeper.inspector.ZooInspector
+
+#### Next
+
+Now that you understand the idea of Helix, read the [tutorial](./tutorial.html) to learn how to choose the right state model and constraints for your system, and how to implement it.  In many cases, the built-in features meet your requirements.  And best of all, Helix is a customizable framework, so you can plug in your own behavior, while retaining the automation provided by Helix.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/Tutorial.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/Tutorial.md b/site-releases/0.6.2-incubating/src/site/markdown/Tutorial.md
new file mode 100644
index 0000000..61221b7
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/Tutorial.md
@@ -0,0 +1,205 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial</title>
+</head>
+
+# Helix Tutorial
+
+In this tutorial, we will cover the roles of a Helix-managed cluster, and show the code you need to write to integrate with it.  In many cases, there is a simple default behavior that is often appropriate, but you can also customize the behavior.
+
+Convention: we first cover the _basic_ approach, which is the easiest to implement.  Then, we'll describe _advanced_ options, which give you more control over the system behavior, but require you to write more code.
+
+
+### Prerequisites
+
+1. Read [Concepts/Terminology](./Concepts.html) and [Architecture](./Architecture.html)
+2. Read the [Quickstart guide](./Quickstart.html) to learn how Helix models and manages a cluster
+3. Install Helix source.  See: [Quickstart](./Quickstart.html) for the steps.
+
+### Tutorial Outline
+
+1. [Participant](./tutorial_participant.html)
+2. [Spectator](./tutorial_spectator.html)
+3. [Controller](./tutorial_controller.html)
+4. [Rebalancing Algorithms](./tutorial_rebalance.html)
+5. [User-Defined Rebalancing](./tutorial_user_def_rebalancer.html)
+6. [State Machines](./tutorial_state.html)
+7. [Messaging](./tutorial_messaging.html)
+8. [Customized health check](./tutorial_health.html)
+9. [Throttling](./tutorial_throttling.html)
+10. [Application Property Store](./tutorial_propstore.html)
+11. [Admin Interface](./tutorial_admin.html)
+12. [YAML Cluster Setup](./tutorial_yaml.html)
+
+### Preliminaries
+
+First, we need to set up the system.  Let\'s walk through the steps in building a distributed system using Helix.
+
+### Start Zookeeper
+
+This starts a zookeeper in standalone mode. For production deployment, see [Apache Zookeeper](http://zookeeper.apache.org) for instructions.
+
+```
+    ./start-standalone-zookeeper.sh 2199 &
+```
+
+### Create a cluster
+
+Creating a cluster will define the cluster in appropriate znodes on zookeeper.   
+
+Using the Java API:
+
+```
+    // Create setup tool instance
+    // Note: ZK_ADDRESS is the host:port of Zookeeper
+    String ZK_ADDRESS = "localhost:2199";
+    HelixAdmin admin = new ZKHelixAdmin(ZK_ADDRESS);
+
+    String CLUSTER_NAME = "helix-demo";
+    //Create cluster namespace in zookeeper
+    admin.addCluster(CLUSTER_NAME);
+```
+
+OR
+
+Using the command-line interface:
+
+```
+    ./helix-admin.sh --zkSvr localhost:2199 --addCluster helix-demo 
+```
+
+
+### Configure the nodes of the cluster
+
+First we\'ll add new nodes to the cluster, then configure the nodes in the cluster. Each node in the cluster must be uniquely identifiable. 
+The most commonly used convention is hostname:port.
+
+```
+    String CLUSTER_NAME = "helix-demo";
+    int NUM_NODES = 2;
+    String hosts[] = new String[]{"localhost","localhost"};
+    String ports[] = new String[]{"7000","7001"};
+    for (int i = 0; i < NUM_NODES; i++)
+    {
+      
+      InstanceConfig instanceConfig = new InstanceConfig(hosts[i]+ "_" + ports[i]);
+      instanceConfig.setHostName(hosts[i]);
+      instanceConfig.setPort(ports[i]);
+      instanceConfig.setInstanceEnabled(true);
+
+      //Add additional system specific configuration if needed. These can be accessed during the node start up.
+      instanceConfig.getRecord().setSimpleField("key", "value");
+      admin.addInstance(CLUSTER_NAME, instanceConfig);
+      
+    }
+```
+
+### Configure the resource
+
+A _resource_ represents the actual task performed by the nodes. It can be a database, index, topic, queue or any other processing entity.
+A _resource_ can be divided into many sub-parts known as _partitions_.
+
+
+#### Define the _state model_ and _constraints_
+
+For scalability and fault tolerance, each partition can have one or more replicas. 
+The _state model_ allows one to declare the system behavior by first enumerating the various STATES, and the TRANSITIONS between them.
+A simple model is ONLINE-OFFLINE where ONLINE means the task is active and OFFLINE means it\'s not active.
+You can also specify how many replicas must be in each state; these are known as _constraints_.
+For example, in a search system, one might need more than one node serving the same index to handle the load.
+
+The allowed states: 
+
+* MASTER
+* SLAVE
+* OFFLINE
+
+The allowed transitions: 
+
+* OFFLINE to SLAVE
+* SLAVE to OFFLINE
+* SLAVE to MASTER
+* MASTER to SLAVE
+
+The constraints:
+
+* no more than 1 MASTER per partition
+* the rest of the replicas should be slaves
+
+The following snippet shows how to declare the _state model_ and _constraints_ for the MASTER-SLAVE model.
+
+```
+
+    StateModelDefinition.Builder builder = new StateModelDefinition.Builder(STATE_MODEL_NAME);
+
+    // Add states and their rank to indicate priority. A lower rank corresponds to a higher priority
+    builder.addState(MASTER, 1);
+    builder.addState(SLAVE, 2);
+    builder.addState(OFFLINE);
+
+    // Set the initial state when the node starts
+    builder.initialState(OFFLINE);
+
+    // Add transitions between the states.
+    builder.addTransition(OFFLINE, SLAVE);
+    builder.addTransition(SLAVE, OFFLINE);
+    builder.addTransition(SLAVE, MASTER);
+    builder.addTransition(MASTER, SLAVE);
+
+    // set constraints on states.
+
+    // static constraint: upper bound of 1 MASTER
+    builder.upperBound(MASTER, 1);
+
+    // dynamic constraint: R means it should be derived based on the replication factor for the cluster
+    // this allows a different replication factor for each resource without 
+    // having to define a new state model
+    //
+    builder.dynamicUpperBound(SLAVE, "R");
+
+    StateModelDefinition statemodelDefinition = builder.build();
+    admin.addStateModelDef(CLUSTER_NAME, STATE_MODEL_NAME, statemodelDefinition);
+```
+
+#### Assigning partitions to nodes
+
+The final goal of Helix is to ensure that the constraints on the state model are satisfied. 
+Helix does this by assigning a STATE to a partition (such as MASTER, SLAVE), and placing it on a particular node.
+
+There are 3 assignment modes Helix can operate in:
+
+* FULL_AUTO: Helix decides the placement and state of a partition.
+* SEMI_AUTO: Application decides the placement but Helix decides the state of a partition.
+* CUSTOMIZED: Application controls the placement and state of a partition.
+
+For more info on the assignment modes, see [Rebalancing Algorithms](./tutorial_rebalance.html) section of the tutorial.
+
+```
+    String RESOURCE_NAME = "MyDB";
+    int NUM_PARTITIONS = 6;
+    String STATE_MODEL_NAME = "MasterSlave";
+    String MODE = "SEMI_AUTO";
+    int NUM_REPLICAS = 2;
+
+    admin.addResource(CLUSTER_NAME, RESOURCE_NAME, NUM_PARTITIONS, STATE_MODEL_NAME, MODE);
+    admin.rebalance(CLUSTER_NAME, RESOURCE_NAME, NUM_REPLICAS);
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/index.md b/site-releases/0.6.2-incubating/src/site/markdown/index.md
new file mode 100644
index 0000000..a09a70d
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/index.md
@@ -0,0 +1,58 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Home</title>
+</head>
+
+Navigating the Documentation
+----------------------------
+
+### Conceptual Understanding
+
+[Concepts / Terminology](./Concepts.html)
+
+[Architecture](./Architecture.html)
+
+### Hands-on Helix
+
+[Getting Helix](./Building.html)
+
+[Quickstart](./Quickstart.html)
+
+[Tutorial](./Tutorial.html)
+
+[Javadocs](http://helix.incubator.apache.org/javadocs/0.6.2-incubating/index.html)
+
+### Recipes
+
+[Distributed lock manager](./recipes/lock_manager.html)
+
+[Rabbit MQ consumer group](./recipes/rabbitmq_consumer_group.html)
+
+[Rsync replicated file store](./recipes/rsync_replicated_file_store.html)
+
+[Service discovery](./recipes/service_discovery.html)
+
+[Distributed Task DAG Execution](./recipes/task_dag_execution.html)
+
+### Download
+
+[0.6.2-incubating](./download.html)
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/recipes/lock_manager.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/lock_manager.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/lock_manager.md
new file mode 100644
index 0000000..252ace7
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/lock_manager.md
@@ -0,0 +1,253 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+Distributed lock manager
+------------------------
+Distributed locks are used to synchronize access to shared resources. Most applications use Zookeeper to model distributed locks.
+
+The simplest way to model a lock using Zookeeper is as follows (see the Zookeeper leader election recipe for an exact and more advanced solution; a minimal sketch of this scheme follows the list):
+
+* Each process tries to create an ephemeral node.
+* If it succeeds in creating the node, it acquires the lock.
+* Otherwise, it watches the znode and tries to acquire the lock again when the current lock holder disappears.
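+
+As a minimal, illustrative sketch (not the Helix recipe), using the raw Zookeeper Java client; the lock path, connect string, and session timeout are arbitrary:
+
+```
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException.NodeExistsException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooDefs;
+import org.apache.zookeeper.ZooKeeper;
+
+public class NaiveLock
+{
+  public static void main(String[] args) throws Exception
+  {
+    final ZooKeeper zk = new ZooKeeper("localhost:2199", 30000, null);
+    final String lockPath = "/mylock";
+    final Object lockReleased = new Object();
+    while (true)
+    {
+      try
+      {
+        // the ephemeral node vanishes if this process dies, releasing the lock
+        zk.create(lockPath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+        System.out.println("acquired " + lockPath);
+        break;
+      }
+      catch (NodeExistsException e)
+      {
+        // another process holds the lock: watch the znode and retry once it goes away
+        Watcher watcher = new Watcher()
+        {
+          public void process(WatchedEvent event)
+          {
+            synchronized (lockReleased) { lockReleased.notifyAll(); }
+          }
+        };
+        if (zk.exists(lockPath, watcher) != null)
+        {
+          synchronized (lockReleased) { lockReleased.wait(); }
+        }
+      }
+    }
+  }
+}
+```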
+
+This is good enough if there is only one lock. But in practice, an application will need many such locks, and distributing and managing them among different processes becomes challenging. Extending such a solution to many locks will result in:
+
+* Uneven distribution of locks among nodes: the node that starts first will acquire all the locks, while nodes that start later will be idle.
+* When a node fails, how its locks will be distributed among the remaining nodes is not predictable.
+* When new nodes are added, the current nodes don\'t relinquish any locks, so the new nodes cannot acquire any.
+
+In other words, we want a system that satisfies the following requirements:
+
+* Distribute locks evenly among all nodes to get better hardware utilization
+* If a node fails, the locks that were acquired by that node should be evenly distributed among other nodes
+* If nodes are added, locks must be evenly re-distributed among nodes.
+
+Helix provides a simple and elegant solution to this problem. Simply specify the number of locks and Helix will ensure that the above constraints are satisfied.
+
+To quickly see this working, run the lock-manager-demo script, in which 12 locks are evenly distributed among three nodes, and when a node fails, the locks get re-distributed among the remaining two nodes. Note that Helix does not re-shuffle the locks completely; instead, it simply distributes the locks relinquished by the dead node evenly among the 2 remaining nodes.
+
+----------------------------------------------------------------------------------------
+
+#### Short version
+ This version starts multiple threads within the same process to simulate a multi-node deployment. Try the long version to get a better idea of how it works.
+ 
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+cd recipes/distributed-lock-manager/target/distributed-lock-manager-pkg/bin
+chmod +x *
+./lock-manager-demo
+```
+
+##### Output
+
+```
+./lock-manager-demo 
+STARTING localhost_12000
+STARTING localhost_12002
+STARTING localhost_12001
+STARTED localhost_12000
+STARTED localhost_12002
+STARTED localhost_12001
+localhost_12001 acquired lock:lock-group_3
+localhost_12000 acquired lock:lock-group_8
+localhost_12001 acquired lock:lock-group_2
+localhost_12001 acquired lock:lock-group_4
+localhost_12002 acquired lock:lock-group_1
+localhost_12002 acquired lock:lock-group_10
+localhost_12000 acquired lock:lock-group_7
+localhost_12001 acquired lock:lock-group_5
+localhost_12002 acquired lock:lock-group_11
+localhost_12000 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_0
+localhost_12000 acquired lock:lock-group_9
+lockName    acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10    localhost_12002
+lock-group_11    localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12000
+lock-group_7    localhost_12000
+lock-group_8    localhost_12000
+lock-group_9    localhost_12000
+Stopping localhost_12000
+localhost_12000 Interrupted
+localhost_12001 acquired lock:lock-group_9
+localhost_12001 acquired lock:lock-group_8
+localhost_12002 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_7
+lockName    acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10    localhost_12002
+lock-group_11    localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12002
+lock-group_7    localhost_12002
+lock-group_8    localhost_12001
+lock-group_9    localhost_12001
+
+```
+
+----------------------------------------------------------------------------------------
+
+#### Long version
+This provides more details on how to set up the cluster and where to plug in application code.
+
+##### start zookeeper
+
+```
+./start-standalone-zookeeper 2199
+```
+
+##### Create a cluster
+
+```
+./helix-admin --zkSvr localhost:2199 --addCluster lock-manager-demo
+```
+
+##### Create a lock group
+
+Create a lock group and specify the number of locks in the lock group. 
+
+```
+./helix-admin --zkSvr localhost:2199  --addResource lock-manager-demo lock-group 6 OnlineOffline FULL_AUTO
+```
+
+##### Start the nodes
+
+Create a Lock class that handles the callbacks. 
+
+```
+
+import org.apache.helix.NotificationContext;
+import org.apache.helix.model.Message;
+import org.apache.helix.participant.statemachine.StateModel;
+import org.apache.helix.participant.statemachine.Transition;
+
+public class Lock extends StateModel
+{
+  private String lockName;
+
+  public Lock(String lockName)
+  {
+    this.lockName = lockName;
+  }
+
+  // invoked when Helix grants this lock (partition) to this node
+  @Transition(from = "OFFLINE", to = "ONLINE")
+  public void lock(Message m, NotificationContext context)
+  {
+    System.out.println(" acquired lock:"+ lockName );
+  }
+
+  // invoked when Helix asks this node to relinquish the lock
+  @Transition(from = "ONLINE", to = "OFFLINE")
+  public void release(Message m, NotificationContext context)
+  {
+    System.out.println(" releasing lock:"+ lockName );
+  }
+}
+
+```
+
+And a LockFactory that creates Lock instances:
+ 
+```
+public class LockFactory extends StateModelFactory<Lock>
+{
+  /* Instantiates the lock handler, one per lockName (partition) */
+  @Override
+  public Lock createNewStateModel(String lockName)
+  {
+    return new Lock(lockName);
+  }
+}
+```
+
+At node start-up, simply join the cluster, and Helix will invoke the appropriate callbacks on the Lock instances. You can start any number of nodes; Helix detects that a new node has joined the cluster and re-distributes the locks automatically.
+
+```
+public class LockProcess
+{
+  public static void main(String[] args) throws Exception
+  {
+    String zkAddress = "localhost:2199";
+    String clusterName = "lock-manager-demo";
+    // give a unique id to each process; the most commonly used format is hostname_port
+    String instanceName = "localhost_12000";
+    ZKHelixAdmin helixAdmin = new ZKHelixAdmin(zkAddress);
+    // configure the instance and provide some metadata
+    InstanceConfig config = new InstanceConfig(instanceName);
+    config.setHostName("localhost");
+    config.setPort("12000");
+    helixAdmin.addInstance(clusterName, config);
+    // join the cluster
+    HelixManager manager;
+    manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                    instanceName,
+                                                    InstanceType.PARTICIPANT,
+                                                    zkAddress);
+    manager.getStateMachineEngine().registerStateModelFactory("OnlineOffline", new LockFactory());
+    manager.connect();
+    Thread.currentThread().join();
+  }
+}
+```
+
+##### Start the controller
+
+The controller can be started either as a separate process or embedded within each node process.
+
+###### Separate process
+This is recommended when the number of nodes in the cluster exceeds 100. For fault tolerance, you can run multiple controllers on different boxes.
+
+```
+./run-helix-controller --zkSvr localhost:2199 --cluster lock-manager-demo 2>&1 > /tmp/controller.log &
+```
+
+###### Embedded within the node process
+This is recommended when the number of nodes in the cluster is less than 100. To start a controller from each node process, simply add the following lines to LockProcess:
+
+```
+public class LockProcess
+{
+  public static void main(String[] args) throws Exception
+  {
+    String zkAddress = "localhost:2199";
+    String clusterName = "lock-manager-demo";
+    .
+    .
+    manager.connect();
+    HelixManager controller;
+    controller = HelixControllerMain.startHelixController(zkAddress,
+                                                          clusterName,
+                                                          "controller",
+                                                          HelixControllerMain.STANDALONE);
+    Thread.currentThread().join();
+  }
+}
+```
+
+----------------------------------------------------------------------------------------
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
new file mode 100644
index 0000000..9edc2cb
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/rabbitmq_consumer_group.md
@@ -0,0 +1,227 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+RabbitMQ Consumer Group
+=======================
+
+[RabbitMQ](http://www.rabbitmq.com/) is well-known open source software that provides robust messaging for applications.
+
+One of the commonly implemented recipes using this software is a work queue.  http://www.rabbitmq.com/tutorials/tutorial-four-java.html describes the use case where:
+
+* A producer sends a message with a routing key.
+* The message is routed to the queue whose binding key exactly matches the routing key of the message.
+* There are multiple consumers, and each consumer is interested in processing only a subset of the messages, by binding to the keys of interest.
+
+The example provided [here](http://www.rabbitmq.com/tutorials/tutorial-four-java.html) describes how multiple consumers can be started to process all the messages.
+
+While this works, in production systems one needs the following:
+
+* Ability to handle failures: when a consumer fails, another consumer must be started, or the other consumers must start processing the messages that would have been processed by the failed consumer.
+* When the existing consumers cannot keep up with the task generation rate, new consumers will be added. The tasks must be redistributed among all the consumers.
+
+In this recipe, we demonstrate handling of consumer failures and new consumer additions using Helix.
+
+Mapping this use case to Helix is pretty easy, as the binding key/routing key is equivalent to a partition.
+
+Let's take an example. Say the topic has 6 queues, and we have 2 consumers to process all of them, so each consumer processes tasks from 3 queues.
+When the system scales, we add a third consumer to keep up; each consumer then processes tasks from only 2 queues.
+Now let's say that a consumer fails, reducing the number of active consumers to 2. Each consumer must then process 3 queues again.
+
+We showcase how such a dynamic application can be developed using Helix. Even though we use RabbitMQ as the pub/sub system, one can extend this solution to other pub/sub systems.
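+
+To make the mapping concrete, here is a rough sketch of how a consumer could bind to the queue for one partition using the RabbitMQ Java client. The exchange name, partition naming scheme, and variable names are illustrative assumptions, not the recipe's exact code:
+
+```
+import com.rabbitmq.client.*;
+
+// Sketch: derive the routing key from a Helix partition name such as "topic_3"
+ConnectionFactory factory = new ConnectionFactory();
+factory.setHost("localhost");                        // RabbitMQ server
+Connection connection = factory.newConnection();
+Channel channel = connection.createChannel();
+channel.exchangeDeclare("topic", "direct", true);    // assumed exchange name
+String queueName = channel.queueDeclare().getQueue();
+String bindingKey = "topic_3".split("_")[1];         // partition name -> routing key "3"
+channel.queueBind(queueName, "topic", bindingKey);
+```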
+
+Try it
+======
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+export HELIX_PKG_ROOT=`pwd`/helix-core/target/helix-core-pkg
+export HELIX_RABBITMQ_ROOT=`pwd`/recipes/rabbitmq-consumer-group
+chmod +x $HELIX_PKG_ROOT/bin/*
+chmod +x $HELIX_RABBITMQ_ROOT/bin/*
+```
+
+
+Install RabbitMQ
+----------------
+
+Setting up RabbitMQ on a local box is straightforward. You can find the instructions here:
+http://www.rabbitmq.com/download.html
+
+Start ZK
+--------
+Start ZooKeeper on port 2199:
+
+```
+$HELIX_PKG_ROOT/bin/start-standalone-zookeeper 2199
+```
+
+Setup the consumer group cluster
+--------------------------------
+This will set up the cluster by creating a "rabbitmq-consumer-group" cluster and adding a resource "topic" with 6 queues.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/setup-cluster.sh localhost:2199 
+```
+
+Add consumers
+-------------
+Start 2 consumers in 2 different terminals. Each consumer is given a unique id.
+
+```
+# Usage: start-consumer.sh zookeeperAddress consumerId rabbitmqServer
+# e.g. start-consumer.sh localhost:2199 0 localhost
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 0 localhost
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 1 localhost
+
+```
+
+Start the Helix Controller
+--------------------------
+Now start a Helix controller to manage the "rabbitmq-consumer-group" cluster.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/start-cluster-manager.sh localhost:2199
+```
+
+Send messages to the Topic
+--------------------------
+
+Start sending messages to the topic. This script randomly selects a routing key (1-6) and sends the message to the topic.
+Based on the key, messages get routed to the appropriate queue.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 20
+```
+
+After running this, you should see all 20 messages being processed by 2 consumers. 
+
+Add another consumer
+--------------------
+Once a new consumer is started, Helix detects it. In order to balance the load among 3 consumers, it deallocates 1 partition from each of the existing consumers and allocates them to the new consumer. Each consumer now processes only 2 queues.
+Helix makes sure that the old nodes are asked to stop consuming before the new consumer is asked to start consuming for a given partition. The transitions for different partitions can, however, happen in parallel.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 2 localhost
+```
+
+Send messages again to the topic.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 100
+```
+
+You should see that messages are now received by all 3 consumers.
+
+Stop a consumer
+---------------
+In any consumer terminal, press CTRL-C and notice that Helix detects the consumer failure and distributes the 2 partitions that were being processed by the failed consumer to the remaining 2 active consumers.
+
+
+How does it work
+================
+
+Find the entire code [here](https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=tree;f=recipes/rabbitmq-consumer-group/src/main/java/org/apache/helix/recipes/rabbitmq). 
+ 
+Cluster setup
+-------------
+This step creates a znode on ZooKeeper for the cluster and adds the state model. We use the OnlineOffline state model, since there is no need for other states: the consumer is either processing a queue or it is not.
+
+It creates a resource called "rabbitmq-consumer-group" with 6 partitions. The execution mode is set to FULL_AUTO. This means that Helix controls the assignment of partitions to consumers and automatically distributes the partitions evenly among the active consumers. When a consumer is added or removed, it ensures that a minimum number of partitions is shuffled.
+
+```
+      zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
+          ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
+      ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
+      
+      // add cluster
+      admin.addCluster(clusterName, true);
+
+      // add state model definition
+      StateModelConfigGenerator generator = new StateModelConfigGenerator();
+      admin.addStateModelDef(clusterName, "OnlineOffline",
+          new StateModelDefinition(generator.generateConfigForOnlineOffline()));
+
+      // add resource "topic" which has 6 partitions
+      String resourceName = "rabbitmq-consumer-group";
+      admin.addResource(clusterName, resourceName, 6, "OnlineOffline", "FULL_AUTO");
+```
+
+Starting the consumers
+----------------------
+The only things a consumer needs to know are the ZooKeeper address, the cluster name, and its consumer id; nothing else.
+
+```
+_manager = HelixManagerFactory.getZKHelixManager(_clusterName,
+                                                 _consumerId,
+                                                 InstanceType.PARTICIPANT,
+                                                 _zkAddr);
+
+StateMachineEngine stateMach = _manager.getStateMachineEngine();
+ConsumerStateModelFactory modelFactory =
+    new ConsumerStateModelFactory(_consumerId, _mqServer);
+stateMach.registerStateModelFactory("OnlineOffline", modelFactory);
+
+_manager.connect();
+
+```
+
+Once the consumer has registered the state model and the controller is started, the consumer starts getting callbacks (onBecomeOnlineFromOffline) for the partitions it needs to host. All it needs to do as part of the callback is start consuming messages from the appropriate queue. Similarly, when the controller deallocates a partition from a consumer, it fires onBecomeOfflineFromOnline for the same partition.
+As part of this transition, the consumer stops consuming from that queue.
+
+```
+ @Transition(to = "ONLINE", from = "OFFLINE")
+  public void onBecomeOnlineFromOffline(Message message, NotificationContext context)
+  {
+    LOG.debug(_consumerId + " becomes ONLINE from OFFLINE for " + _partition);
+
+    if (_thread == null)
+    {
+      LOG.debug("Starting ConsumerThread for " + _partition + "...");
+      _thread = new ConsumerThread(_partition, _mqServer, _consumerId);
+      _thread.start();
+      LOG.debug("Starting ConsumerThread for " + _partition + " done");
+
+    }
+  }
+
+  @Transition(to = "OFFLINE", from = "ONLINE")
+  public void onBecomeOfflineFromOnline(Message message, NotificationContext context)
+      throws InterruptedException
+  {
+    LOG.debug(_consumerId + " becomes OFFLINE from ONLINE for " + _partition);
+
+    if (_thread != null)
+    {
+      LOG.debug("Stopping " + _consumerId + " for " + _partition + "...");
+
+      _thread.interrupt();
+      _thread.join(2000);
+      _thread = null;
+      LOG.debug("Stopping " +  _consumerId + " for " + _partition + " done");
+
+    }
+  }
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
new file mode 100644
index 0000000..f8a74a0
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/rsync_replicated_file_store.md
@@ -0,0 +1,165 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Near real time rsync replicated file system
+===========================================
+
+Quickdemo
+---------
+
+* This demo starts 3 instances with ids ```localhost_12001, localhost_12002, localhost_12003```
+* Each instance stores its files under ```/tmp/<id>/filestore```
+* ```localhost_12001``` is designated as the master, and ```localhost_12002``` and ```localhost_12003``` are the slaves.
+* Files written to the master are replicated to the slaves automatically. In this demo, a.txt and b.txt are written to ```/tmp/localhost_12001/filestore``` and get replicated to the other folders.
+* When the master is stopped, ```localhost_12002``` is promoted to master.
+* The other slave, ```localhost_12003```, stops replicating from ```localhost_12001``` and starts replicating from the new master, ```localhost_12002```
+* Files written to the new master ```localhost_12002``` are replicated to ```localhost_12003```
+* In the end state of this quick demo, ```localhost_12002``` is the master and ```localhost_12003``` is the slave. Manually create files under ```/tmp/localhost_12002/filestore``` and see that they appear in ```/tmp/localhost_12003/filestore```
+* Ignore the interrupted exceptions on the console :-).
+
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix/recipes/rsync-replicated-file-system/
+mvn clean install package -DskipTests
+cd target/rsync-replicated-file-system-pkg/bin
+chmod +x *
+./quickdemo
+
+```
+
+Overview
+--------
+
+There are many applications that require storage for a large number of relatively small data files. Examples include media stores for small videos, images, mail attachments, etc. Each of these objects is typically kilobytes in size, often no larger than a few megabytes. An additional distinguishing feature of these use cases is that files are typically only added or deleted, rarely updated. When there are updates, they are rare and have no concurrency requirements.
+
+These are much simpler requirements than what general-purpose distributed file systems have to satisfy, including concurrent access to files, random access for reads and updates, POSIX compliance, etc. To satisfy those requirements, general DFSs are also quite complex and expensive to build and maintain.
+ 
+A different implementation of a distributed file system is HDFS, which is inspired by Google's GFS. It is one of the most widely used distributed file systems and forms the main data storage platform for Hadoop. HDFS is primarily aimed at processing very large data sets and distributes files across a cluster of commodity servers by splitting up files into fixed-size chunks. HDFS is not particularly well suited for storing a very large number of relatively tiny files.
+
+### File Store
+
+It's possible to build a vastly simpler system for the class of applications that, as we have pointed out, have simpler requirements:
+
+* Large number of files but each file is relatively small.
+* Access is limited to create, delete and get entire files.
+* No updates to files that are already created (or it's feasible to delete the old file and create a new one).
+ 
+
+We call this system a Partitioned File Store (PFS) to distinguish it from other distributed file systems. This system needs to provide the following features:
+
+* CRD (create, read, delete) access to a large number of small files
+* Scalability: Files should be distributed across a large number of commodity servers based on the storage requirement.
+* Fault-tolerance: Each file should be replicated on multiple servers so that individual server failures do not reduce availability.
+* Elasticity: It should be possible to add capacity to the cluster easily.
+ 
+
+Apache Helix is a generic cluster management framework that makes it very easy to provide scalability, fault-tolerance, and elasticity.
+rsync can easily be used as a replication channel between servers, so that each file gets replicated to multiple servers.
+
+Design
+------
+
+High-level design:
+
+* Partition the file system based on the file name.
+* At any time, a single writer can write to a partition; we call this writer the master.
+* For redundancy, we keep additional replicas called slaves. Slaves can optionally serve reads.
+* Slaves replicate data from the master.
+* When a master fails, a slave gets promoted to master.
+
+### Transaction log
+
+Every write on the master results in the creation/deletion of one or more files. In order to maintain timeline consistency, slaves need to apply the changes in the same order.
+To facilitate this, the master logs each transaction in a file, and each transaction is associated with a 64-bit id in which the 32 LSBs represent a sequence number and the 32 MSBs represent the generation number.
+The sequence number is incremented on every transaction, and the generation is incremented when a new master is elected.
+
+### Replication
+
+Replication is required for slaves to keep up with the changes on the master. Every time the slave applies a change, it checkpoints the last applied transaction id.
+During restarts, this allows the slave to pull changes from the last checkpointed id. Similar to the master, the slave logs each transaction to its transaction log, but instead of generating a new transaction id, it uses the id generated by the master.
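+
+A checkpoint can be as simple as persisting the last applied id to a file in the slave's checkpoint directory; this is a minimal sketch with hypothetical names, not the recipe's exact code:
+
+```
+import java.nio.file.*;
+
+// Persist the last applied txn id so a restarted slave resumes from here
+static void checkpoint(Path checkPointDir, long txnId) throws java.io.IOException {
+  Files.write(checkPointDir.resolve("last_txn_id"), Long.toString(txnId).getBytes());
+}
+```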
+
+
+### Fail over
+
+When a master fails, a slave will be promoted to master. If the previous master node is reachable, then the new master will flush all the
+changes from the previous master before taking up mastership. The new master records the end transaction id of the current generation and then starts a new generation
+with the sequence starting from 1. After this, the master begins accepting writes.
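+
+Using the hypothetical helpers sketched earlier, the generation bump on failover could look like:
+
+```
+// Record where the old generation ended, then start a new generation at sequence 1
+long endOfOldGeneration = lastAppliedTxnId;
+int newGeneration = generationOf(lastAppliedTxnId) + 1;
+long firstTxnOfNewGeneration = makeTxnId(newGeneration, 1);
+```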
+
+
+![Partitioned File Store](../images/PFS-Generic.png)
+
+
+
+Rsync based solution
+-------------------
+
+![Rsync based File Store](../images/RSYNC_BASED_PFS.png)
+
+
+This application demonstrates a file store that uses rsync as the replication mechanism. One can envision a similar system that, instead of using rsync, implements a custom solution to notify the slaves of changes and provides an API to pull the changed files.
+
+#### Concept
+* file_store_dir: Root directory for the actual data files
+* change_log_dir: The transaction logs are generated under this folder.
+* check_point_dir: The slave stores the checkpoints (last processed transaction) here.
+
+#### Master
+* File server: This component supports file uploads and downloads and writes the files to ```file_store_dir```. It is not included in this application; the idea is that most applications implement this component differently, with their own business logic, and it is not hard to build such a component if needed.
+* File store watcher: This component watches the ```file_store_dir``` directory on the local file system for any changes and notifies the registered listeners of the changes.
+* Change log generator: This registers as a listener of the file store watcher and, on each notification, logs the changes into a file under ```change_log_dir```.
+
+#### Slave
+* File server: This component on the slave only supports reads.
+* Cluster state observer: The slave observes the cluster state and knows who the current master is.
+* Replicator: This has three subcomponents
+    - Periodic rsync of change log: This is a background process that periodically rsyncs the ```change_log_dir``` of the master to its local directory
+    - Change log watcher: This watches the ```change_log_dir``` for changes and notifies the registered listeners of the change
+    - On-demand rsync invoker: This is registered as a listener of the change log watcher and on every change invokes rsync to sync only the changed file.
+
+
+#### Coordination
+
+The coordination between nodes is done by Helix. Helix does the partition management and assigns partitions to multiple nodes based on the replication factor. It elects one of the nodes as master and designates the others as slaves.
+It provides notifications to each node in the form of state transitions (Offline to Slave, Slave to Master). It also provides notifications when there is a change in cluster state.
+This allows a slave to stop replicating from the current master and start replicating from the new master.
+
+In this application, we have only one partition, but it's very easy to extend it to support multiple partitions. By partitioning the file store, one can add new nodes, and Helix will automatically
+redistribute partitions among the nodes. To summarize, Helix provides partition management, fault tolerance, and facilitates automated cluster expansion.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/recipes/service_discovery.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/service_discovery.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/service_discovery.md
new file mode 100644
index 0000000..8e06ead
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/service_discovery.md
@@ -0,0 +1,191 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+Service Discovery
+-----------------
+
+One of the common usages of ZooKeeper is to enable service discovery.
+The basic idea is that when a server starts up, it advertises its configuration/metadata, such as host name and port, in ZooKeeper.
+This allows clients to dynamically discover the servers that are currently active. One can think of this as a service registry: a server registers when it starts and
+is automatically deregistered when it shuts down or crashes. In many cases it serves as an alternative to VIPs.
+
+The core idea behind this is to use ZooKeeper ephemeral nodes. An ephemeral node is created when the server registers, and all its metadata is put into the znode.
+When the server shuts down, ZooKeeper automatically removes the znode.
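+
+A minimal raw-ZooKeeper sketch of this registration (the path, metadata format, and host/port values are illustrative, and it assumes the parent path already exists):
+
+```
+import org.apache.zookeeper.*;
+
+ZooKeeper zk = new ZooKeeper("localhost:2199", 30000, event -> {});
+
+// Ephemeral: removed automatically by ZooKeeper when this session dies
+zk.create("/services/myServiceName/host.x.y.z_12000",
+    "{\"host\":\"host.x.y.z\",\"port\":12000}".getBytes(),
+    ZooDefs.Ids.OPEN_ACL_UNSAFE,
+    CreateMode.EPHEMERAL);
+```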
+
+There are two ways clients can dynamically discover the active servers:
+
+#### ZOOKEEPER WATCH
+
+Clients can set a child watch under a specific path on ZooKeeper.
+When a new service is registered/deregistered, ZooKeeper notifies the client via a watch event, and the client can read the list of services. Even though this looks trivial,
+there are a lot of things one needs to keep in mind, like ensuring that the watch is set back on ZooKeeper before reading the data from ZooKeeper.
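+
+Continuing the sketch above, ```getChildren``` with watch=true both re-registers the watch and returns the data in one call, which avoids reading before the watch is set:
+
+```
+// In the watch callback: re-set the watch and read the children atomically
+java.util.List<String> services = zk.getChildren("/services/myServiceName", true);
+```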
+
+
+#### POLL
+
+Another approach is for the client to periodically read the ZooKeeper path and get the list of services.
+
+
+Both approaches have pros and cons. For example, setting a watch might trigger a herd effect if there is a large number of clients; this is at its worst when servers are starting up.
+The good thing about setting a watch is that clients are notified of a change immediately, which is not true in the case of polling.
+In some cases, having both WATCH and POLL makes sense: WATCH allows one to get notifications as soon as possible, while POLL provides a safety net if a watch event is missed because of a code bug or because ZooKeeper fails to notify.
+
+##### Other important scenarios to take care of
+* What happens when the ZooKeeper session expires? All the watches and ephemeral nodes previously added/created by this server are lost.
+One needs to add the watches again, recreate the ephemeral nodes, etc. (see the sketch after this list).
+* Due to network issues or Java GC pauses, session expiry might happen again and again; this is also known as flapping. It's important for the server to detect this and deregister itself.
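+
+A hedged sketch of handling session expiry inside a Watcher (the re-registration helper is hypothetical):
+
+```
+// On expiry the old ZooKeeper handle is unusable: create a new client,
+// recreate the ephemeral nodes, and set the watches again
+public void process(WatchedEvent event) {
+  if (event.getState() == Watcher.Event.KeeperState.Expired) {
+    reconnectAndReregister(); // hypothetical helper
+  }
+}
+```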
+
+##### Other operational things to consider
+* What if a node is behaving badly? One might kill the server, but that loses the ability to debug it.
+It would be nice to be able to mark a server as disabled, so that clients know the node is disabled and will not contact it.
+ 
+#### Configuration ownership
+
+This is an important aspect that is often ignored in the initial stages of development. Commonly, the service discovery pattern means that servers start up with some configuration and then simply put their configuration/metadata in ZooKeeper. While this works well in the beginning,
+configuration management becomes very difficult, since the servers themselves are statically configured. Any change in server configuration implies restarting the server. Ideally, it would be nice to be able to change configuration dynamically without having to restart a server.
+
+Ideally you want a hybrid solution: a node starts with minimal configuration and gets the rest of its configuration from ZooKeeper.
+
+### How to use Helix to achieve this
+
+Even though Helix has higher-level abstractions in terms of state machines, constraints, and objectives,
+service discovery is one of the things that has existed since we started.
+The controller uses the exact mechanism described above to discover when new servers join the cluster.
+We create these znodes under /CLUSTERNAME/LIVEINSTANCES.
+Since at any time there is only one controller, we use a ZK watch to track the liveness of each server.
+
+This recipe simply demonstrates how one can reuse that part to implement service discovery. It demonstrates multiple modes of service discovery:
+
+* POLL: The client reads from ZooKeeper at regular intervals (30 seconds). Use this if you have hundreds of clients.
+* WATCH: The client sets up a watcher and gets notified of the changes. Use this if you have tens of clients.
+* NONE: This does neither of the above, but reads directly from ZooKeeper whenever needed.
+
+Helix provides these additional features compared to other implementations available elsewhere:
+
+* It has the concept of disabling a node, which means that a badly behaving node can be disabled using the Helix admin API (see the sketch after this list).
+* It automatically detects if a node connects/disconnects from ZooKeeper repeatedly and disables the node.
+* Configuration management
+    * Allows one to set configuration via the admin API at various granularities, like cluster, instance, resource, partition
+    * Configuration can be changed dynamically.
+    * Notifies the server when configuration changes.
+
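+For example, disabling a node via the Helix admin API could look like this (cluster and instance names are illustrative):
+
+```
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+admin.enableInstance("service-cluster", "host.x.y.z_12000", false); // disable the node
+```
+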
+
+##### checkout and build
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+cd recipes/service-discovery/target/service-discovery-pkg/bin
+chmod +x *
+```
+
+##### start zookeeper
+
+```
+./start-standalone-zookeeper 2199
+```
+
+#### Run the demo
+
+```
+./service-discovery-demo.sh
+```
+
+#### Output
+
+```
+START:Service discovery demo mode:WATCH
+	Registering service
+		host.x.y.z_12000
+		host.x.y.z_12001
+		host.x.y.z_12002
+		host.x.y.z_12003
+		host.x.y.z_12004
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Deregistering service:
+		host.x.y.z_12002
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Registering service:host.x.y.z_12002
+END:Service discovery demo mode:WATCH
+=============================================
+START:Service discovery demo mode:POLL
+	Registering service
+		host.x.y.z_12000
+		host.x.y.z_12001
+		host.x.y.z_12002
+		host.x.y.z_12003
+		host.x.y.z_12004
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Deregistering service:
+		host.x.y.z_12002
+	Sleeping for poll interval:30000
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Registering service:host.x.y.z_12002
+END:Service discovery demo mode:POLL
+=============================================
+START:Service discovery demo mode:NONE
+	Registering service
+		host.x.y.z_12000
+		host.x.y.z_12001
+		host.x.y.z_12002
+		host.x.y.z_12003
+		host.x.y.z_12004
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Deregistering service:
+		host.x.y.z_12000
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Registering service:host.x.y.z_12000
+END:Service discovery demo mode:NONE
+=============================================
+
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/recipes/task_dag_execution.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/recipes/task_dag_execution.md b/site-releases/0.6.2-incubating/src/site/markdown/recipes/task_dag_execution.md
new file mode 100644
index 0000000..f0474e4
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/recipes/task_dag_execution.md
@@ -0,0 +1,204 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Distributed task execution
+
+
+This recipe is intended to demonstrate how task dependencies can be modeled using primitives provided by Helix. A given task can be run with the desired parallelism and will start only when its upstream dependencies are met. The demo executes the task DAG described below using 10 workers. Although the demo starts the workers as threads, there is no requirement that all the workers run in the same process. In reality, these workers run on many different boxes in a cluster. When a worker fails, Helix takes care of
+reassigning the failed task partitions to a new worker.
+
+Redis is used as the result store. Any other suitable implementation of TaskResultStore can be plugged in.
+
+### Workflow 
+
+
+#### Input 
+
+10,000 impression events and around 100 click events are pre-populated in the task result store (Redis).
+
+* **ImpEvent**: format: id,isFraudulent,country,gender
+
+* **ClickEvent**: format: id,isFraudulent,impEventId
+
+#### Stages
+
++ **FilterImps**: Filters impressions where isFraudulent=true.
+
++ **FilterClicks**: Filters clicks where isFraudulent=true
+
++ **impCountsByGender**: Generates impression counts grouped by gender. It does this by incrementing the count for 'impression_gender_counts:<gender_value>' in the task result store (redis hash). Depends on: **FilterImps**
+
++ **impCountsByCountry**: Generates impression counts grouped by country. It does this by incrementing the count for 'impression_country_counts:<country_value>' in the task result store (redis hash). Depends on: **FilterImps**
+
++ **impClickJoin**: Joins clicks with corresponding impression event using impEventId as the join key. Join is needed to pull dimensions not present in click event. Depends on: **FilterImps, FilterClicks**
+
++ **clickCountsByGender**: Generates click counts grouped by gender. It does this by incrementing the count for click_gender_counts:<gender_value> in the task result store (redis hash). Depends on: **impClickJoin**
+
++ **clickCountsByCountry**: Generates click counts grouped by country. It does this by incrementing the count for click_country_counts:<country_value> in the task result store (redis hash). Depends on: **impClickJoin**
+
++ **report**: Reads all the aggregates generated by previous stages and prints them. Depends on: **impCountsByGender, impCountsByCountry, clickCountsByGender, clickCountsByCountry**
+
+
+### Creating DAG
+
+Each stage is represented as a Node, along with its upstream dependencies and desired parallelism. Each stage is modeled as a resource in Helix using the OnlineOffline state model. As part of the Offline to Online transition, we watch the external view of the upstream resources and wait for them to transition to the online state. See Task.java for additional info.
+
+```
+
+  Dag dag = new Dag();
+  dag.addNode(new Node("filterImps", 10, ""));
+  dag.addNode(new Node("filterClicks", 5, ""));
+  dag.addNode(new Node("impClickJoin", 10, "filterImps,filterClicks"));
+  dag.addNode(new Node("impCountsByGender", 10, "filterImps"));
+  dag.addNode(new Node("impCountsByCountry", 10, "filterImps"));
+  dag.addNode(new Node("clickCountsByGender", 5, "impClickJoin"));
+  dag.addNode(new Node("clickCountsByCountry", 5, "impClickJoin"));
+  dag.addNode(new Node("report", 1, "impCountsByGender,impCountsByCountry,clickCountsByGender,clickCountsByCountry"));
+
+
+```
+
+### DEMO
+
+In order to run the demo, use the following steps.
+
+See http://redis.io/topics/quickstart for how to install the Redis server.
+
+```
+# start redis, e.g.:
+./redis-server --port 6379
+
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix/recipes/task-execution
+mvn clean install package -DskipTests
+cd target/task-execution-pkg/bin
+chmod +x task-execution-demo.sh
+./task-execution-demo.sh 2181 localhost 6379
+```
+
+```
+
+
+
+
+
+                       +-----------------+       +----------------+
+                       |   filterImps    |       |  filterClicks  |
+                       | (parallelism=10)|       | (parallelism=5)|
+                       +----------+-----++       +-------+--------+
+                       |          |     |                |
+                       |          |     |                |
+                       |          |     |                |
+                       |          |     +------->--------v------------+
+      +--------------<-+   +------v-------+    |  impClickJoin        |
+      |impCountsByGender   |impCountsByCountry | (parallelism=10)     |
+      |(parallelism=10)    |(parallelism=10)   ++-------------------+-+
+      +-----------+--+     +---+----------+     |                   |
+                  |            |                |                   |
+                  |            |                |                   |
+                  |            |       +--------v---------+       +-v-------------------+
+                  |            |       |clickCountsByGender       |clickCountsByCountry |
+                  |            |       |(parallelism=5)   |       |(parallelism=5)      |
+                  |            |       +----+-------------+       +---------------------+
+                  |            |            |                     |
+                  |            |            |                     |
+                  |            |            |                     |
+                  +----->+-----+>-----------v----+<---------------+
+                         | report                |
+                         |(parallelism=1)        |
+                         +-----------------------+
+
+```
+
+(credit for above ascii art: http://www.asciiflow.com)
+
+### OUTPUT
+
+```
+Done populating dummy data
+Executing filter task for filterImps_3 for impressions_demo
+Executing filter task for filterImps_2 for impressions_demo
+Executing filter task for filterImps_0 for impressions_demo
+Executing filter task for filterImps_1 for impressions_demo
+Executing filter task for filterImps_4 for impressions_demo
+Executing filter task for filterClicks_3 for clicks_demo
+Executing filter task for filterClicks_1 for clicks_demo
+Executing filter task for filterImps_8 for impressions_demo
+Executing filter task for filterImps_6 for impressions_demo
+Executing filter task for filterClicks_2 for clicks_demo
+Executing filter task for filterClicks_0 for clicks_demo
+Executing filter task for filterImps_7 for impressions_demo
+Executing filter task for filterImps_5 for impressions_demo
+Executing filter task for filterClicks_4 for clicks_demo
+Executing filter task for filterImps_9 for impressions_demo
+Running AggTask for impCountsByGender_3 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_2 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_0 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_9 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_1 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_4 for filtered_impressions_demo gender
+Running AggTask for impCountsByCountry_4 for filtered_impressions_demo country
+Running AggTask for impCountsByGender_5 for filtered_impressions_demo gender
+Executing JoinTask for impClickJoin_2
+Running AggTask for impCountsByCountry_3 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_1 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_0 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_2 for filtered_impressions_demo country
+Running AggTask for impCountsByGender_6 for filtered_impressions_demo gender
+Executing JoinTask for impClickJoin_1
+Executing JoinTask for impClickJoin_0
+Executing JoinTask for impClickJoin_3
+Running AggTask for impCountsByGender_8 for filtered_impressions_demo gender
+Executing JoinTask for impClickJoin_4
+Running AggTask for impCountsByGender_7 for filtered_impressions_demo gender
+Running AggTask for impCountsByCountry_5 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_6 for filtered_impressions_demo country
+Executing JoinTask for impClickJoin_9
+Running AggTask for impCountsByCountry_8 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_7 for filtered_impressions_demo country
+Executing JoinTask for impClickJoin_5
+Executing JoinTask for impClickJoin_6
+Running AggTask for impCountsByCountry_9 for filtered_impressions_demo country
+Executing JoinTask for impClickJoin_8
+Executing JoinTask for impClickJoin_7
+Running AggTask for clickCountsByCountry_1 for joined_clicks_demo country
+Running AggTask for clickCountsByCountry_0 for joined_clicks_demo country
+Running AggTask for clickCountsByCountry_2 for joined_clicks_demo country
+Running AggTask for clickCountsByCountry_3 for joined_clicks_demo country
+Running AggTask for clickCountsByGender_1 for joined_clicks_demo gender
+Running AggTask for clickCountsByCountry_4 for joined_clicks_demo country
+Running AggTask for clickCountsByGender_3 for joined_clicks_demo gender
+Running AggTask for clickCountsByGender_2 for joined_clicks_demo gender
+Running AggTask for clickCountsByGender_4 for joined_clicks_demo gender
+Running AggTask for clickCountsByGender_0 for joined_clicks_demo gender
+Running reports task
+Impression counts per country
+{CANADA=1940, US=1958, CHINA=2014, UNKNOWN=2022, UK=1946}
+Click counts per country
+{US=24, CANADA=14, CHINA=26, UNKNOWN=14, UK=22}
+Impression counts per gender
+{F=3325, UNKNOWN=3259, M=3296}
+Click counts per gender
+{F=33, UNKNOWN=32, M=35}
+
+
+```
+


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/Tutorial.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/Tutorial.md b/site-releases/trunk/src/site/markdown/Tutorial.md
new file mode 100644
index 0000000..ee5a393
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/Tutorial.md
@@ -0,0 +1,284 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial</title>
+</head>
+
+# Helix Tutorial
+
+In this tutorial, we will cover the roles of a Helix-managed cluster, and show the code you need to write to integrate with it.  In many cases, there is a simple default behavior that is often appropriate, but you can also customize the behavior.
+
+Convention: we first cover the _basic_ approach, which is the easiest to implement.  Then, we'll describe _advanced_ options, which give you more control over the system behavior, but require you to write more code.
+
+
+### Prerequisites
+
+1. Read [Concepts/Terminology](./Concepts.html) and [Architecture](./Architecture.html)
+2. Read the [Quickstart guide](./Quickstart.html) to learn how Helix models and manages a cluster
+3. Install Helix source.  See: [Quickstart](./Quickstart.html) for the steps.
+
+### Tutorial Outline
+
+1. [Participant](./tutorial_participant.html)
+2. [Spectator](./tutorial_spectator.html)
+3. [Controller](./tutorial_controller.html)
+4. [Rebalancing Algorithms](./tutorial_rebalance.html)
+5. [User-Defined Rebalancing](./tutorial_user_def_rebalancer.html)
+6. [State Machines](./tutorial_state.html)
+7. [Messaging](./tutorial_messaging.html)
+8. [Customized health check](./tutorial_health.html)
+9. [Throttling](./tutorial_throttling.html)
+10. [Application Property Store](./tutorial_propstore.html)
+11. [Logical Accessors](./tutorial_accessors.html)
+12. [Admin Interface](./tutorial_admin.html)
+13. [YAML Cluster Setup](./tutorial_yaml.html)
+
+### Preliminaries
+
+First, we need to set up the system.  Let\'s walk through the steps in building a distributed system using Helix. We will show how to do this using both the Java admin interface, as well as the [cluster accessor](./tutorial_accessors.html) interface. You can choose either interface depending on which most closely matches your needs.
+
+### Start Zookeeper
+
+This starts ZooKeeper in standalone mode. For production deployment, see [Apache ZooKeeper](http://zookeeper.apache.org) for instructions.
+
+```
+    ./start-standalone-zookeeper.sh 2199 &
+```
+
+### Create a cluster
+
+Creating a cluster defines the cluster in the appropriate znodes on ZooKeeper.
+
+Using the Java accessor API:
+
+```
+// Note: ZK_ADDRESS is the host:port of Zookeeper
+String ZK_ADDRESS = "localhost:2199";
+HelixConnection connection = new ZKHelixConnection(ZK_ADDRESS);
+
+ClusterId clusterId = ClusterId.from("helix-demo");
+ClusterAccessor clusterAccessor = connection.createClusterAccessor(clusterId);
+ClusterConfig clusterConfig = new ClusterConfig.Builder(clusterId).build();
+clusterAccessor.createCluster(clusterConfig);
+```
+
+OR
+
+Using the HelixAdmin Java interface:
+
+```
+// Create setup tool instance
+// Note: ZK_ADDRESS is the host:port of Zookeeper
+String ZK_ADDRESS = "localhost:2199";
+HelixAdmin admin = new ZKHelixAdmin(ZK_ADDRESS);
+
+String CLUSTER_NAME = "helix-demo";
+//Create cluster namespace in zookeeper
+admin.addCluster(CLUSTER_NAME);
+```
+
+OR
+
+Using the command-line interface:
+
+```
+    ./helix-admin.sh --zkSvr localhost:2199 --addCluster helix-demo 
+```
+
+
+### Configure the nodes of the cluster
+
+First we\'ll add new nodes to the cluster, then configure the nodes in the cluster. Each node in the cluster must be uniquely identifiable. 
+The most commonly used convention is hostname_port.
+
+```
+int NUM_NODES = 2;
+String hosts[] = new String[]{"localhost","localhost"};
+int ports[] = new int[]{7000,7001};
+for (int i = 0; i < NUM_NODES; i++)
+{
+  ParticipantId participantId = ParticipantId.from(hosts[i] + "_" + ports[i]);
+
+  // set additional configuration for the participant; these can be accessed during node start up
+  UserConfig userConfig = new UserConfig(Scope.participant(participantId));
+  userConfig.setSimpleField("key", "value");
+
+  // configure and add the participant
+  ParticipantConfig participantConfig = new ParticipantConfig.Builder(participantId)
+      .hostName(hosts[i]).port(ports[i]).enabled(true).userConfig(userConfig).build();
+  clusterAccessor.addParticipantToCluster(participantConfig);
+}
+```
+
+OR
+
+Using the HelixAdmin Java interface:
+
+```
+String CLUSTER_NAME = "helix-demo";
+int NUM_NODES = 2;
+String hosts[] = new String[]{"localhost","localhost"};
+String ports[] = new String[]{"7000","7001"};
+for (int i = 0; i < NUM_NODES; i++)
+{
+  InstanceConfig instanceConfig = new InstanceConfig(hosts[i] + "_" + ports[i]);
+  instanceConfig.setHostName(hosts[i]);
+  instanceConfig.setPort(ports[i]);
+  instanceConfig.setInstanceEnabled(true);
+
+  //Add additional system specific configuration if needed. These can be accessed during the node start up.
+  instanceConfig.getRecord().setSimpleField("key", "value");
+  admin.addInstance(CLUSTER_NAME, instanceConfig);
+}
+```
+
+### Configure the resource
+
+A _resource_ represents the actual task performed by the nodes. It can be a database, index, topic, queue, or any other processing entity.
+A _resource_ can be divided into many sub-parts known as _partitions_.
+
+
+#### Define the _state model_ and _constraints_
+
+For scalability and fault tolerance, each partition can have one or more replicas. 
+The _state model_ allows one to declare the system behavior by first enumerating the various STATES, and the TRANSITIONS between them.
+A simple model is ONLINE-OFFLINE where ONLINE means the task is active and OFFLINE means it\'s not active.
+You can also specify how many replicas must be in each state; these are known as _constraints_.
+For example, in a search system, one might need more than one node serving the same index to handle the load.
+
+The allowed states: 
+
+* MASTER
+* SLAVE
+* OFFLINE
+
+The allowed transitions: 
+
+* OFFLINE to SLAVE
+* SLAVE to OFFLINE
+* SLAVE to MASTER
+* MASTER to SLAVE
+
+The constraints:
+
+* no more than 1 MASTER per partition
+* the rest of the replicas should be slaves
+
+The following snippet shows how to declare the _state model_ and _constraints_ for the MASTER-SLAVE model.
+
+```
+StateModelDefinition.Builder builder = new StateModelDefinition.Builder(STATE_MODEL_NAME);
+
+// Add states and their rank to indicate priority. A lower rank corresponds to a higher priority
+builder.addState(MASTER, 1);
+builder.addState(SLAVE, 2);
+builder.addState(OFFLINE);
+
+// Set the initial state when the node starts
+builder.initialState(OFFLINE);
+
+// Add transitions between the states.
+builder.addTransition(OFFLINE, SLAVE);
+builder.addTransition(SLAVE, OFFLINE);
+builder.addTransition(SLAVE, MASTER);
+builder.addTransition(MASTER, SLAVE);
+
+// set constraints on states.
+
+// static constraint: upper bound of 1 MASTER
+builder.upperBound(MASTER, 1);
+
+// dynamic constraint: R means it should be derived based on the replication factor for the cluster
+// this allows a different replication factor for each resource without 
+// having to define a new state model
+//
+builder.dynamicUpperBound(SLAVE, "R");
+StateModelDefinition stateModelDefinition = builder.build();
+```
+
+Then, add the state model definition:
+
+```
+clusterAccessor.addStateModelDefinitionToCluster(stateModelDefinition);
+```
+
+OR
+
+```
+admin.addStateModelDef(CLUSTER_NAME, STATE_MODEL_NAME, stateModelDefinition);
+```
+
+#### Assigning partitions to nodes
+
+The final goal of Helix is to ensure that the constraints on the state model are satisfied. 
+Helix does this by assigning a STATE to a partition (such as MASTER, SLAVE), and placing it on a particular node.
+
+There are 3 assignment modes Helix can operate in:
+
+* FULL_AUTO: Helix decides the placement and state of a partition.
+* SEMI_AUTO: Application decides the placement but Helix decides the state of a partition.
+* CUSTOMIZED: Application controls the placement and state of a partition.
+
+For more info on the assignment modes, see [Rebalancing Algorithms](./tutorial_rebalance.html) section of the tutorial.
+
+Here is an example of adding the resource in SEMI_AUTO mode (i.e. locations of partitions are specified a priori):
+
+```
+int NUM_PARTITIONS = 6;
+int NUM_REPLICAS = 2;
+ResourceId resourceId = ResourceId.from("MyDB");
+
+SemiAutoRebalancerContext context = new SemiAutoRebalancerContext.Builder(resourceId)
+  .replicaCount(NUM_REPLICAS).addPartitions(NUM_PARTITIONS)
+  .stateModelDefId(stateModelDefinition.getStateModelDefId())
+  .addPreferenceList(partition1Id, preferenceList) // preferred locations of each partition
+  // add other preference lists per partition
+  .build();
+
+// or add all preference lists at once if desired (map of PartitionId to List of ParticipantId)
+context.setPreferenceLists(preferenceLists);
+
+// or generate a default set of preference lists given the set of all participants
+context.generateDefaultConfiguration(stateModelDefinition, participantIdSet);
+```
+
+OR
+
+```
+String RESOURCE_NAME = "MyDB";
+int NUM_PARTITIONS = 6;
+String MODE = "SEMI_AUTO";
+int NUM_REPLICAS = 2;
+
+admin.addResource(CLUSTER_NAME, RESOURCE_NAME, NUM_PARTITIONS, STATE_MODEL_NAME, MODE);
+
+// specify the preference lists yourself
+IdealState idealState = admin.getResourceIdealState(CLUSTER_NAME, RESOURCE_NAME);
+idealState.setPreferenceList(partitionId, preferenceList); // preferred locations of each partition
+// add other preference lists per partition
+
+// or add all preference lists at once if desired
+idealState.getRecord().setListFields(preferenceLists);
+admin.setResourceIdealState(CLUSTER_NAME, RESOURCE_NAME, idealState);
+
+// or generate a default set of preference lists 
+admin.rebalance(CLUSTER_NAME, RESOURCE_NAME, NUM_REPLICAS);
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/UseCases.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/UseCases.md b/site-releases/trunk/src/site/markdown/UseCases.md
new file mode 100644
index 0000000..001b012
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/UseCases.md
@@ -0,0 +1,113 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Use Cases</title>
+</head>
+
+
+# Use cases at LinkedIn
+
+At LinkedIn, the Helix framework is used to manage 3 distributed data systems that are quite different from each other:
+
+* Espresso
+* Databus
+* Search As A Service
+
+## Espresso
+
+Espresso is a distributed, timeline-consistent, scalable document store that supports local secondary indexing and local transactions.
+Espresso databases are horizontally partitioned into a number of partitions, with each partition having a certain number of replicas
+distributed across the storage nodes.
+Espresso designates one replica of each partition as master and the rest as slaves; only one master may exist for each partition at any time.
+Espresso enforces timeline consistency, where only the master of a partition can accept writes to its records, and all slaves receive and
+apply the same writes through a replication stream.
+For load balancing, both master and slave partitions are assigned evenly across all storage nodes.
+For fault tolerance, it adds the constraint that no two replicas of the same partition may be located on the same node.
+
+### State model
+Espresso follows a Master-Slave state model. A replica can be in the Offline, Slave, or Master state.
+The state machine table describes the next state given the current state (row) and the final state (column):
+
+```
+          OFFLINE  | SLAVE  |  MASTER  
+         _____________________________
+        |          |        |         |
+OFFLINE |   N/A    | SLAVE  | SLAVE   |
+        |__________|________|_________|
+        |          |        |         |
+SLAVE   |  OFFLINE |   N/A  | MASTER  |
+        |__________|________|_________|
+        |          |        |         |
+MASTER  | SLAVE    | SLAVE  |   N/A   |
+        |__________|________|_________|
+
+```
+
+### Constraints
+* Max number of replicas in Master state: 1
+* Execution mode AUTO, i.e., on node failure no new replicas will be created; only the state of the remaining replicas will be changed.
+* The number of mastered partitions on each node must be approximately the same.
+* The above constraints must be satisfied when a node fails or a new node is added.
+* When new nodes are added, the number of partitions moved must be minimized.
+* When new nodes are added, the max number of OFFLINE-SLAVE transitions that can happen concurrently on a new node is X.
+
+## Databus
+
+Databus is a change data capture (CDC) system that provides a common pipeline for transporting events 
+from LinkedIn primary databases to caches within various applications.
+Databus deploys a cluster of relays that pull the change log from multiple databases and 
+let consumers subscribe to the change log stream. Each Databus relay connects to one or more database servers and 
+hosts a certain subset of databases (and partitions) from those database servers. 
+
+For a large partitioned database (e.g. Espresso), the change log is consumed by a bank of consumers. 
+Each databus partition is assigned to a consumer such that partitions are evenly distributed across consumers and each partition is
+assigned to exactly one consumer at a time. The set of consumers may grow over time, and consumers may leave the group due to planned or unplanned 
+outages. In these cases, partitions must be reassigned, while maintaining balance and the single consumer-per-partition invariant.
+
+### State model
+Databus consumers follow a simple Offline-Online state model.
+The state machine table describes the next state given the current state (row) and the final state (column):
+
+```
+          OFFLINE  | ONLINE |
+         ___________________|
+        |          |        |
+OFFLINE |   N/A    | ONLINE |
+        |__________|________|
+        |          |        |
+ONLINE  |  OFFLINE |   N/A  |
+        |__________|________|
+```
+
+
+## Search As A Service
+
+LinkedIn's Search-as-a-Service lets internal customers define custom indexes on a chosen dataset
+and then makes those indexes searchable via a service API. The index service runs on a cluster of machines. 
+The index is broken into partitions and each partition has a configured number of replicas.
+Each cluster server runs an instance of the Sensei system (an online index store) and hosts index partitions. 
+Each new indexing service gets assigned to a set of servers, and the partition replicas must be evenly distributed across those servers.
+
+### State model
+![Helix Design](images/bootstrap_statemodel.gif) 
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/index.md b/site-releases/trunk/src/site/markdown/index.md
new file mode 100644
index 0000000..2eae374
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/index.md
@@ -0,0 +1,56 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Home</title>
+</head>
+
+Navigating the Documentation
+----------------------------
+
+### Conceptual Understanding
+
+[Concepts / Terminology](./Concepts.html)
+
+[Architecture](./Architecture.html)
+
+### Hands-on Helix
+
+[Getting Helix](./Building.html)
+
+[Quickstart](./Quickstart.html)
+
+[Tutorial](./Tutorial.html)
+
+[Javadocs](http://helix.incubator.apache.org/apidocs)
+
+### Recipes
+
+[Distributed lock manager](./recipes/lock_manager.html)
+
+[Rabbit MQ consumer group](./recipes/rabbitmq_consumer_group.html)
+
+[Rsync replicated file store](./recipes/rsync_replicated_file_store.html)
+
+[Service discovery](./recipes/service_discovery.html)
+
+[Distributed Task DAG Execution](./recipes/task_dag_execution.html)
+
+[User-Defined Rebalancer Example](./recipes/user_def_rebalancer.html)
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/recipes/lock_manager.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/lock_manager.md b/site-releases/trunk/src/site/markdown/recipes/lock_manager.md
new file mode 100644
index 0000000..252ace7
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/recipes/lock_manager.md
@@ -0,0 +1,253 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+Distributed lock manager
+------------------------
+Distributed locks are used to synchronize access to shared resources. Most applications use ZooKeeper to model distributed locks.
+
+The simplest way to model a lock using ZooKeeper is as follows (a minimal sketch appears after this list; see the ZooKeeper leader election recipe for an exact and more advanced solution):
+
+* Each process tries to create an ephemeral znode.
+* If it can create the znode successfully, it acquires the lock.
+* Otherwise it sets a watch on the znode and tries to acquire the lock again when the current lock holder disappears.
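+
+Here is a minimal sketch of this naive approach using the raw ZooKeeper client. The class and path names are illustrative and error handling is elided; this is the pattern being described, not the recipe's code.
+
+```
+import org.apache.zookeeper.*;
+
+public class NaiveLock {
+  private final ZooKeeper zk;
+  private final String lockPath; // e.g. "/locks/my-lock"
+
+  public NaiveLock(ZooKeeper zk, String lockPath) {
+    this.zk = zk;
+    this.lockPath = lockPath;
+  }
+
+  // Returns true if the lock was acquired; otherwise sets a watch and returns false.
+  public boolean tryLock() throws KeeperException, InterruptedException {
+    try {
+      // An ephemeral znode disappears automatically if this session dies
+      zk.create(lockPath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+      return true;
+    } catch (KeeperException.NodeExistsException e) {
+      // Lock is held by someone else; watch it and retry once the holder goes away
+      zk.exists(lockPath, new Watcher() {
+        public void process(WatchedEvent event) {
+          if (event.getType() == Event.EventType.NodeDeleted) {
+            try { tryLock(); } catch (Exception ignored) { }
+          }
+        }
+      });
+      return false;
+    }
+  }
+}
+```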
+
+This is good enough if there is only one lock. But in practice, an application will need many such locks, and distributing and managing the locks among different processes becomes challenging. Extending such a solution to many locks will result in:
+
+* Uneven distribution of locks among nodes; the node that starts first will acquire all the locks, and nodes that start later will be idle.
+* When a node fails, how the locks will be distributed among the remaining nodes is not predictable.
+* When new nodes are added, the current nodes do not relinquish any locks, so the new nodes cannot acquire any.
+
+In other words, we want a system that satisfies the following requirements:
+
+* Distribute locks evenly among all nodes for better hardware utilization.
+* If a node fails, the locks that were acquired by that node should be evenly distributed among the other nodes.
+* If nodes are added, locks must be evenly re-distributed among the nodes.
+
+Helix provides a simple and elegant solution to this problem: simply specify the number of locks, and Helix will ensure that the above constraints are satisfied.
+
+To quickly see this in action, run the lock-manager-demo script, where 12 locks are evenly distributed among three nodes, and when a node fails, its locks get re-distributed among the remaining two nodes. Note that Helix does not re-shuffle the locks completely; instead, it simply distributes the locks relinquished by the dead node evenly among the 2 remaining nodes.
+
+----------------------------------------------------------------------------------------
+
+#### Short version
+This version starts multiple threads within the same process to simulate a multi-node deployment. Try the long version to get a better idea of how it works.
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+cd recipes/distributed-lock-manager/target/distributed-lock-manager-pkg/bin
+chmod +x *
+./lock-manager-demo
+```
+
+##### Output
+
+```
+./lock-manager-demo 
+STARTING localhost_12000
+STARTING localhost_12002
+STARTING localhost_12001
+STARTED localhost_12000
+STARTED localhost_12002
+STARTED localhost_12001
+localhost_12001 acquired lock:lock-group_3
+localhost_12000 acquired lock:lock-group_8
+localhost_12001 acquired lock:lock-group_2
+localhost_12001 acquired lock:lock-group_4
+localhost_12002 acquired lock:lock-group_1
+localhost_12002 acquired lock:lock-group_10
+localhost_12000 acquired lock:lock-group_7
+localhost_12001 acquired lock:lock-group_5
+localhost_12002 acquired lock:lock-group_11
+localhost_12000 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_0
+localhost_12000 acquired lock:lock-group_9
+lockName    acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10    localhost_12002
+lock-group_11    localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12000
+lock-group_7    localhost_12000
+lock-group_8    localhost_12000
+lock-group_9    localhost_12000
+Stopping localhost_12000
+localhost_12000 Interrupted
+localhost_12001 acquired lock:lock-group_9
+localhost_12001 acquired lock:lock-group_8
+localhost_12002 acquired lock:lock-group_6
+localhost_12002 acquired lock:lock-group_7
+lockName    acquired By
+======================================
+lock-group_0    localhost_12002
+lock-group_1    localhost_12002
+lock-group_10    localhost_12002
+lock-group_11    localhost_12002
+lock-group_2    localhost_12001
+lock-group_3    localhost_12001
+lock-group_4    localhost_12001
+lock-group_5    localhost_12001
+lock-group_6    localhost_12002
+lock-group_7    localhost_12002
+lock-group_8    localhost_12001
+lock-group_9    localhost_12001
+
+```
+
+----------------------------------------------------------------------------------------
+
+#### Long version
+This provides more details on how to set up the cluster and where to plug in application code.
+
+##### start zookeeper
+
+```
+./start-standalone-zookeeper 2199
+```
+
+##### Create a cluster
+
+```
+./helix-admin --zkSvr localhost:2199 --addCluster lock-manager-demo
+```
+
+##### Create a lock group
+
+Create a lock group and specify the number of locks in the lock group. 
+
+```
+./helix-admin --zkSvr localhost:2199  --addResource lock-manager-demo lock-group 6 OnlineOffline FULL_AUTO
+```
+
+##### Start the nodes
+
+Create a Lock class that handles the callbacks. 
+
+```
+
+public class Lock extends StateModel
+{
+  private String lockName;
+
+  public Lock(String lockName)
+  {
+    this.lockName = lockName;
+  }
+
+  // invoked by Helix when this node is granted the lock (transition into the locked state)
+  public void lock(Message m, NotificationContext context)
+  {
+    System.out.println(" acquired lock:"+ lockName );
+  }
+
+  // invoked by Helix when this node must relinquish the lock
+  public void release(Message m, NotificationContext context)
+  {
+    System.out.println(" releasing lock:"+ lockName );
+  }
+
+}
+
+```
+
+A LockFactory that creates Lock instances:
+ 
+```
+public class LockFactory extends StateModelFactory<Lock> {
+
+  /* Instantiates the lock handler, one per lockName */
+  @Override
+  public Lock createNewStateModel(String lockName)
+  {
+    return new Lock(lockName);
+  }
+}
+```
+
+At node start-up, simply join the cluster and Helix will invoke the appropriate callbacks on the Lock instances. One can start any number of nodes; Helix detects that a new node has joined the cluster and re-distributes the locks automatically.
+
+```
+public class LockProcess {
+
+  public static void main(String[] args) throws Exception {
+    String zkAddress = "localhost:2199";
+    String clusterName = "lock-manager-demo";
+    // Give a unique id to each process; the most commonly used format is hostname_port
+    String instanceName = "localhost_12000";
+    ZKHelixAdmin helixAdmin = new ZKHelixAdmin(zkAddress);
+    // configure the instance and provide some metadata
+    InstanceConfig config = new InstanceConfig(instanceName);
+    config.setHostName("localhost");
+    config.setPort("12000");
+    helixAdmin.addInstance(clusterName, config);
+    // join the cluster
+    HelixManager manager;
+    manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                    instanceName,
+                                                    InstanceType.PARTICIPANT,
+                                                    zkAddress);
+    // register the factory that instantiates a Lock per partition
+    LockFactory modelFactory = new LockFactory();
+    manager.getStateMachineEngine().registerStateModelFactory("OnlineOffline", modelFactory);
+    manager.connect();
+    Thread.currentThread().join();
+  }
+}
+```
+
+##### Start the controller
+
+The controller can be started either as a separate process or embedded within each node process.
+
+###### Separate process
+This is recommended when the number of nodes in the cluster is greater than 100. For fault tolerance, you can run multiple controllers on different boxes.
+
+```
+./run-helix-controller --zkSvr localhost:2199 --cluster lock-manager-demo 2>&1 > /tmp/controller.log &
+```
+
+###### Embedded within the node process
+This is recommended when the number of nodes in the cluster is less than 100. To start a controller from each process, simply add the following lines to LockProcess:
+
+```
+public class LockProcess {
+
+  public static void main(String[] args) throws Exception {
+    String zkAddress = "localhost:2199";
+    String clusterName = "lock-manager-demo";
+    .
+    .
+    manager.connect();
+    // start an embedded controller; only one controller will be active at any time
+    HelixManager controller;
+    controller = HelixControllerMain.startHelixController(zkAddress,
+                                                          clusterName,
+                                                          "controller",
+                                                          HelixControllerMain.STANDALONE);
+    Thread.currentThread().join();
+  }
+}
+```
+
+----------------------------------------------------------------------------------------
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/recipes/rabbitmq_consumer_group.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/rabbitmq_consumer_group.md b/site-releases/trunk/src/site/markdown/recipes/rabbitmq_consumer_group.md
new file mode 100644
index 0000000..9edc2cb
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/recipes/rabbitmq_consumer_group.md
@@ -0,0 +1,227 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+RabbitMQ Consumer Group
+=======================
+
+[RabbitMQ](http://www.rabbitmq.com/) is well-known open source software that provides robust messaging for applications.
+
+One of the commonly implemented recipes using this software is a work queue.  http://www.rabbitmq.com/tutorials/tutorial-four-java.html describes the use case where
+
+* A producer sends a message with a routing key.
+* The message is routed to the queue whose binding key exactly matches the routing key of the message.
+* There are multiple consumers, and each consumer is interested in processing only a subset of the messages, which it selects by binding to the keys it is interested in.
+
+The example provided [here](http://www.rabbitmq.com/tutorials/tutorial-four-java.html) describes how multiple consumers can be started to process all the messages.
+
+While this works, in production systems one needs the following:
+
+* Ability to handle failures: when a consumer fails, another consumer must be started, or the other consumers must start processing the messages that should have been processed by the failed consumer.
+* When the existing consumers cannot keep up with the task generation rate, new consumers will be added. The tasks must then be redistributed among all the consumers.
+
+In this recipe, we demonstrate handling of consumer failures and new consumer additions using Helix.
+
+Mapping this use case to Helix is easy, as the binding key/routing key is equivalent to a partition.
+
+Let's take an example. Say the queue has 6 partitions, and we have 2 consumers to process all the queues.
+What we want is for all 6 queues to be evenly divided between the 2 consumers, i.e. 3 queues each.
+Eventually, when the system scales, we add more consumers to keep up; with a third consumer, each consumer processes tasks from only 2 queues.
+Now suppose a consumer fails, reducing the number of active consumers to 2. Each consumer must then go back to processing 3 queues.
+
+We showcase how such a dynamic application can be developed using Helix. Even though we use RabbitMQ as the pub/sub system, one can extend this solution to other pub/sub systems.
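+
+As a concrete illustration of the partition-to-routing-key mapping, here is a hypothetical sketch of how a consumer could bind to the queue for the partition Helix has assigned it, using the RabbitMQ Java client. The exchange name and the use of the partition id as the binding key are assumptions for illustration; the recipe's actual code is shown further below.
+
+```
+import com.rabbitmq.client.*;
+
+public class PartitionConsumer {
+  public static void consumePartition(String mqServer, final String partition) throws Exception {
+    ConnectionFactory factory = new ConnectionFactory();
+    factory.setHost(mqServer);
+    Connection connection = factory.newConnection();
+    Channel channel = connection.createChannel();
+
+    // direct exchange: messages are routed to queues whose binding key matches
+    channel.exchangeDeclare("topic_exchange", "direct");
+    // one queue per partition; the binding key is the partition id
+    String queueName = channel.queueDeclare().getQueue();
+    channel.queueBind(queueName, "topic_exchange", partition);
+
+    channel.basicConsume(queueName, true, new DefaultConsumer(channel) {
+      @Override
+      public void handleDelivery(String consumerTag, Envelope envelope,
+          AMQP.BasicProperties properties, byte[] body) {
+        System.out.println("partition " + partition + " got: " + new String(body));
+      }
+    });
+  }
+}
+```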
+
+Try it
+======
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+export HELIX_PKG_ROOT=`pwd`/helix-core/target/helix-core-pkg
+export HELIX_RABBITMQ_ROOT=`pwd`/recipes/rabbitmq-consumer-group
+chmod +x $HELIX_PKG_ROOT/bin/*
+chmod +x $HELIX_RABBITMQ_ROOT/bin/*
+```
+
+
+Install RabbitMQ
+----------------
+
+Setting up RabbitMQ on a local box is straightforward. You can find the instructions here
+http://www.rabbitmq.com/download.html
+
+Start ZK
+--------
+Start zookeeper at port 2199
+
+```
+$HELIX_PKG_ROOT/bin/start-standalone-zookeeper 2199
+```
+
+Setup the consumer group cluster
+--------------------------------
+This will set up the cluster by creating a "rabbitmq-consumer-group" cluster and adding a "topic" resource with 6 queues.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/setup-cluster.sh localhost:2199 
+```
+
+Add consumers
+-------------
+Start 2 consumers in 2 different terminals. Each consumer is given a unique id.
+
+```
+# usage: start-consumer.sh zookeeperAddress (e.g. localhost:2199) consumerId rabbitmqServer (e.g. localhost)
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 0 localhost 
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 1 localhost 
+
+```
+
+Start HelixController
+--------------------
+Now start a Helix controller that starts managing the "rabbitmq-consumer-group" cluster.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/start-cluster-manager.sh localhost:2199
+```
+
+Send messages to the Topic
+--------------------------
+
+Start sending messages to the topic. This script randomly selects a routing key (1-6) and sends messages to the topic.
+Based on the key, messages get routed to the appropriate queue.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 20
+```
+
+After running this, you should see all 20 messages being processed by 2 consumers. 
+
+Add another consumer
+--------------------
+Once a new consumer is started, Helix detects it. In order to balance the load among 3 consumers, it deallocates 1 partition from each of the existing consumers and allocates them to the new consumer. Each consumer is now processing only 2 queues.
+Helix makes sure that a partition's old owner is asked to stop consuming before the new consumer is asked to start consuming for that partition. The transitions for different partitions can, however, happen in parallel.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/start-consumer.sh localhost:2199 2 localhost
+```
+
+Send messages again to the topic.
+
+```
+$HELIX_RABBITMQ_ROOT/bin/send-message.sh localhost 100
+```
+
+You should see that messages are now received by all 3 consumers.
+
+Stop a consumer
+---------------
+In any terminal, press CTRL-C and notice that Helix detects the consumer failure and distributes the 2 partitions that were processed by the failed consumer to the remaining 2 active consumers.
+
+
+How does it work
+================
+
+Find the entire code [here](https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=tree;f=recipes/rabbitmq-consumer-group/src/main/java/org/apache/helix/recipes/rabbitmq). 
+ 
+Cluster setup
+-------------
+This step creates a znode on ZooKeeper for the cluster and adds the state model. We use the OnlineOffline state model, since there is no need for any other states: the consumer is either processing a queue or it is not.
+
+It creates a resource called "rabbitmq-consumer-group" with 6 partitions. The execution mode is set to FULL_AUTO. This means that Helix controls the assignment of partitions to consumers and automatically distributes the partitions evenly among the active consumers. When a consumer is added or removed, it ensures that the minimum number of partitions are shuffled.
+
+```
+      zkclient = new ZkClient(zkAddr, ZkClient.DEFAULT_SESSION_TIMEOUT,
+          ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
+      ZKHelixAdmin admin = new ZKHelixAdmin(zkclient);
+      
+      // add cluster
+      admin.addCluster(clusterName, true);
+
+      // add state model definition
+      StateModelConfigGenerator generator = new StateModelConfigGenerator();
+      admin.addStateModelDef(clusterName, "OnlineOffline",
+          new StateModelDefinition(generator.generateConfigForOnlineOffline()));
+
+      // add resource "topic" which has 6 partitions
+      String resourceName = "rabbitmq-consumer-group";
+      admin.addResource(clusterName, resourceName, 6, "OnlineOffline", "FULL_AUTO");
+```
+
+Starting the consumers
+----------------------
+The only things a consumer needs to know are the ZooKeeper address, the cluster name, and its consumer id. It does not need to know anything else.
+
+```
+   _manager =
+          HelixManagerFactory.getZKHelixManager(_clusterName,
+                                                _consumerId,
+                                                InstanceType.PARTICIPANT,
+                                                _zkAddr);
+
+      StateMachineEngine stateMach = _manager.getStateMachineEngine();
+      ConsumerStateModelFactory modelFactory =
+          new ConsumerStateModelFactory(_consumerId, _mqServer);
+      stateMach.registerStateModelFactory("OnlineOffline", modelFactory);
+
+      _manager.connect();
+
+```
+
+Once the consumer has registered the state model and the controller is started, the consumer starts getting callbacks (onBecomeOnlineFromOffline) for the partitions it needs to host. All it needs to do as part of the callback is to start consuming messages from the appropriate queue. Similarly, when the controller deallocates a partition from a consumer, it fires onBecomeOfflineFromOnline for the same partition.
+As part of this transition, the consumer will stop consuming from that queue.
+
+```
+ @Transition(to = "ONLINE", from = "OFFLINE")
+  public void onBecomeOnlineFromOffline(Message message, NotificationContext context)
+  {
+    LOG.debug(_consumerId + " becomes ONLINE from OFFLINE for " + _partition);
+
+    if (_thread == null)
+    {
+      LOG.debug("Starting ConsumerThread for " + _partition + "...");
+      _thread = new ConsumerThread(_partition, _mqServer, _consumerId);
+      _thread.start();
+      LOG.debug("Starting ConsumerThread for " + _partition + " done");
+
+    }
+  }
+
+  @Transition(to = "OFFLINE", from = "ONLINE")
+  public void onBecomeOfflineFromOnline(Message message, NotificationContext context)
+      throws InterruptedException
+  {
+    LOG.debug(_consumerId + " becomes OFFLINE from ONLINE for " + _partition);
+
+    if (_thread != null)
+    {
+      LOG.debug("Stopping " + _consumerId + " for " + _partition + "...");
+
+      _thread.interrupt();
+      _thread.join(2000);
+      _thread = null;
+      LOG.debug("Stopping " +  _consumerId + " for " + _partition + " done");
+
+    }
+  }
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/recipes/rsync_replicated_file_store.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/rsync_replicated_file_store.md b/site-releases/trunk/src/site/markdown/recipes/rsync_replicated_file_store.md
new file mode 100644
index 0000000..f8a74a0
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/recipes/rsync_replicated_file_store.md
@@ -0,0 +1,165 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Near real time rsync replicated file system
+===========================================
+
+Quickdemo
+---------
+
+* This demo starts 3 instances with ids ```localhost_12001, localhost_12002, localhost_12003```
+* Each instance stores its files under ```/tmp/<id>/filestore```
+* ```localhost_12001``` is designated as the master, and ```localhost_12002``` and ```localhost_12003``` are the slaves.
+* Files written to the master are replicated to the slaves automatically. In this demo, a.txt and b.txt are written to ```/tmp/localhost_12001/filestore``` and get replicated to the other folders.
+* When the master is stopped, ```localhost_12002``` is promoted to master.
+* The other slave, ```localhost_12003```, stops replicating from ```localhost_12001``` and starts replicating from the new master, ```localhost_12002```.
+* Files written to the new master ```localhost_12002``` are replicated to ```localhost_12003```.
+* In the end state of this quick demo, ```localhost_12002``` is the master and ```localhost_12003``` is the slave. Manually create files under ```/tmp/localhost_12002/filestore``` and see that they appear in ```/tmp/localhost_12003/filestore```.
+* Ignore the interrupted exceptions on the console :-).
+
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix/recipes/rsync-replicated-file-system/
+mvn clean install package -DskipTests
+cd target/rsync-replicated-file-system-pkg/bin
+chmod +x *
+./quickdemo
+
+```
+
+Overview
+--------
+
+There are many applications that require storage for a large number of relatively small data files. Examples include media stores for small videos, images, mail attachments, etc. Each of these objects is typically kilobytes, often no larger than a few megabytes. An additional distinguishing feature of these use cases is that files are typically only added or deleted, rarely updated. Updates, when they do occur, have no concurrency requirements.
+
+These requirements are much simpler than those that general-purpose distributed file systems have to satisfy, which include concurrent access to files, random access for reads and updates, POSIX compliance, etc. To satisfy those requirements, general-purpose DFSs are also quite complex and expensive to build and maintain.
+
+A different implementation of a distributed file system is HDFS, which is inspired by Google's GFS. It is one of the most widely used distributed file systems and forms the main data storage platform for Hadoop. HDFS is primarily aimed at processing very large data sets and distributes files across a cluster of commodity servers by splitting files into fixed-size chunks. HDFS is not particularly well suited for storing a very large number of relatively tiny files.
+
+### File Store
+
+It's possible to build a vastly simpler system for the class of applications with the simpler requirements we have pointed out:
+
+* A large number of files, but each file is relatively small.
+* Access is limited to create, delete, and get of entire files.
+* No updates to files that are already created (or it's feasible to delete the old file and create a new one).
+ 
+
+We call this system a Partitioned File Store (PFS) to distinguish it from other distributed file systems. This system needs to provide the following features:
+
+* CRD access to a large number of small files
+* Scalability: files should be distributed across a large number of commodity servers based on the storage requirement.
+* Fault-tolerance: each file should be replicated on multiple servers so that individual server failures do not reduce availability.
+* Elasticity: it should be possible to add capacity to the cluster easily.
+ 
+
+Apache Helix is a generic cluster management framework that makes it very easy to provide the scalability, fault-tolerance, and elasticity features.
+rsync can easily be used as the replication channel between servers, so that each file gets replicated to multiple servers.
+
+Design
+------
+
+At a high level:
+
+* Partition the file system based on the file name.
+* At any time, a single writer can write to a partition; we call this writer the master.
+* For redundancy, we keep additional replicas called slaves. Slaves can optionally serve reads.
+* Slaves replicate data from the master.
+* When a master fails, a slave gets promoted to master.
+
+### Transaction log
+
+Every write on the master results in the creation/deletion of one or more files. In order to maintain timeline consistency, slaves need to apply the changes in the same order.
+To facilitate this, the master logs each transaction in a file, and each transaction is associated with a 64-bit id in which the 32 LSBs represent a sequence number and the 32 MSBs represent a generation number.
+The sequence number gets incremented on every transaction, and the generation is incremented when a new master is elected.
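+
+As an illustration, a minimal sketch of packing and unpacking such a transaction id (assumed for this write-up, not code from the recipe) might look like this:
+
+```
+public final class TxnId {
+  // generation in the 32 MSBs, sequence in the 32 LSBs
+  public static long pack(int generation, int sequence) {
+    return ((long) generation << 32) | (sequence & 0xFFFFFFFFL);
+  }
+
+  public static int generation(long txnId) {
+    return (int) (txnId >>> 32);
+  }
+
+  public static int sequence(long txnId) {
+    return (int) txnId;
+  }
+}
+```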
+
+### Replication
+
+Replication is required for slaves to keep up with the changes on the master. Every time the slave applies a change, it checkpoints the last applied transaction id.
+During restarts, this allows the slave to pull changes starting from the last checkpointed id. Like the master, the slave logs each transaction to its transaction logs, but instead of generating a new transaction id, it uses the id generated by the master.
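+
+A rough sketch of such a checkpoint, assumed here for illustration (the recipe's actual implementation may differ), could be as simple as atomically rewriting a small file under ```check_point_dir```:
+
+```
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.*;
+
+public class Checkpoint {
+  private final Path file; // e.g. <check_point_dir>/lastTxnId
+
+  public Checkpoint(Path file) {
+    this.file = file;
+  }
+
+  public void save(long txnId) throws IOException {
+    // write to a temp file, then atomically rename, so a crash
+    // never leaves a half-written checkpoint behind
+    Path tmp = file.resolveSibling(file.getFileName() + ".tmp");
+    Files.write(tmp, Long.toString(txnId).getBytes(StandardCharsets.UTF_8));
+    Files.move(tmp, file, StandardCopyOption.ATOMIC_MOVE);
+  }
+
+  public long load() throws IOException {
+    if (!Files.exists(file)) {
+      return -1L; // no checkpoint yet: replay from the beginning
+    }
+    return Long.parseLong(new String(Files.readAllBytes(file), StandardCharsets.UTF_8).trim());
+  }
+}
+```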
+
+
+### Fail over
+
+When a master fails, a slave will be promoted to master. If the previous master node is reachable, the new master will flush all the
+changes from the previous master before taking up mastership. The new master records the end transaction id of the current generation and then starts a new generation
+with the sequence starting from 1. After this, the master begins accepting writes.
+
+
+![Partitioned File Store](../images/PFS-Generic.png)
+
+
+
+Rsync based solution
+-------------------
+
+![Rsync based File Store](../images/RSYNC_BASED_PFS.png)
+
+
+This application demonstrates a file store that uses rsync as the replication mechanism. One can envision a similar system where, instead of using rsync, one
+implements a custom solution to notify the slave of the changes and provides an API to pull the changed files.
+
+#### Concept
+* file_store_dir: root directory for the actual data files
+* change_log_dir: the transaction logs are generated under this folder.
+* check_point_dir: the slave stores its checkpoints (last processed transaction) here.
+
+#### Master
+* File server: this component supports file uploads and downloads, and writes the files to ```file_store_dir```. This is not included in this application. The idea is that most applications have their own way of implementing this component, with some business logic associated with it. It is not hard to come up with such a component if needed.
+* File store watcher: this component watches the ```file_store_dir``` directory on the local file system for any changes and notifies the registered listeners of the changes. A sketch follows this list.
+* Change log generator: this registers as a listener of the file store watcher and, on each notification, logs the changes into a file under ```change_log_dir```.
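+
+A rough sketch of the file store watcher, assuming the java.nio ```WatchService``` (the recipe's actual implementation may differ):
+
+```
+import java.nio.file.*;
+
+public class FileStoreWatcher {
+  public static void watch(Path fileStoreDir) throws Exception {
+    WatchService watcher = FileSystems.getDefault().newWatchService();
+    fileStoreDir.register(watcher,
+        StandardWatchEventKinds.ENTRY_CREATE,
+        StandardWatchEventKinds.ENTRY_DELETE,
+        StandardWatchEventKinds.ENTRY_MODIFY);
+    while (true) {
+      WatchKey key = watcher.take(); // blocks until something changes
+      for (WatchEvent<?> event : key.pollEvents()) {
+        // a real implementation would notify the change log generator here
+        System.out.println(event.kind() + ": " + event.context());
+      }
+      key.reset();
+    }
+  }
+}
+```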
+
+#### Slave
+* File server: this component on the slave only supports reads.
+* Cluster state observer: the slave observes the cluster state and is able to know who the current master is.
+* Replicator: this has three subcomponents
+    - Periodic rsync of the change log: a background process that periodically rsyncs the ```change_log_dir``` of the master to its local directory
+    - Change log watcher: watches the ```change_log_dir``` for changes and notifies the registered listeners of the change
+    - On-demand rsync invoker: registered as a listener of the change log watcher; on every change it invokes rsync to sync only the changed files.
+
+
+#### Coordination
+
+The coordination between nodes is done by Helix. Helix does the partition management and assigns each partition to multiple nodes based on the replication factor. It elects one of the nodes as master and designates the others as slaves.
+It provides notifications to each node in the form of state transitions (Offline to Slave, Slave to Master). It also provides notifications when there is a change in cluster state.
+This allows a slave to stop replicating from the current master and start replicating from the new master.
+
+In this application, we have only one partition, but it's very easy to extend it to support multiple partitions. By partitioning the file store, one can add new nodes, and Helix will automatically
+re-distribute partitions among the nodes. To summarize, Helix provides partition management, fault tolerance, and facilitates automated cluster expansion.
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/recipes/service_discovery.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/service_discovery.md b/site-releases/trunk/src/site/markdown/recipes/service_discovery.md
new file mode 100644
index 0000000..8e06ead
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/recipes/service_discovery.md
@@ -0,0 +1,191 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+Service Discovery
+-----------------
+
+One of the common usages of ZooKeeper is to enable service discovery.
+The basic idea is that when a server starts up, it advertises its configuration/metadata, such as its host name and port, on ZooKeeper.
+This allows clients to dynamically discover the servers that are currently active. One can think of this as a service registry: a server registers when it starts and
+is automatically deregistered when it shuts down or crashes. In many cases it serves as an alternative to VIPs.
+
+The core idea behind this is to use ZooKeeper ephemeral nodes. An ephemeral node is created when the server registers, and all its metadata is put into the znode.
+When the server shuts down, ZooKeeper automatically removes the znode.
+
+There are two ways clients can dynamically discover the active servers:
+
+#### ZOOKEEPER WATCH
+
+Clients can set a child watch under a specific path on ZooKeeper.
+When a new service is registered or deregistered, ZooKeeper notifies the client via a watch event, and the client can read the list of services. Even though this looks trivial,
+there are a lot of things one needs to keep in mind, such as ensuring that you first set the watch back on ZooKeeper before reading data from ZooKeeper. A sketch of this pattern follows.
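+
+Here is a minimal sketch of the watch-based pattern using the raw ZooKeeper client. The class and path names are assumptions for illustration; this is not the recipe's code.
+
+```
+import java.util.List;
+import org.apache.zookeeper.*;
+
+public class ServiceWatcher implements Watcher {
+  private final ZooKeeper zk;
+  private final String servicesPath; // e.g. "/services/myServiceName"
+
+  public ServiceWatcher(ZooKeeper zk, String servicesPath) {
+    this.zk = zk;
+    this.servicesPath = servicesPath;
+  }
+
+  public List<String> readServices() throws KeeperException, InterruptedException {
+    // passing 'this' re-arms the watch atomically with the read, so no
+    // membership change can slip in between reading and watching
+    return zk.getChildren(servicesPath, this);
+  }
+
+  @Override
+  public void process(WatchedEvent event) {
+    if (event.getType() == Event.EventType.NodeChildrenChanged) {
+      try {
+        System.out.println("Active services: " + readServices()); // re-arms the watch
+      } catch (Exception e) {
+        e.printStackTrace();
+      }
+    }
+  }
+}
+```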
+
+
+#### POLL
+
+Another approach is for the client to periodically read the zookeeper path and get the list of services.
+
+
+Both approaches have pros and cons. For example, setting a watch might trigger a herd effect if there are a large number of clients; this is at its worst when many servers are starting up.
+The good thing about setting a watch is that clients are immediately notified of a change, which is not true in the case of polling.
+In some cases, having both WATCH and POLL makes sense: WATCH allows one to get notifications as soon as possible, while POLL provides a safety net if a watch event is missed because of a code bug or because ZooKeeper fails to notify.
+
+##### Other important scenarios to take care of
+* What happens when the ZooKeeper session expires? All the watches and ephemeral nodes previously added/created by this server are lost.
+One needs to add the watches again, recreate the ephemeral nodes, etc.
+* Due to network issues or Java GC pauses, session expiry might happen again and again; this is also known as flapping. It's important for the server to detect this and deregister itself.
+
+##### Other operational things to consider
+* What if a node is behaving badly? One might kill the server, but that means losing the ability to debug it.
+It would be nice to have the ability to mark a server as disabled, so that clients know the node is disabled and will not contact it.
+ 
+#### Configuration ownership
+
+This is an important aspect that is often ignored in the initial stages of development. In the common service discovery pattern, servers start up with some configuration and then simply put that configuration/metadata in ZooKeeper. While this works well in the beginning,
+configuration management becomes very difficult, since the servers themselves are statically configured. Any change in server configuration implies restarting the server. Ideally, it would be nice to have the ability to change configuration dynamically without having to restart a server.
+
+Ideally you want a hybrid solution: a node starts with minimal configuration and gets the rest of its configuration from ZooKeeper.
+
+### How to use Helix to achieve this
+
+Even though Helix has higher-level abstractions in terms of state machines, constraints, and objectives,
+service discovery is one of the things that has existed since the beginning.
+The controller uses the exact mechanism described above to discover when new servers join the cluster.
+We create these znodes under /CLUSTERNAME/LIVEINSTANCES.
+Since at any time there is only one controller, we use a ZK watch to track the liveness of a server.
+
+This recipe simply demonstrates how one can re-use that part to implement service discovery. It demonstrates multiple modes of service discovery:
+
+* POLL: The client reads from ZooKeeper at regular intervals (30 seconds). Use this if you have 100s of clients.
+* WATCH: The client sets up a watcher and gets notified of the changes. Use this if you have 10s of clients.
+* NONE: This does neither of the above, but reads directly from ZooKeeper whenever needed.
+
+Helix provides these additional features compared to other implementations available elsewhere:
+
+* It has the concept of disabling a node, which means that a badly behaving node can be disabled using the Helix admin API (see the sketch below).
+* It automatically detects if a node connects/disconnects from ZooKeeper repeatedly and disables the node.
+* Configuration management
+    * Allows one to set configuration via the admin API at various granularities: cluster, instance, resource, partition
+    * Configuration can be dynamically changed.
+    * Notifies the server when configuration changes.
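+
+For example, disabling a badly behaving node through the admin API might look like this (the ZK address, cluster, and instance names are placeholders):
+
+```
+// Sketch: disable a node so it is excluded from assignments and clients
+// stop routing to it; re-enable it after debugging.
+ZKHelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+admin.enableInstance("MYCLUSTER", "badhost_12000", false); // disable
+// ... debug the node ...
+admin.enableInstance("MYCLUSTER", "badhost_12000", true);  // re-enable
+```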
+
+
+##### checkout and build
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+cd recipes/service-discovery/target/service-discovery-pkg/bin
+chmod +x *
+```
+
+##### start zookeeper
+
+```
+./start-standalone-zookeeper 2199
+```
+
+#### Run the demo
+
+```
+./service-discovery-demo.sh
+```
+
+#### Output
+
+```
+START:Service discovery demo mode:WATCH
+	Registering service
+		host.x.y.z_12000
+		host.x.y.z_12001
+		host.x.y.z_12002
+		host.x.y.z_12003
+		host.x.y.z_12004
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Deregistering service:
+		host.x.y.z_12002
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Registering service:host.x.y.z_12002
+END:Service discovery demo mode:WATCH
+=============================================
+START:Service discovery demo mode:POLL
+	Registering service
+		host.x.y.z_12000
+		host.x.y.z_12001
+		host.x.y.z_12002
+		host.x.y.z_12003
+		host.x.y.z_12004
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Deregistering service:
+		host.x.y.z_12002
+	Sleeping for poll interval:30000
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Registering service:host.x.y.z_12002
+END:Service discovery demo mode:POLL
+=============================================
+START:Service discovery demo mode:NONE
+	Registering service
+		host.x.y.z_12000
+		host.x.y.z_12001
+		host.x.y.z_12002
+		host.x.y.z_12003
+		host.x.y.z_12004
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12000
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Deregistering service:
+		host.x.y.z_12000
+	SERVICES AVAILABLE
+		SERVICENAME 	HOST 			PORT
+		myServiceName 	host.x.y.z 		12001
+		myServiceName 	host.x.y.z 		12002
+		myServiceName 	host.x.y.z 		12003
+		myServiceName 	host.x.y.z 		12004
+	Registering service:host.x.y.z_12000
+END:Service discovery demo mode:NONE
+=============================================
+
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/recipes/task_dag_execution.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/task_dag_execution.md b/site-releases/trunk/src/site/markdown/recipes/task_dag_execution.md
new file mode 100644
index 0000000..f0474e4
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/recipes/task_dag_execution.md
@@ -0,0 +1,204 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Distributed task execution
+
+
+This recipe demonstrates how task dependencies can be modeled using primitives provided by Helix. A given task can be run with the desired parallelism and will start only when its upstream dependencies are met. The demo executes the task DAG described below using 10 workers. Although the demo starts the workers as threads, there is no requirement that all the workers run in the same process. In reality, these workers run on many different boxes in a cluster. When a worker fails, Helix takes care of
+re-assigning the failed task partitions to a new worker.
+
+Redis is used as a result store. Any other suitable implementation for TaskResultStore can be plugged in.
+
+### Workflow 
+
+
+#### Input 
+
+10000 impression events and around 100 click events are pre-populated in task result store (redis). 
+
+* **ImpEvent**: format: id,isFraudulent,country,gender
+
+* **ClickEvent**: format: id,isFraudulent,impEventId
+
+#### Stages
+
++ **FilterImps**: Filters impressions where isFraudulent=true.
+
++ **FilterClicks**: Filters clicks where isFraudulent=true.
+
++ **impCountsByGender**: Generates impression counts grouped by gender. It does this by incrementing the count for 'impression_gender_counts:<gender_value>' in the task result store (redis hash). Depends on: **FilterImps**
+
++ **impCountsByCountry**: Generates impression counts grouped by country. It does this by incrementing the count for 'impression_country_counts:<country_value>' in the task result store (redis hash). Depends on: **FilterImps**
+
++ **impClickJoin**: Joins clicks with the corresponding impression events, using impEventId as the join key. The join is needed to pull in dimensions not present in the click event. Depends on: **FilterImps, FilterClicks**
+
++ **clickCountsByGender**: Generates click counts grouped by gender. It does this by incrementing the count for 'click_gender_counts:<gender_value>' in the task result store (redis hash). Depends on: **impClickJoin**
+
++ **clickCountsByCountry**: Generates click counts grouped by country. It does this by incrementing the count for 'click_country_counts:<country_value>' in the task result store (redis hash). Depends on: **impClickJoin**
+
++ **report**: Reads all the aggregates generated by the previous stages and prints them. Depends on: **impCountsByGender, impCountsByCountry, clickCountsByGender, clickCountsByCountry**
+
+
+### Creating DAG
+
+Each stage is represented as a Node, along with its upstream dependencies and desired parallelism. Each stage is modeled as a resource in Helix using the OnlineOffline state model. As part of the Offline to Online transition, we watch the external view of the upstream resources and wait for them to transition to the online state. See Task.java for additional info.
+
+```
+
+  Dag dag = new Dag();
+  dag.addNode(new Node("filterImps", 10, ""));
+  dag.addNode(new Node("filterClicks", 5, ""));
+  dag.addNode(new Node("impClickJoin", 10, "filterImps,filterClicks"));
+  dag.addNode(new Node("impCountsByGender", 10, "filterImps"));
+  dag.addNode(new Node("impCountsByCountry", 10, "filterImps"));
+  dag.addNode(new Node("clickCountsByGender", 5, "impClickJoin"));
+  dag.addNode(new Node("clickCountsByCountry", 5, "impClickJoin"));		
+  dag.addNode(new Node("report",1,"impCountsByGender,impCountsByCountry,clickCountsByGender,clickCountsByCountry"));
+
+
+```
+
+### DEMO
+
+In order to run the demo, use the following steps.
+
+See http://redis.io/topics/quickstart for how to install the redis server.
+
+```
+
+# start redis, e.g.:
+./redis-server --port 6379
+
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix/recipes/task-execution
+mvn clean install package -DskipTests
+cd target/task-execution-pkg/bin
+chmod +x task-execution-demo.sh
+./task-execution-demo.sh 2181 localhost 6379 
+
+```
+
+```
+
+
+
+
+
+                       +-----------------+       +----------------+
+                       |   filterImps    |       |  filterClicks  |
+                       | (parallelism=10)|       | (parallelism=5)|
+                       +----------+-----++       +-------+--------+
+                       |          |     |                |
+                       |          |     |                |
+                       |          |     |                |
+                       |          |     +------->--------v------------+
+      +--------------<-+   +------v-------+    |  impClickJoin        |
+      |impCountsByGender   |impCountsByCountry | (parallelism=10)     |
+      |(parallelism=10)    |(parallelism=10)   ++-------------------+-+
+      +-----------+--+     +---+----------+     |                   |
+                  |            |                |                   |
+                  |            |                |                   |
+                  |            |       +--------v---------+       +-v-------------------+
+                  |            |       |clickCountsByGender       |clickCountsByCountry |
+                  |            |       |(parallelism=5)   |       |(parallelism=5)      |
+                  |            |       +----+-------------+       +---------------------+
+                  |            |            |                     |
+                  |            |            |                     |
+                  |            |            |                     |
+                  +----->+-----+>-----------v----+<---------------+
+                         | report                |
+                         |(parallelism=1)        |
+                         +-----------------------+
+
+```
+
+(credit for above ascii art: http://www.asciiflow.com)
+
+### OUTPUT
+
+```
+Done populating dummy data
+Executing filter task for filterImps_3 for impressions_demo
+Executing filter task for filterImps_2 for impressions_demo
+Executing filter task for filterImps_0 for impressions_demo
+Executing filter task for filterImps_1 for impressions_demo
+Executing filter task for filterImps_4 for impressions_demo
+Executing filter task for filterClicks_3 for clicks_demo
+Executing filter task for filterClicks_1 for clicks_demo
+Executing filter task for filterImps_8 for impressions_demo
+Executing filter task for filterImps_6 for impressions_demo
+Executing filter task for filterClicks_2 for clicks_demo
+Executing filter task for filterClicks_0 for clicks_demo
+Executing filter task for filterImps_7 for impressions_demo
+Executing filter task for filterImps_5 for impressions_demo
+Executing filter task for filterClicks_4 for clicks_demo
+Executing filter task for filterImps_9 for impressions_demo
+Running AggTask for impCountsByGender_3 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_2 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_0 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_9 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_1 for filtered_impressions_demo gender
+Running AggTask for impCountsByGender_4 for filtered_impressions_demo gender
+Running AggTask for impCountsByCountry_4 for filtered_impressions_demo country
+Running AggTask for impCountsByGender_5 for filtered_impressions_demo gender
+Executing JoinTask for impClickJoin_2
+Running AggTask for impCountsByCountry_3 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_1 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_0 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_2 for filtered_impressions_demo country
+Running AggTask for impCountsByGender_6 for filtered_impressions_demo gender
+Executing JoinTask for impClickJoin_1
+Executing JoinTask for impClickJoin_0
+Executing JoinTask for impClickJoin_3
+Running AggTask for impCountsByGender_8 for filtered_impressions_demo gender
+Executing JoinTask for impClickJoin_4
+Running AggTask for impCountsByGender_7 for filtered_impressions_demo gender
+Running AggTask for impCountsByCountry_5 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_6 for filtered_impressions_demo country
+Executing JoinTask for impClickJoin_9
+Running AggTask for impCountsByCountry_8 for filtered_impressions_demo country
+Running AggTask for impCountsByCountry_7 for filtered_impressions_demo country
+Executing JoinTask for impClickJoin_5
+Executing JoinTask for impClickJoin_6
+Running AggTask for impCountsByCountry_9 for filtered_impressions_demo country
+Executing JoinTask for impClickJoin_8
+Executing JoinTask for impClickJoin_7
+Running AggTask for clickCountsByCountry_1 for joined_clicks_demo country
+Running AggTask for clickCountsByCountry_0 for joined_clicks_demo country
+Running AggTask for clickCountsByCountry_2 for joined_clicks_demo country
+Running AggTask for clickCountsByCountry_3 for joined_clicks_demo country
+Running AggTask for clickCountsByGender_1 for joined_clicks_demo gender
+Running AggTask for clickCountsByCountry_4 for joined_clicks_demo country
+Running AggTask for clickCountsByGender_3 for joined_clicks_demo gender
+Running AggTask for clickCountsByGender_2 for joined_clicks_demo gender
+Running AggTask for clickCountsByGender_4 for joined_clicks_demo gender
+Running AggTask for clickCountsByGender_0 for joined_clicks_demo gender
+Running reports task
+Impression counts per country
+{CANADA=1940, US=1958, CHINA=2014, UNKNOWN=2022, UK=1946}
+Click counts per country
+{US=24, CANADA=14, CHINA=26, UNKNOWN=14, UK=22}
+Impression counts per gender
+{F=3325, UNKNOWN=3259, M=3296}
+Click counts per gender
+{F=33, UNKNOWN=32, M=35}
+
+
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/recipes/user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/recipes/user_def_rebalancer.md b/site-releases/trunk/src/site/markdown/recipes/user_def_rebalancer.md
new file mode 100644
index 0000000..68fd954
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/recipes/user_def_rebalancer.md
@@ -0,0 +1,285 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+Lock Manager with a User-Defined Rebalancer
+-------------------------------------------
+Helix is able to compute node preferences and state assignments automatically using general-purpose algorithms. In many cases, a distributed system implementer may choose to instead define a customized approach to computing the location of replicas, the state mapping, or both in response to the addition or removal of participants. The following is an implementation of the [Distributed Lock Manager](./lock_manager.html) that includes a user-defined rebalancer.
+
+### Define the cluster and locks
+
+The YAML file below fully defines the cluster and the locks. A lock can be in one of two states: locked and unlocked. Transitions can happen in either direction, and the locked state is preferred. A resource in this example is the entire collection of locks to distribute. A partition is mapped to a lock; in this case that means there are 12 locks. These 12 locks will be distributed across 3 nodes. The constraints indicate that only one replica of a lock can be in the locked state at any given time. These locks can each have only a single holder, defined by a replica count of 1.
+
+Notice the rebalancer section of the definition. The mode is set to USER_DEFINED and the class name refers to the plugged-in rebalancer implementation that inherits from [HelixRebalancer](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html). This implementation is called whenever the state of the cluster changes, as is the case when participants are added or removed from the system.
+
+Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/resources/lock-manager-config.yaml
+
+```
+clusterName: lock-manager-custom-rebalancer # unique name for the cluster
+resources:
+  - name: lock-group # unique resource name
+    rebalancer: # we will provide our own rebalancer
+      mode: USER_DEFINED
+      class: org.apache.helix.userdefinedrebalancer.LockManagerRebalancer
+    partitions:
+      count: 12 # number of locks
+      replicas: 1 # number of simultaneous holders for each lock
+    stateModel:
+      name: lock-unlock # unique model name
+      states: [LOCKED, RELEASED, DROPPED] # the list of possible states
+      transitions: # the list of possible transitions
+        - name: Unlock
+          from: LOCKED
+          to: RELEASED
+        - name: Lock
+          from: RELEASED
+          to: LOCKED
+        - name: DropLock
+          from: LOCKED
+          to: DROPPED
+        - name: DropUnlock
+          from: RELEASED
+          to: DROPPED
+        - name: Undrop
+          from: DROPPED
+          to: RELEASED
+      initialState: RELEASED
+    constraints:
+      state:
+        counts: # maximum number of replicas of a partition that can be in each state
+          - name: LOCKED
+            count: "1"
+          - name: RELEASED
+            count: "-1"
+          - name: DROPPED
+            count: "-1"
+        priorityList: [LOCKED, RELEASED, DROPPED] # states in order of priority
+      transition: # transitions priority to enforce order that transitions occur
+        priorityList: [Unlock, Lock, Undrop, DropUnlock, DropLock]
+participants: # list of nodes that can acquire locks
+  - name: localhost_12001
+    host: localhost
+    port: 12001
+  - name: localhost_12002
+    host: localhost
+    port: 12002
+  - name: localhost_12003
+    host: localhost
+    port: 12003
+```
+
+Then, Helix\'s YAMLClusterSetup tool can read in the configuration and bootstrap the cluster immediately:
+
+```
+YAMLClusterSetup setup = new YAMLClusterSetup(zkAddress);
+InputStream input =
+    Thread.currentThread().getContextClassLoader()
+        .getResourceAsStream("lock-manager-config.yaml");
+YAMLClusterSetup.YAMLClusterConfig config = setup.setupCluster(input);
+```
+
+### Write a rebalancer
+Below is a full implementation of a rebalancer that extends [HelixRebalancer](http://helix.incubator.apache.org/javadocs/0.7.0-incubating/reference/org/apache/helix/controller/rebalancer/HelixRebalancer.html). In this case, it simply throws out the previous resource assignment, computes the target node for as many partition replicas as can hold a lock in the LOCKED state (in this example, one), and assigns them the LOCKED state (which is at the head of the state preference list). Clearly a more robust implementation would likely examine the current ideal state to maintain current assignments, and the full state list to handle models more complicated than this one. However, for a simple lock holder implementation, this is sufficient.
+
+Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/LockManagerRebalancer.java
+
+```
+@Override
+public ResourceAssignment computeResourceMapping(RebalancerConfig rebalancerConfig, Cluster cluster,
+    ResourceCurrentState currentState) {
+  // Get the rebalancer context (a basic partitioned one)
+  PartitionedRebalancerContext context = rebalancerConfig.getRebalancerContext(
+      PartitionedRebalancerContext.class);
+
+  // Initialize an empty mapping of locks to participants
+  ResourceAssignment assignment = new ResourceAssignment(context.getResourceId());
+
+  // Get the list of live participants in the cluster
+  List<ParticipantId> liveParticipants = new ArrayList<ParticipantId>(
+      cluster.getLiveParticipantMap().keySet());
+
+  // Get the state model (should be a simple lock/unlock model) and the highest-priority state
+  StateModelDefId stateModelDefId = context.getStateModelDefId();
+  StateModelDefinition stateModelDef = cluster.getStateModelMap().get(stateModelDefId);
+  if (stateModelDef.getStatesPriorityList().size() < 1) {
+    LOG.error("Invalid state model definition. There should be at least one state.");
+    return assignment;
+  }
+  State lockState = stateModelDef.getTypedStatesPriorityList().get(0);
+
+  // Count the number of participants allowed to lock each lock
+  String stateCount = stateModelDef.getNumParticipantsPerState(lockState);
+  int lockHolders = 0;
+  try {
+    // a numeric value is a custom-specified number of participants allowed to lock the lock
+    lockHolders = Integer.parseInt(stateCount);
+  } catch (NumberFormatException e) {
+    LOG.error("Invalid state model definition. The lock state does not have a valid count");
+    return assignment;
+  }
+
+  // Fairly assign the lock state to the participants using a simple mod-based sequential
+  // assignment. For instance, if each lock can be held by 3 participants, lock 0 would be held
+  // by participants (0, 1, 2), lock 1 would be held by (1, 2, 3), and so on, wrapping around the
+  // number of participants as necessary.
+  // This assumes a simple lock-unlock model where the only state of interest is which nodes have
+  // acquired each lock.
+  int i = 0;
+  for (PartitionId partition : context.getPartitionSet()) {
+    Map<ParticipantId, State> replicaMap = new HashMap<ParticipantId, State>();
+    for (int j = i; j < i + lockHolders; j++) {
+      int participantIndex = j % liveParticipants.size();
+      ParticipantId participant = liveParticipants.get(participantIndex);
+      // enforce that a participant can only have one instance of a given lock
+      if (!replicaMap.containsKey(participant)) {
+        replicaMap.put(participant, lockState);
+      }
+    }
+    assignment.addReplicaMap(partition, replicaMap);
+    i++;
+  }
+  return assignment;
+}
+```
+
+### Start up the participants
+Here is a lock class based on the newly defined lock-unlock state model so that the participant can receive callbacks on state transitions.
+
+Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/Lock.java
+
+```
+public class Lock extends StateModel {
+  private String lockName;
+
+  public Lock(String lockName) {
+    this.lockName = lockName;
+  }
+
+  @Transition(from = "RELEASED", to = "LOCKED")
+  public void lock(Message m, NotificationContext context) {
+    System.out.println(context.getManager().getInstanceName() + " acquired lock:" + lockName);
+  }
+
+  @Transition(from = "LOCKED", to = "RELEASED")
+  public void release(Message m, NotificationContext context) {
+    System.out.println(context.getManager().getInstanceName() + " releasing lock:" + lockName);
+  }
+}
+```
+
+Here is the factory to make the Lock class accessible.
+
+Location: incubator-helix/recipes/user-rebalanced-lock-manager/src/main/java/org/apache/helix/userdefinedrebalancer/LockFactory.java
+
+```
+public class LockFactory extends StateModelFactory<Lock> {
+  @Override
+  public Lock createNewStateModel(String lockName) {
+    return new Lock(lockName);
+  }
+}
+```
+
+Finally, here is the factory registration and the start of the participant:
+
+```
+participantManager =
+    HelixManagerFactory.getZKHelixManager(clusterName, participantName, InstanceType.PARTICIPANT,
+        zkAddress);
+participantManager.getStateMachineEngine().registerStateModelFactory(stateModelName,
+    new LockFactory());
+participantManager.connect();
+```
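+
+Here, `stateModelName` must match the model defined in the YAML file (`lock-unlock` above). One way to obtain it, sketched under the assumption that the parsed `YAMLClusterSetup.YAMLClusterConfig` exposes its resources as public fields (in the same way `config.clusterName` is used below):
+
+```
+// assumed field layout; this would resolve to "lock-unlock" from the YAML at the top of this recipe
+String stateModelName = config.resources.get(0).stateModel.name;
+```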
+
+### Start up the controller
+
+```
+controllerManager =
+    HelixControllerMain.startHelixController(zkAddress, config.clusterName, "controller",
+        HelixControllerMain.STANDALONE);
+```
+
+### Try it out
+#### Building 
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn clean install package -DskipTests
+cd recipes/user-rebalanced-lock-manager/target/user-rebalanced-lock-manager-pkg/bin
+chmod +x *
+./lock-manager-demo.sh
+```
+
+#### Output
+
+```
+./lock-manager-demo 
+STARTING localhost_12002
+STARTING localhost_12001
+STARTING localhost_12003
+STARTED localhost_12001
+STARTED localhost_12003
+STARTED localhost_12002
+localhost_12003 acquired lock:lock-group_4
+localhost_12002 acquired lock:lock-group_8
+localhost_12001 acquired lock:lock-group_10
+localhost_12001 acquired lock:lock-group_3
+localhost_12001 acquired lock:lock-group_6
+localhost_12003 acquired lock:lock-group_0
+localhost_12002 acquired lock:lock-group_5
+localhost_12001 acquired lock:lock-group_9
+localhost_12002 acquired lock:lock-group_2
+localhost_12003 acquired lock:lock-group_7
+localhost_12003 acquired lock:lock-group_11
+localhost_12002 acquired lock:lock-group_1
+lockName  acquired By
+======================================
+lock-group_0  localhost_12003
+lock-group_1  localhost_12002
+lock-group_10 localhost_12001
+lock-group_11 localhost_12003
+lock-group_2  localhost_12002
+lock-group_3  localhost_12001
+lock-group_4  localhost_12003
+lock-group_5  localhost_12002
+lock-group_6  localhost_12001
+lock-group_7  localhost_12003
+lock-group_8  localhost_12002
+lock-group_9  localhost_12001
+Stopping the first participant
+localhost_12001 Interrupted
+localhost_12002 acquired lock:lock-group_3
+localhost_12003 acquired lock:lock-group_6
+localhost_12003 acquired lock:lock-group_10
+localhost_12002 acquired lock:lock-group_9
+lockName  acquired By
+======================================
+lock-group_0  localhost_12003
+lock-group_1  localhost_12002
+lock-group_10 localhost_12003
+lock-group_11 localhost_12003
+lock-group_2  localhost_12002
+lock-group_3  localhost_12002
+lock-group_4  localhost_12003
+lock-group_5  localhost_12002
+lock-group_6  localhost_12003
+lock-group_7  localhost_12003
+lock-group_8  localhost_12002
+lock-group_9  localhost_12002
+```
+
+Notice that the lock assignment directly follows the assignment generated by the user-defined rebalancer both initially and after a participant is removed from the system.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/tutorial_accessors.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/tutorial_accessors.md b/site-releases/trunk/src/site/markdown/tutorial_accessors.md
new file mode 100644
index 0000000..bde50d2
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/tutorial_accessors.md
@@ -0,0 +1,125 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Logical Accessors</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Logical Accessors
+
+Helix constructs follow a logical hierarchy. A cluster contains participants and serves logical resources. Each resource can be divided into partitions, which themselves can be replicated. Helix now supports configuring and modifying clusters programmatically in a hierarchical way using logical accessors.
+
+[Click here](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/api/accessor/package-summary.html) for the Javadocs of the accessors.
+
+### An Example
+
+#### Configure a Participant
+
+A participant is a combination of a host, port, and a UserConfig. A UserConfig is an arbitrary set of properties a Helix user can attach to any participant.
+
+```
+ParticipantId participantId = ParticipantId.from("localhost_12345");
+ParticipantConfig participantConfig = new ParticipantConfig.Builder(participantId)
+    .hostName("localhost").port(12345).build();
+```
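+
+To attach arbitrary properties, a UserConfig can be supplied to the builder as well. A minimal sketch, assuming a participant-scoped UserConfig (the `rack` property and the `userConfig` builder method are illustrative assumptions):
+
+```
+UserConfig userConfig = new UserConfig(Scope.participant(participantId));
+userConfig.setSimpleField("rack", "rack-1"); // arbitrary application property
+ParticipantConfig configWithProps = new ParticipantConfig.Builder(participantId)
+    .hostName("localhost").port(12345).userConfig(userConfig).build();
+```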
+
+#### Configure a Resource
+
+##### RebalancerContext
+A Resource is essentially a combination of a RebalancerContext and a UserConfig. A [RebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/RebalancerContext.html) consists of all the key properties required to rebalance a resource, including how it is partitioned and replicated, and what state model it follows. Most Helix resources will make use of a [PartitionedRebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/PartitionedRebalancerContext.html), which is a RebalancerContext for resources that are partitioned.
+
+Recall that there are four [rebalancing modes](./tutorial_rebalance.html) that Helix provides; correspondingly, Helix includes the following subclasses of PartitionedRebalancerContext:
+
+* [FullAutoRebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/FullAutoRebalancerContext.html) for FULL_AUTO mode.
+* [SemiAutoRebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/SemiAutoRebalancerContext.html) for SEMI_AUTO mode. This class allows a user to specify "preference lists" to indicate where each partition should ideally be served.
+* [CustomRebalancerContext](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/controller/rebalancer/context/CustomRebalancerContext.html) for CUSTOMIZED mode. This class allows a user to specify "preference maps" to indicate the location and state of each partition replica.
+
+Helix also supports arbitrary subclasses of PartitionedRebalancerContext and even arbitrary implementations of RebalancerContext for applications that need a user-defined approach to rebalancing. For more, see [User-Defined Rebalancing](./tutorial_user_def_rebalancer.html).
+
+##### In Action
+
+Here is an example of a configured resource with a rebalancer context for FULL_AUTO mode and two partitions:
+
+```
+ResourceId resourceId = ResourceId.from("sampleResource");
+StateModelDefinition stateModelDef = getStateModelDef();
+Partition partition1 = new Partition(PartitionId.from(resourceId, "1"));
+Partition partition2 = new Partition(PartitionId.from(resourceId, "2"));
+FullAutoRebalancerContext rebalanceContext =
+    new FullAutoRebalancerContext.Builder(resourceId).replicaCount(1).addPartition(partition1)
+        .addPartition(partition2).stateModelDefId(stateModelDef.getStateModelDefId()).build();
+ResourceConfig resourceConfig =
+    new ResourceConfig.Builder(resourceId).rebalancerContext(rebalanceContext).build();
+```
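+
+For comparison, a SEMI_AUTO context adds explicit preference lists per partition. The following is a sketch only; the `preferenceList` builder method and its exact signature are assumptions patterned on the builder above:
+
+```
+ResourceId pinnedId = ResourceId.from("pinnedResource");
+PartitionId partitionId = PartitionId.from(pinnedId, "1");
+SemiAutoRebalancerContext semiAutoContext =
+    new SemiAutoRebalancerContext.Builder(pinnedId).replicaCount(2)
+        .addPartition(new Partition(partitionId))
+        .preferenceList(partitionId, Arrays.asList(ParticipantId.from("localhost_12345"),
+            ParticipantId.from("localhost_12346")))
+        .stateModelDefId(stateModelDef.getStateModelDefId()).build();
+```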
+
+#### Add the Cluster
+
+Now we can take the participant and resource configured above, add them to a cluster configuration, and then persist the entire cluster at once using a ClusterAccessor:
+
+```
+// configure the cluster
+ClusterId clusterId = ClusterId.from("sampleCluster");
+ClusterConfig clusterConfig = new ClusterConfig.Builder(clusterId).addParticipant(participantConfig)
+    .addResource(resourceConfig).addStateModelDefinition(stateModelDef).build();
+
+// create the cluster using a ClusterAccessor
+HelixConnection connection = new ZkHelixConnection(zkAddr);
+connection.connect();
+ClusterAccessor clusterAccessor = connection.createClusterAccessor(clusterId);
+clusterAccessor.createCluster(clusterConfig);
+```
+
+### Create, Read, Update, and Delete
+
+Note that you don't have to specify the entire cluster beforehand! Helix provides a ClusterAccessor, ParticipantAccessor, and ResourceAccessor to allow changing as much or as little of the cluster as needed on the fly. You can add a resource or participant to a cluster, reconfigure a resource, participant, or cluster, remove components, and more. See the [Javadocs](http://helix.incubator.apache.org/apidocs/reference/org/apache/helix/api/accessor/package-summary.html) for everything the accessor classes can do.
+
+#### Delta Classes
+
+Updating a cluster, participant, or resource should involve specifying only the element to change and letting Helix modify just that component. To support this, Helix includes Delta classes for ClusterConfig, ParticipantConfig, and ResourceConfig.
+
+#### Example: Updating a Participant
+
+Tags are used in Helix deployments where only certain participants are allowed to serve certain resources. To enforce this, Helix only assigns resource replicas to participants that have a tag the resource specifies. In this example, we will use ParticipantConfig.Delta to remove one participant tag and add another as part of a reconfiguration.
+
+```
+// specify the change to the participant
+ParticipantConfig.Delta delta = new ParticipantConfig.Delta(participantId).addTag("newTag").removeTag("oldTag");
+
+// update the participant configuration
+ParticipantAccessor participantAccessor = connection.createParticipantAccessor(clusterId);
+participantAccessor.updateParticipant(participantId, delta);
+```
+
+#### Example: Dropping a Resource
+Removing a resource from the cluster is quite simple:
+
+```
+clusterAccessor.dropResourceFromCluster(resourceId);
+```
+
+#### Example: Reading the Cluster
+Reading a full snapshot of the cluster is also a one-liner:
+
+```
+Cluster cluster = clusterAccessor.readCluster();
+```
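+
+The returned snapshot can then be navigated through the same logical hierarchy. For example (`getParticipantMap` and `getResourceMap` are assumed by analogy with `getLiveParticipantMap`, which is used elsewhere in these docs):
+
+```
+Map<ParticipantId, Participant> participants = cluster.getParticipantMap();
+Map<ResourceId, Resource> resources = cluster.getResourceMap();
+```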
+
+### Atomic Accessors
+
+Helix also includes versions of ClusterAccessor, ParticipantAccessor, and ResourceAccessor that can complete operations atomically relative to one another. The specific semantics of the atomic operations are described in the Javadocs. These atomic classes should be used sparingly, and only in cases where contention can adversely affect the correctness of a Helix-based cluster; for most deployments this is not the case, and using them will degrade performance. In any case, the interface for the atomic accessors mirrors that of the non-atomic accessors.
\ No newline at end of file


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/images/statemachine.png
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/images/statemachine.png b/site-releases/0.7.0-incubating/src/site/resources/images/statemachine.png
new file mode 100644
index 0000000..43d27ec
Binary files /dev/null and b/site-releases/0.7.0-incubating/src/site/resources/images/statemachine.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/resources/images/system.png
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/resources/images/system.png b/site-releases/0.7.0-incubating/src/site/resources/images/system.png
new file mode 100644
index 0000000..f8a05c8
Binary files /dev/null and b/site-releases/0.7.0-incubating/src/site/resources/images/system.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/site.xml
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/site.xml b/site-releases/0.7.0-incubating/src/site/site.xml
new file mode 100644
index 0000000..babbe1c
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/site.xml
@@ -0,0 +1,120 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project name="Apache Helix">
+  <bannerLeft>
+    <src>images/helix-logo.jpg</src>
+    <href>http://helix.incubator.apache.org/site-releases/0.7.0-incubating-site</href>
+  </bannerLeft>
+  <bannerRight>
+    <src>http://incubator.apache.org/images/egg-logo.png</src>
+    <href>http://incubator.apache.org/</href>
+  </bannerRight>
+  <version position="none"/>
+
+  <publishDate position="right"/>
+
+  <skin>
+    <groupId>org.apache.maven.skins</groupId>
+    <artifactId>maven-fluido-skin</artifactId>
+    <version>1.3.0</version>
+  </skin>
+
+  <body>
+
+    <head>
+      <script type="text/javascript">
+
+        var _gaq = _gaq || [];
+        _gaq.push(['_setAccount', 'UA-3211522-12']);
+        _gaq.push(['_trackPageview']);
+
+        (function() {
+        var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+        var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+        })();
+
+      </script>
+
+    </head>
+
+    <breadcrumbs position="left">
+      <item name="Apache Helix" href="http://helix.incubator.apache.org/"/>
+      <item name="Release 0.7.0-incubating" href="http://helix.incubator.apache.org/site-releases/0.7.0-incubating-site/"/>
+    </breadcrumbs>
+
+    <menu name="Apache Helix">
+      <item name="Home" href="../../index.html"/>
+    </menu>
+
+    <menu name="Helix 0.7.0-incubating">
+      <item name="Introduction" href="./index.html"/>
+      <item name="Getting Helix" href="./Building.html"/>
+      <item name="Core concepts" href="./Concepts.html"/>
+      <item name="Architecture" href="./Architecture.html"/>
+      <item name="Quick Start" href="./Quickstart.html"/>
+      <item name="Tutorial" href="./Tutorial.html"/>
+      <item name="Release Notes" href="releasenotes/release-0.7.0-incubating.html"/>
+      <item name="Download" href="./download.html"/>
+    </menu>
+
+    <menu name="Recipes">
+      <item name="Distributed lock manager" href="./recipes/lock_manager.html"/>
+      <item name="Rabbit MQ consumer group" href="./recipes/rabbitmq_consumer_group.html"/>
+      <item name="Rsync replicated file store" href="./recipes/rsync_replicated_file_store.html"/>
+      <item name="Service Discovery" href="./recipes/service_discovery.html"/>
+      <item name="Distributed task DAG Execution" href="./recipes/task_dag_execution.html"/>
+      <item name="User-defined rebalancer" href="./recipes/user_def_rebalancer.html"/>
+    </menu>
+<!--
+    <menu ref="reports" inherit="bottom"/>
+    <menu ref="modules" inherit="bottom"/>
+
+
+    <menu name="ASF">
+      <item name="How Apache Works" href="http://www.apache.org/foundation/how-it-works.html"/>
+      <item name="Foundation" href="http://www.apache.org/foundation/"/>
+      <item name="Sponsoring Apache" href="http://www.apache.org/foundation/sponsorship.html"/>
+      <item name="Thanks" href="http://www.apache.org/foundation/thanks.html"/>
+    </menu>
+-->
+    <footer>
+      <div class="row span16"><div>Apache Helix, Apache, the Apache feather logo, and the Apache Helix project logos are trademarks of The Apache Software Foundation.
+        All other marks mentioned may be trademarks or registered trademarks of their respective owners.</div>
+        <a href="${project.url}/privacy-policy.html">Privacy Policy</a>
+      </div>
+    </footer>
+
+
+  </body>
+
+  <custom>
+    <fluidoSkin>
+      <topBarEnabled>true</topBarEnabled>
+      <!-- twitter link work only with sidebar disabled -->
+      <sideBarEnabled>true</sideBarEnabled>
+      <googleSearch></googleSearch>
+      <twitter>
+        <user>ApacheHelix</user>
+        <showUser>true</showUser>
+        <showFollowers>false</showFollowers>
+      </twitter>
+    </fluidoSkin>
+  </custom>
+
+</project>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/site/xdoc/download.xml.vm
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/site/xdoc/download.xml.vm b/site-releases/0.7.0-incubating/src/site/xdoc/download.xml.vm
new file mode 100644
index 0000000..41355db
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/site/xdoc/download.xml.vm
@@ -0,0 +1,193 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+-->
+#set( $releaseName = "0.7.0-incubating" )
+#set( $releaseDate = "10/31/2013" )
+<document xmlns="http://maven.apache.org/XDOC/2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+          xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+
+  <properties>
+    <title>Apache Incubator Helix Downloads</title>
+    <author email="dev@helix.incubator.apache.org">Apache Helix Documentation Team</author>
+  </properties>
+
+  <body>
+    <div class="toc_container">
+      <macro name="toc">
+        <param name="class" value="toc"/>
+      </macro>
+    </div>
+    
+    <section name="Introduction">
+      <p>Apache Helix artifacts are distributed in source and binary form under the terms of the
+        <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache License, Version 2.0</a>.
+        See the included <tt>LICENSE</tt> and <tt>NOTICE</tt> files included in each artifact for additional license 
+        information.
+      </p>
+      <p>Use the links below to download a source distribution of Apache Helix.
+      It is good practice to <a href="#Verifying_Releases">verify the integrity</a> of the distribution files.</p>
+    </section>
+
+    <section name="Release">
+      <p>Release date: ${releaseDate} </p>
+      <p><a href="releasenotes/release-${releaseName}.html">${releaseName} Release notes</a></p>
+      <a name="mirror"/>
+      <subsection name="Mirror">
+
+        <p>
+          [if-any logo]
+          <a href="[link]">
+            <img align="right" src="[logo]" border="0"
+                 alt="logo"/>
+          </a>
+          [end]
+          The currently selected mirror is
+          <b>[preferred]</b>.
+          If you encounter a problem with this mirror,
+          please select another mirror.
+          If all mirrors are failing, there are
+          <i>backup</i>
+          mirrors
+          (at the end of the mirrors list) that should be available.
+        </p>
+
+        <form action="[location]" method="get" id="SelectMirror" class="form-inline">
+          Other mirrors:
+          <select name="Preferred" class="input-xlarge">
+            [if-any http]
+            [for http]
+            <option value="[http]">[http]</option>
+            [end]
+            [end]
+            [if-any ftp]
+            [for ftp]
+            <option value="[ftp]">[ftp]</option>
+            [end]
+            [end]
+            [if-any backup]
+            [for backup]
+            <option value="[backup]">[backup] (backup)</option>
+            [end]
+            [end]
+          </select>
+          <input type="submit" value="Change" class="btn"/>
+        </form>
+
+        <p>
+          You may also consult the
+          <a href="http://www.apache.org/mirrors/">complete list of mirrors.</a>
+        </p>
+
+      </subsection>
+      <subsection name="${releaseName} Sources">
+        <table>
+          <thead>
+            <tr>
+              <th>Artifact</th>
+              <th>Signatures</th>
+            </tr>
+          </thead>
+          <tbody>
+            <tr>
+              <td>
+                <a href="[preferred]incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip">helix-${releaseName}-src.zip</a>
+              </td>
+              <td>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip.asc">asc</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip.md5">md5</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip.sha1">sha1</a>
+              </td>
+            </tr>
+          </tbody>
+        </table>
+      </subsection>
+      <subsection name="${releaseName} Binaries">
+        <table>
+          <thead>
+            <tr>
+              <th>Artifact</th>
+              <th>Signatures</th>
+            </tr>
+          </thead>
+          <tbody>
+            <tr>
+              <td>
+                <a href="[preferred]incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar">helix-core-${releaseName}-pkg.tar</a>
+              </td>
+              <td>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar.asc">asc</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar.md5">md5</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar.sha1">sha1</a>
+              </td>
+            </tr>
+            <tr>
+              <td>
+                <a href="[preferred]incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar">helix-admin-webapp-${releaseName}-pkg.tar</a>
+              </td>
+              <td>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar.asc">asc</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar.md5">md5</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar.sha1">sha1</a>
+              </td>
+            </tr>
+          </tbody>
+        </table>
+      </subsection>
+    </section>
+
+<!--    <section name="Older Releases">
+    </section>-->
+
+    <section name="Verifying Releases">
+      <p>We strongly recommend you verify the integrity of the downloaded files with both PGP and MD5.</p>
+      
+      <p>The PGP signatures can be verified using <a href="http://www.pgpi.org/">PGP</a> or 
+      <a href="http://www.gnupg.org/">GPG</a>. 
+      First download the <a href="http://www.apache.org/dist/incubator/helix/KEYS">KEYS</a> as well as the
+      <tt>*.asc</tt> signature file for the particular distribution. Make sure you get these files from the main 
+      distribution directory, rather than from a mirror. Then verify the signatures using one of the following sets of
+      commands:
+
+        <source>$ pgp -ka KEYS
+$ pgp helix-*.zip.asc</source>
+      
+        <source>$ gpg --import KEYS
+$ gpg --verify helix-*.zip.asc</source>
+       </p>
+    <p>Alternatively, you can verify the MD5 signature on the files. A Unix/Linux program called  
+      <code>md5</code> or 
+      <code>md5sum</code> is included in most distributions.  It is also available as part of
+      <a href="http://www.gnu.org/software/textutils/textutils.html">GNU Textutils</a>.
+      Windows users can get binary md5 programs from these (and likely other) places:
+      <ul>
+        <li>
+          <a href="http://www.md5summer.org/">http://www.md5summer.org/</a>
+        </li>
+        <li>
+          <a href="http://www.fourmilab.ch/md5/">http://www.fourmilab.ch/md5/</a>
+        </li>
+        <li>
+          <a href="http://www.pc-tools.net/win32/md5sums/">http://www.pc-tools.net/win32/md5sums/</a>
+        </li>
+      </ul>
+    </p>
+    </section>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.7.0-incubating/src/test/conf/testng.xml
----------------------------------------------------------------------
diff --git a/site-releases/0.7.0-incubating/src/test/conf/testng.xml b/site-releases/0.7.0-incubating/src/test/conf/testng.xml
new file mode 100644
index 0000000..58f0803
--- /dev/null
+++ b/site-releases/0.7.0-incubating/src/test/conf/testng.xml
@@ -0,0 +1,27 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
+<suite name="Suite" parallel="none">
+  <test name="Test" preserve-order="false">
+    <packages>
+      <package name="org.apache.helix"/>
+    </packages>
+  </test>
+</suite>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/pom.xml
----------------------------------------------------------------------
diff --git a/site-releases/pom.xml b/site-releases/pom.xml
index a30b305..bfdb1f4 100644
--- a/site-releases/pom.xml
+++ b/site-releases/pom.xml
@@ -31,6 +31,9 @@ under the License.
 
   <modules>
     <module>0.6.1-incubating</module>
+    <module>0.6.2-incubating</module>
+    <module>0.7.0-incubating</module>
+    <module>trunk</module>
   </modules>
 
   <properties>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/pom.xml
----------------------------------------------------------------------
diff --git a/site-releases/trunk/pom.xml b/site-releases/trunk/pom.xml
new file mode 100644
index 0000000..1ccdf0d
--- /dev/null
+++ b/site-releases/trunk/pom.xml
@@ -0,0 +1,51 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <parent>
+    <groupId>org.apache.helix</groupId>
+    <artifactId>site-releases</artifactId>
+    <version>0.7.1-incubating-SNAPSHOT</version>
+  </parent>
+
+  <artifactId>trunk-site</artifactId>
+  <packaging>bundle</packaging>
+  <name>Apache Helix :: Site :: trunk</name>
+
+  <properties>
+  </properties>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.testng</groupId>
+      <artifactId>testng</artifactId>
+      <version>6.0.1</version>
+    </dependency>
+  </dependencies>
+  <build>
+    <pluginManagement>
+      <plugins>
+      </plugins>
+    </pluginManagement>
+    <plugins>
+    </plugins>
+  </build>
+</project>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/apt/privacy-policy.apt
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/apt/privacy-policy.apt b/site-releases/trunk/src/site/apt/privacy-policy.apt
new file mode 100644
index 0000000..ada9363
--- /dev/null
+++ b/site-releases/trunk/src/site/apt/privacy-policy.apt
@@ -0,0 +1,52 @@
+ ----
+ Privacy Policy
+ -----
+ Olivier Lamy
+ -----
+ 2013-02-04
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one
+~~ or more contributor license agreements.  See the NOTICE file
+~~ distributed with this work for additional information
+~~ regarding copyright ownership.  The ASF licenses this file
+~~ to you under the Apache License, Version 2.0 (the
+~~ "License"); you may not use this file except in compliance
+~~ with the License.  You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing,
+~~ software distributed under the License is distributed on an
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+~~ KIND, either express or implied.  See the License for the
+~~ specific language governing permissions and limitations
+~~ under the License.
+
+Privacy Policy
+
+  Information about your use of this website is collected using server access logs and a tracking cookie. The 
+  collected information consists of the following:
+
+  [[1]] The IP address from which you access the website;
+  
+  [[2]] The type of browser and operating system you use to access our site;
+  
+  [[3]] The date and time you access our site;
+  
+  [[4]] The pages you visit; and
+  
+  [[5]] The addresses of pages from where you followed a link to our site.
+
+  []
+
+  Part of this information is gathered using a tracking cookie set by the 
+  {{{http://www.google.com/analytics/}Google Analytics}} service and handled by Google as described in their 
+  {{{http://www.google.com/privacy.html}privacy policy}}. See your browser documentation for instructions on how to 
+  disable the cookie if you prefer not to share this data with Google.
+
+  We use the gathered information to help us make our site more useful to visitors and to better understand how and 
+  when our site is used. We do not track or collect personally identifiable information or associate gathered data 
+  with any personally identifying information from other sources.
+
+  By using this website, you consent to the collection of this data in the manner and for the purpose described above.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/apt/releasing.apt
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/apt/releasing.apt b/site-releases/trunk/src/site/apt/releasing.apt
new file mode 100644
index 0000000..11d0cd9
--- /dev/null
+++ b/site-releases/trunk/src/site/apt/releasing.apt
@@ -0,0 +1,107 @@
+ -----
+ Helix release process
+ -----
+ -----
+ 2012-12-15
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one
+~~ or more contributor license agreements.  See the NOTICE file
+~~ distributed with this work for additional information
+~~ regarding copyright ownership.  The ASF licenses this file
+~~ to you under the Apache License, Version 2.0 (the
+~~ "License"); you may not use this file except in compliance
+~~ with the License.  You may obtain a copy of the License at
+~~
+~~   http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing,
+~~ software distributed under the License is distributed on an
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+~~ KIND, either express or implied.  See the License for the
+~~ specific language governing permissions and limitations
+~~ under the License.
+
+~~ NOTE: For help with the syntax of this file, see:
+~~ http://maven.apache.org/guides/mini/guide-apt-format.html
+
+Helix release process
+
+ [[1]] Post to the dev list a few days before you plan to do a Helix release
+
+ [[2]] Your Maven settings must contain the following entry to be able to deploy.
+
+ ~/.m2/settings.xml
+
++-------------
+   <server>
+     <id>apache.releases.https</id>
+     <username></username>
+     <password></password>
+   </server>
++-------------
+
+ [[3]] Apache DAV passwords
+
++-------------
+ Add the following info into your ~/.netrc
+ machine git-wip-us.apache.org login <apache username> <password>
+
++-------------
+ [[4]] Release Helix
+    You should have a GPG agent running in the session where you will run the Maven release commands (preferred); confirm it works by running "gpg -ab" (type some text and press Ctrl-D).
+    If you do not have a GPG agent running, make sure that you have the "apache-release" profile set in your settings.xml as shown below.
+
+   Run the release
+
++-------------
+mvn release:prepare release:perform -B
++-------------
+
+  GPG configuration in maven settings xml:
+
++-------------
+<profile>
+  <id>apache-release</id>
+  <properties>
+    <gpg.passphrase>[GPG_PASSWORD]</gpg.passphrase>
+  </properties>
+</profile>
++-------------
+
+ [[5]] Go to https://repository.apache.org and close your staged repository. Note the repository URL (format https://repository.apache.org/content/repositories/orgapachehelix-019/org/apache/helix/helix/0.6-incubating/)
+
++-------------
+svn co https://dist.apache.org/repos/dist/dev/incubator/helix helix-dev-release
+cd helix-dev-release
+sh ./release-script-svn.sh version stagingRepoUrl
+then svn add <new directory created with new version as name>
+then svn ci 
++-------------
+
+ [[6]] Validate the release
+
++-------------
+  * Download sources, extract, build and run tests - mvn clean package
+  * Verify license headers - mvn -Prat -DskipTests
+  * Download binaries and .asc files
+  * Download release manager's public key - From the KEYS file, get the release manager's public key finger print and run  gpg --keyserver pgpkeys.mit.edu --recv-key <key>
+  * Validate authenticity of key - run  gpg --fingerprint <key>
+  * Check signatures of all the binaries using gpg <binary>
++-------------
+
+ [[7]] Call for a vote on the dev list and wait 72 hrs. for the vote results. 3 binding votes are necessary for the release to be finalized.
+  After the vote has passed, move the files from dist dev to dist release: svn mv https://dist.apache.org/repos/dist/dev/incubator/helix/version to https://dist.apache.org/repos/dist/release/incubator/helix/
+
+ [[8]] Prepare the release notes. Add a page in src/site/apt/releasenotes/ and change the value of \<currentRelease> in the parent pom.
+
+
+ [[9]] Send out an announcement of the release to:
+
+  * users@helix.incubator.apache.org
+
+  * dev@helix.incubator.apache.org
+
+ [[10]] Celebrate!
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/Architecture.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/Architecture.md b/site-releases/trunk/src/site/markdown/Architecture.md
new file mode 100644
index 0000000..933e917
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/Architecture.md
@@ -0,0 +1,252 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Architecture</title>
+</head>
+
+Architecture
+----------------------------
+Helix aims to provide the following abilities to a distributed system:
+
+* Automatic management of a cluster hosting partitioned, replicated resources.
+* Soft and hard failure detection and handling.
+* Automatic load balancing via smart placement of resources on servers (nodes), based on server capacity and resource profile (partition size, access patterns, etc.).
+* Centralized config management and self-discovery, eliminating the need to modify config on each node.
+* Fault tolerance and optimized rebalancing during cluster expansion.
+* Management of the entire operational lifecycle of a node: addition, start, stop, and enable/disable without downtime.
+* Cluster health monitoring and alerts on SLA violations.
+* A service discovery mechanism to route requests.
+
+To build such a system, we need a mechanism to co-ordinate between different nodes and other components in the system. This mechanism can be achieved with software that reacts to any change in the cluster and comes up with a set of tasks needed to bring the cluster to a stable state. The set of tasks will be assigned to one or more nodes in the cluster. Helix serves this purpose of managing the various components in the cluster.
+
+![Helix Design](images/system.png)
+
+Distributed System Components
+
+In general, any distributed system cluster will have the following components and properties:
+
+* A set of nodes, also referred to as instances.
+* A set of resources, which can be databases, Lucene indexes, or tasks.
+* Each resource is also partitioned into one or more Partitions.
+* Each partition may have one or more copies called replicas.
+* Each replica can have a state associated with it, for example: Master, Slave, Leader, Standby, Online, Offline, etc.
+
+Roles
+-----
+
+![Helix Design](images/HELIX-components.png)
+
+Not all nodes in a distributed system will perform similar functionalities. For example, a few nodes might be serving requests and a few nodes might be sending requests, and some nodes might be controlling the nodes in the cluster. Thus, Helix categorizes nodes by their specific roles in the system.
+
+We have divided Helix nodes into 3 logical components based on their responsibilities:
+
+1. Participant: The nodes that actually host the distributed resources.
+2. Spectator: The nodes that simply observe the Participant state and route the request accordingly. Routers, for example, need to know the instance on which a partition is hosted and its state in order to route the request to the appropriate end point.
+3. Controller: The controller observes and controls the Participant nodes. It is responsible for coordinating all transitions in the cluster and ensuring that state constraints are satisfied and cluster stability is maintained. 
+
+These are simply logical components and can be deployed as per the system requirements. For example:
+
+1. The controller can be deployed as a separate service
+2. The controller can be deployed along with a Participant, but only one Controller will be active at any given time.
+
+Both approaches have pros and cons, which will be discussed later; one can choose the mode of deployment that best fits the system\'s needs.
+
+
+## Cluster state metadata store
+
+We need a distributed store to maintain the state of the cluster and a notification system to notify if there is any change in the cluster state. Helix uses Zookeeper to achieve this functionality.
+
+Zookeeper provides:
+
+* A way to represent PERSISTENT state, which remains until it is deleted.
+* A way to represent TRANSIENT/EPHEMERAL state, which vanishes when the process that created it dies.
+* A notification mechanism for changes in PERSISTENT and EPHEMERAL state.
+
+The namespace provided by ZooKeeper is much like that of a standard file system. A name is a sequence of path elements separated by a slash (/). Every node [ZNode] in ZooKeeper\'s namespace is identified by a path.
+
+More info on Zookeeper can be found at http://zookeeper.apache.org
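+
+To make the persistent/ephemeral distinction concrete, here is a minimal sketch using the raw ZooKeeper client; the paths and connection string are illustrative only:
+
+```
+ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
+// a PERSISTENT znode remains until it is explicitly deleted
+zk.create("/test-cluster/CONFIGS", new byte[0],
+    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+// an EPHEMERAL znode vanishes when the session that created it ends
+zk.create("/test-cluster/LIVEINSTANCES/localhost_12000", new byte[0],
+    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+// registering a Watcher (third constructor argument above, or per-call)
+// yields one-time notifications when znodes or their children change
+List<String> live = zk.getChildren("/test-cluster/LIVEINSTANCES", false);
+```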
+
+## State machine and constraints
+
+Even though the concepts of Resources, Partitions, and Replicas are common to most distributed systems, one thing that differentiates one distributed system from another is the way each partition is assigned a state and the constraints on each state.
+
+For example:
+
+1. If a system serves read-only data, then all of a partition\'s replicas are equivalent and each can be either ONLINE or OFFLINE.
+2. If a system takes _both_ reads and writes but ensures that writes go through only one replica of each partition, the states will be MASTER, SLAVE, and OFFLINE. Writes go through the MASTER and are replicated to the SLAVEs. Optionally, reads can go through SLAVEs.
+
+Apart from defining a state for each partition, the transition path to each state can be application specific. For example, in order to become MASTER it might be a requirement to first become a SLAVE. This ensures that if the SLAVE does not yet have the data, it can bootstrap it from other nodes in the system as part of the OFFLINE-SLAVE transition.
+
+Helix provides a way to configure an application-specific state machine along with constraints on each state. In addition to constraints on STATE, Helix also provides a way to specify constraints on transitions (more on this later). In the transition table below, each row is the current state, each column is the target state, and each cell gives the next state to transition to in order to reach that target:
+
+```
+          OFFLINE  | SLAVE  |  MASTER  
+         _____________________________
+        |          |        |         |
+OFFLINE |   N/A    | SLAVE  | SLAVE   |
+        |__________|________|_________|
+        |          |        |         |
+SLAVE   |  OFFLINE |   N/A  | MASTER  |
+        |__________|________|_________|
+        |          |        |         |
+MASTER  | SLAVE    | SLAVE  |   N/A   |
+        |__________|________|_________|
+
+```
+
+![Helix Design](images/statemachine.png)
+
+## Concepts
+
+The following terms are used in Helix to model a state machine.
+
+* IdealState: The state we need the cluster to be in when all nodes are up and running. In other words, all state constraints are satisfied.
+* CurrentState: Represents the actual current state of each node in the cluster.
+* ExternalView: Represents the combined view of the CurrentState of all nodes.
+
+The goal of Helix is always to make the CurrentState of the system the same as the IdealState. Some scenarios where this may not hold are:
+
+* When all nodes are down
+* When one or more nodes fail
+* When new nodes are added and the partitions need to be reassigned
+
+### IdealState
+
+Helix lets the application define the IdealState on a per-resource basis, which consists of:
+
+* List of partitions. Example: 64
+* Number of replicas for each partition. Example: 3
+* Node and State for each replica.
+
+Example:
+
+* Partition-1, replica-1, Master, Node-1
+* Partition-1, replica-2, Slave, Node-2
+* Partition-1, replica-3, Slave, Node-3
+* .....
+* .....
+* Partition-p, replica-3, Slave, Node-n
+
+Helix comes with various algorithms to automatically assign the partitions to nodes. The default algorithm minimizes the number of shuffles that happen when new nodes are added to the system.
+
+### CurrentState
+
+Every instance in the cluster hosts one or more partitions of a resource. Each of the partitions has a state associated with it.
+
+Example: Node-1
+
+* Partition-1, Master
+* Partition-2, Slave
+* ....
+* ....
+* Partition-p, Slave
+
+### ExternalView
+
+External clients need to know the state of each partition in the cluster and the node hosting that partition. Helix provides one view of the system to Spectators as the _ExternalView_, which is simply an aggregate of the CurrentState of all nodes.
+
+* Partition-1, replica-1, Master, Node-1
+* Partition-1, replica-2, Slave, Node-2
+* Partition-1, replica-3, Slave, Node-3
+* .....
+* .....
+* Partition-p, replica-3, Slave, Node-n
+
+## Process Workflow
+
+A node process in a cluster can operate in one of the following modes:
+
+* Participant: The process registers itself in the cluster, acts on the messages received in its queue, and updates the current state.  Example: a storage node in a distributed database
+* Spectator: The process is simply interested in changes in the ExternalView.
+* Controller: This process actively controls the cluster by reacting to changes in cluster state and sending messages to Participants.
+
+
+### Participant Node Process
+
+* When a node starts up, it registers itself under _LiveInstances_
+* After registering, it waits for new _Messages_ in the message queue
+* When it receives a message, it performs the required task as indicated in the message
+* After the task is completed, it updates the CurrentState depending on the task outcome
+
+### Controller Process
+
+* Watches IdealState
+* Notified when a node goes down/comes up or when a node is added/removed; watches _LiveInstances_ and the CurrentState of each node in the cluster
+* Triggers appropriate state transitions by sending message to Participants
+
+### Spectator Process
+
+* When the process starts, it asks the Helix agent to be notified of changes in ExternalView
+* Whenever it receives a notification, it reads the ExternalView and performs its required duties.
+
+#### Interaction between controller, participant and spectator
+
+The following picture shows how controllers, participants and spectators interact with each other.
+
+![Helix Architecture](images/helix-architecture.png)
+
+## Core algorithm
+
+* The Controller gets the IdealState and the CurrentState of active storage nodes from Zookeeper
+* It computes the delta between the IdealState and the CurrentState for each partition across all participant nodes
+* For each partition, it computes tasks based on the state machine table. It\'s possible to configure priorities on state transitions. For example, in the case of Master-Slave:
+    * Attempt mastership transfer if possible without violating constraints
+    * Partition addition
+    * Partition drop
+* Add the tasks to the respective queue for each storage node, in parallel where possible (i.e., when the tasks are mutually independent)
+* If a task depends on another task being completed, do not add it yet
+* After a Participant completes any task, the Controller is notified of the change, and the state transition algorithm is re-run until the CurrentState matches the IdealState (a simplified sketch follows this list)
+
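+A simplified sketch of that loop in Java-like pseudocode; all types and helper methods here are illustrative, not the actual Helix controller implementation:
+
+```
+for (Partition partition : resource.getPartitions()) {
+  Map<Node, State> ideal = idealState.getReplicaMap(partition);
+  Map<Node, State> current = currentState.getReplicaMap(partition);
+  for (Node node : ideal.keySet()) {
+    State target = ideal.get(node);
+    State actual = current.containsKey(node) ? current.get(node) : State.OFFLINE;
+    if (!target.equals(actual)) {
+      // look up the next hop in the state machine table,
+      // e.g. OFFLINE -> SLAVE -> MASTER requires two transitions
+      State next = stateModel.getNextStateForTransition(actual, target);
+      // enqueue the transition message only if its prerequisites have completed
+      messageQueue(node).add(transitionMessage(partition, actual, next));
+    }
+  }
+}
+```
+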
+## Helix ZNode layout
+
+Helix organizes ZNodes under the cluster name in multiple levels.
+
+The top level (under the cluster name) ZNodes are all Helix-defined and in upper case:
+
+* PROPERTYSTORE: application property store
+* STATEMODELDEFS: state model definitions
+* INSTANCES: instance runtime information including current state and messages
+* CONFIGS: configurations
+* IDEALSTATES: ideal states
+* EXTERNALVIEW: external views
+* LIVEINSTANCES: live instances
+* CONTROLLER: cluster controller runtime information
+
+Under INSTANCES, there are runtime ZNodes for each instance. An instance organizes ZNodes as follows:
+
+* CURRENTSTATES
+    * sessionId
+    * resourceName
+* ERRORS
+* STATUSUPDATES
+* MESSAGES
+* HEALTHREPORT
+
+Under CONFIGS, there are different scopes of configurations:
+
+* RESOURCE: contains resource scope configurations
+* CLUSTER: contains cluster scope configurations
+* PARTICIPANT: contains participant scope configurations
+
+The following image shows an example of the Helix ZNode layout for a cluster named "test-cluster":
+
+![Helix znode layout](images/helix-znode-layout.png)

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/Building.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/Building.md b/site-releases/trunk/src/site/markdown/Building.md
new file mode 100644
index 0000000..2d8a51b
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/Building.md
@@ -0,0 +1,29 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Build Instructions
+------------------
+
+Requirements: JDK 1.6+, Maven 2.0.8+
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+mvn install package -DskipTests
+```

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/Concepts.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/Concepts.md b/site-releases/trunk/src/site/markdown/Concepts.md
new file mode 100644
index 0000000..fa5d0ba
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/Concepts.md
@@ -0,0 +1,275 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Concepts</title>
+</head>
+
+Concepts
+----------------------------
+
+Helix is based on the idea that a given task has the following attributes associated with it:
+
+* _Location of the task_. For example, it runs on node N1
+* _State_. For example, it is running, stopped, etc.
+
+In Helix terminology, a task is referred to as a _resource_.
+
+### IdealState
+
+IdealState simply allows one to map tasks to location and state. A standard way of expressing this in Helix:
+
+```
+  "TASK_NAME" : {
+    "LOCATION" : "STATE"
+  }
+
+```
+Consider a simple case where you want to launch a task \'myTask\' on node \'N1\'. The IdealState for this can be expressed as follows:
+
+```
+{
+  "id" : "MyTask",
+  "mapFields" : {
+    "myTask" : {
+      "N1" : "ONLINE",
+    }
+  }
+}
+```
+### Partition
+
+If this task gets too big to fit on one box, you might want to divide it into subtasks. Each subtask is referred to as a _partition_ in Helix. Let\'s say you want to divide the task into 3 subtasks/partitions; the IdealState can then be changed as shown below.
+
+\'myTask_0\', \'myTask_1\', and \'myTask_2\' are logical names representing the partitions of myTask. The partitions run on N1, N2, and N3, respectively.
+
+```
+{
+  "id" : "myTask",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+  }
+ "mapFields" : {
+    "myTask_0" : {
+      "N1" : "ONLINE",
+    },
+    "myTask_1" : {
+      "N2" : "ONLINE",
+    },
+    "myTask_2" : {
+      "N3" : "ONLINE",
+    }
+  }
+}
+```
+
+### Replica
+
+Partitioning allows one to split the data/task into multiple subparts. But let\'s say the request rate for each partition increases. The common solution is to have multiple copies for each partition. Helix refers to the copy of a partition as a _replica_.  Adding a replica also increases the availability of the system during failures. One can see this methodology employed often in search systems. The index is divided into shards, and each shard has multiple copies.
+
+Let\'s say you want to add one additional replica for each task. The IdealState can simply be changed as shown below. 
+
+To increase the availability of the system, it\'s better to place the replicas of a given partition on different nodes.
+
+```
+{
+  "id" : "myIndex",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+  },
+ "mapFields" : {
+    "myIndex_0" : {
+      "N1" : "ONLINE",
+      "N2" : "ONLINE"
+    },
+    "myIndex_1" : {
+      "N2" : "ONLINE",
+      "N3" : "ONLINE"
+    },
+    "myIndex_2" : {
+      "N3" : "ONLINE",
+      "N1" : "ONLINE"
+    }
+  }
+}
+```
+
+### State 
+
+Now let\'s take a slightly more complicated scenario where a task represents a database.  Unlike an index, which is generally read-only, a database supports both reads and writes. Keeping the data consistent among the replicas is crucial in distributed data stores. One commonly applied technique is to assign one replica as the MASTER and the remaining replicas as SLAVEs. All writes go to the MASTER and are then replicated to the SLAVE replicas.
+
+Helix allows one to assign different states to each replica. Let\'s say you have two MySQL instances N1 and N2, where one will serve as MASTER and the other as SLAVE. The IdealState can be changed to:
+
+```
+{
+  "id" : "myDB",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "1",
+    "REPLICAS" : "2",
+  },
+  "mapFields" : {
+    "myDB" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    }
+  }
+}
+
+```
+
+
+### State Machine and Transitions
+
+IdealState allows one to exactly specify the desired state of the cluster. Given an IdealState, Helix takes up the responsibility of ensuring that the cluster reaches the IdealState.  The Helix _controller_ reads the IdealState and then commands each Participant to take appropriate actions to move from one state to another until it matches the IdealState.  These actions are referred to as _transitions_ in Helix.
+
+The next logical question is:  how does the _controller_ compute the transitions required to get to IdealState?  This is where the finite state machine concept comes in. Helix allows applications to plug in a finite state machine.  A state machine consists of the following:
+
+* State: Describes the role of a replica
+* Transition: An action that allows a replica to move from one state to another, thus changing its role.
+
+Here is an example of the MasterSlave state machine:
+
+```
+          OFFLINE  | SLAVE  |  MASTER  
+         _____________________________
+        |          |        |         |
+OFFLINE |   N/A    | SLAVE  | SLAVE   |
+        |__________|________|_________|
+        |          |        |         |
+SLAVE   |  OFFLINE |   N/A  | MASTER  |
+        |__________|________|_________|
+        |          |        |         |
+MASTER  | SLAVE    | SLAVE  |   N/A   |
+        |__________|________|_________|
+
+```
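+
+To make this concrete, here is a minimal sketch of how a Participant might implement the callbacks behind this table. The class name and the store-specific actions are hypothetical; StateModel, @StateModelInfo, and @Transition come from the Helix participant API (org.apache.helix.participant.statemachine).
+
+```
+@StateModelInfo(initialState = "OFFLINE", states = { "OFFLINE", "SLAVE", "MASTER" })
+public class MyStoreStateModel extends StateModel {
+  @Transition(from = "OFFLINE", to = "SLAVE")
+  public void onBecomeSlaveFromOffline(Message message, NotificationContext context) {
+    // hypothetical: open the store for this partition and start replicating from the master
+  }
+
+  @Transition(from = "SLAVE", to = "MASTER")
+  public void onBecomeMasterFromSlave(Message message, NotificationContext context) {
+    // hypothetical: catch up on pending updates, then start accepting writes
+  }
+
+  @Transition(from = "MASTER", to = "SLAVE")
+  public void onBecomeSlaveFromMaster(Message message, NotificationContext context) {
+    // hypothetical: stop accepting writes; keep serving reads and replicating
+  }
+
+  @Transition(from = "SLAVE", to = "OFFLINE")
+  public void onBecomeOfflineFromSlave(Message message, NotificationContext context) {
+    // hypothetical: close the store for this partition
+  }
+}
+```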
+
+Helix allows each resource to be associated with one state machine. This means you can have one resource as an index and another as a database in the same cluster. One can associate each resource with a state machine as follows:
+
+```
+{
+  "id" : "myDB",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "1",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "myDB" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    }
+  }
+}
+
+```
+
+### Current State
+
+The CurrentState of a resource simply represents its actual state at a Participant. In the example below:
+
+* INSTANCE_NAME: Unique name representing the process
+* SESSION_ID: ID that is automatically assigned every time a process joins the cluster
+
+```
+{
+  "id":"MyResource"
+  ,"simpleFields":{
+    ,"SESSION_ID":"13d0e34675e0002"
+    ,"INSTANCE_NAME":"node1"
+    ,"STATE_MODEL_DEF":"MasterSlave"
+  }
+  ,"mapFields":{
+    "MyResource_0":{
+      "CURRENT_STATE":"SLAVE"
+    }
+    ,"MyResource_1":{
+      "CURRENT_STATE":"MASTER"
+    }
+    ,"MyResource_2":{
+      "CURRENT_STATE":"MASTER"
+    }
+  }
+}
+```
+Each node in the cluster has its own CurrentState.
+
+### External View
+
+In order to communicate with the Participants, external clients need to know the current state of each Participant. These external clients are referred to as Spectators. To make a Spectator\'s life simple, Helix provides an ExternalView, which is an aggregated view of the current state across all nodes. The ExternalView has a format similar to IdealState.
+
+```
+{
+  "id":"MyResource",
+  "mapFields":{
+    "MyResource_0":{
+      "N1":"SLAVE",
+      "N2":"MASTER",
+      "N3":"OFFLINE"
+    },
+    "MyResource_1":{
+      "N1":"MASTER",
+      "N2":"SLAVE",
+      "N3":"ERROR"
+    },
+    "MyResource_2":{
+      "N1":"MASTER",
+      "N2":"SLAVE",
+      "N3":"SLAVE"
+    }
+  }
+}
+```
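+
+A Spectator typically consumes the ExternalView through a routing table rather than parsing it directly. A minimal sketch, with hypothetical cluster and instance names (RoutingTableProvider is from org.apache.helix.spectator):
+
+```
+HelixManager manager = HelixManagerFactory.getZKHelixManager("myCluster", "router",
+    InstanceType.SPECTATOR, zkConnectString);
+manager.connect();
+
+// RoutingTableProvider caches the ExternalView and is refreshed on every change
+RoutingTableProvider routingTable = new RoutingTableProvider();
+manager.addExternalViewChangeListener(routingTable);
+
+// route writes for MyResource_1 to its current MASTER
+List<InstanceConfig> masters =
+    routingTable.getInstances("MyResource", "MyResource_1", "MASTER");
+```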
+
+### Rebalancer
+
+The core component of Helix is the Controller, which runs the Rebalancer algorithm on every cluster event. Cluster events can be one of the following:
+
+* Nodes start/stop and soft/hard failures
+* New nodes are added/removed
+* Ideal state changes
+
+There are a few more, such as configuration changes. The key takeaway: there are many ways to trigger the rebalancer.
+
+When the rebalancer runs, it simply does the following:
+
+* Compares the IdealState and current state
+* Computes the transitions required to reach the IdealState
+* Issues the transitions to each Participant
+
+The above steps happen for every change in the system. Once the current state matches the IdealState, the system is considered stable, which implies \'IdealState = CurrentState = ExternalView\'.
+
+### Dynamic IdealState
+
+One of the things that makes Helix powerful is that IdealState can be changed dynamically. This means one can listen to cluster events like node failures and dynamically change the ideal state. Helix will then take care of triggering the respective transitions in the system.
+
+Helix comes with a few algorithms to automatically compute the IdealState based on the constraints. For example, if you have a resource of 3 partitions and 2 replicas, Helix can automatically compute the IdealState based on the nodes that are currently active. See the [tutorial](./tutorial_rebalance.html) to find out more about various execution modes of Helix like FULL_AUTO, SEMI_AUTO and CUSTOMIZED. 
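+
+A minimal sketch of such a dynamic update, with hypothetical cluster, resource, and node names (ZKHelixAdmin is in org.apache.helix.manager.zk):
+
+```
+HelixAdmin admin = new ZKHelixAdmin(zkConnectString);
+IdealState idealState = admin.getResourceIdealState("myCluster", "myDB");
+
+// e.g. after learning that N3 is gone, hand its replica of myDB_0 to N4
+Map<String, String> replicas = idealState.getRecord().getMapField("myDB_0");
+replicas.remove("N3");
+replicas.put("N4", "MASTER");
+idealState.getRecord().setMapField("myDB_0", replicas);
+
+// Helix will compute and issue the transitions needed to reach the new IdealState
+admin.setResourceIdealState("myCluster", "myDB", idealState);
+```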
+
+
+
+
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/Features.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/Features.md b/site-releases/trunk/src/site/markdown/Features.md
new file mode 100644
index 0000000..ba9d0e7
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/Features.md
@@ -0,0 +1,313 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Features</title>
+</head>
+
+Features
+----------------------------
+
+
+### CONFIGURING IDEALSTATE
+
+
+Read the Concepts page for the definition of IdealState.
+
+The placement of partitions in a DDS is critical to the reliability and scalability of the system.
+For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can guarantee this.
+By default, Helix comes with a variant of consistent hashing based on the RUSH algorithm.
+
+This means that given the number of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
+
+* Each node has the same number of partitions, and replicas of the same partition do not reside on the same node
+* When a node fails, the partitions will be redistributed evenly among the remaining nodes
+* When new nodes are added, the number of partitions moved will be minimized while still satisfying the above two criteria
+
+
+Helix provides multiple ways to control the placement and state of a replica. 
+
+```
+
+            |AUTO REBALANCE|   AUTO     |   CUSTOM  |       
+            -----------------------------------------
+   LOCATION | HELIX        |  APP       |  APP      |
+            -----------------------------------------
+      STATE | HELIX        |  HELIX     |  APP      |
+            -----------------------------------------
+```
+
+#### HELIX EXECUTION MODE 
+
+
+Idealstate is defined as the state of the DDS when all nodes are up, running, and healthy.
+Helix uses this as the target state of the system and computes the appropriate transitions needed to bring the system to a stable state.
+
+Helix supports 3 different execution modes which allow the application to explicitly control the placement and state of the replicas.
+
+##### AUTO_REBALANCE
+
+When the idealstate mode is set to AUTO_REBALANCE, Helix controls both the location and the state of each replica. This option is useful for applications where creating a replica is not expensive. Example:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO_REBALANCE",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [],
+    "MyResource_1" : [],
+    "MyResource_2" : []
+  },
+  "mapFields" : {
+  }
+}
+```
+
+If there are 3 nodes in the cluster, then Helix will internally compute the ideal state as 
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Another typical example is evenly distributing a group of tasks among the currently alive processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node.
+When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node.
+
+##### AUTO
+
+When the idealstate mode is set to AUTO, Helix only controls the STATE of the replicas, whereas the location of each partition is controlled by the application. Example: the idealstate below indicates that 'MyResource_0' must reside only on node1 and node2, but leaves the assignment of STATE to Helix.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "IDEAL_STATE_MODE" : "AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [node1, node2],
+    "MyResource_1" : [node2, node3],
+    "MyResource_2" : [node3, node1]
+  },
+  "mapFields" : {
+  }
+}
+```
+In this mode, when node1 fails, unlike in AUTO_REBALANCE mode, the partition is not moved from node1 to other nodes in the cluster. Instead, Helix will decide to change the state of MyResource_0 on node2 based on the system constraints. For example, if a system constraint specifies that there should be 1 master, and the master fails, then node2 will be made the new master.
+
+##### CUSTOM
+
+Helix offers a third mode called CUSTOM, in which the application can completely control the placement and state of each replica. The application implements an interface that Helix invokes when the cluster state changes.
+Within this callback, the application can recompute the idealstate. Helix will then issue the appropriate transitions such that IdealState and CurrentState converge.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+      "IDEAL_STATE_MODE" : "CUSTOM",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+For example, the current state of the system might be 'MyResource_0' -> {N1:MASTER, N2:SLAVE}, and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE, N2:MASTER}. Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters.
+Helix will first issue MASTER-->SLAVE to N1, and after it completes, issue SLAVE-->MASTER to N2.
+ 
+
+### State Machine Configuration
+
+Helix comes with 3 default state models that are most commonly used. It's possible to have multiple state models in a cluster.
+Every resource that is added should have a reference to a state model.
+
+* MASTER-SLAVE: Has 3 states: OFFLINE, SLAVE, MASTER. At most 1 master; the number of slaves depends on the replication factor, which can be specified while adding the resource.
+* ONLINE-OFFLINE: Has 2 states: OFFLINE and ONLINE. A very simple state model; most applications start off with it.
+* LEADER-STANDBY: 1 leader and many standbys. In general, the standbys are idle.
+
+Apart from providing the state machine configuration, one can specify constraints on states and transitions.
+
+For example, one can specify:
+
+* MASTER:1. The maximum number of replicas in the MASTER state at any time is 1.
+* OFFLINE-SLAVE:5. The maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5.
+
+STATE PRIORITY
+Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 master and 2 slaves, but only 1 node is active, Helix must promote it to master. This behavior is achieved by providing the state priority list as MASTER,SLAVE.
+
+STATE TRANSITION PRIORITY
+Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints.
+One can control this by overriding the priority order.
+ 
+### Config management
+
+Helix allows applications to store application-specific properties. The configuration can have different scopes:
+
+* Cluster
+* Node specific
+* Resource specific
+* Partition specific
+
+Helix also provides notifications when any configs are changed. This allows applications to support dynamic configuration changes.
+
+See HelixManager.getConfigAccessor for more info
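+
+A minimal sketch of reading and writing a resource-scoped property (the key and value are hypothetical, and the scope-builder class name may vary slightly across releases):
+
+```
+ConfigAccessor configAccessor = manager.getConfigAccessor();
+HelixConfigScope scope = new HelixConfigScopeBuilder(ConfigScopeProperty.RESOURCE)
+    .forCluster("myCluster").forResource("myDB").build();
+
+// write an application-specific property, then read it back
+configAccessor.set(scope, "batchSize", "100");
+String batchSize = configAccessor.get(scope, "batchSize");
+```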
+
+### Intra cluster messaging api
+
+This is an interesting feature that is quite useful in practice. Oftentimes, nodes in a DDS require a mechanism to interact with each other. One such requirement is the process of bootstrapping a replica.
+
+Consider a search system use case where an index replica starts up without an index. A common solution is to fetch the index from a well-known location or to copy it from another replica.
+Helix provides a messaging API that can be used to talk to other nodes in the system. The value Helix adds here is that the message recipients can be specified in terms of resource,
+partition, and state, and Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of P1.
+Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
+
+This is a very generic API that can also be used to schedule periodic tasks in the cluster, like data backups.
+System admins can also perform ad hoc tasks, like an on-demand backup, or execute a system command (like rm -rf ;-)) across all nodes.
+
+```
+      ClusterMessagingService messagingService = manager.getMessagingService();
+      //CONSTRUCT THE MESSAGE
+      Message requestBackupUriRequest = new Message(
+          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+      requestBackupUriRequest
+          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
+      requestBackupUriRequest.setMsgState(MessageState.NEW);
+      //SET THE RECIPIENT CRITERIA, All nodes that satisfy the criteria will receive the message
+      Criteria recipientCriteria = new Criteria();
+      recipientCriteria.setInstanceName("%");
+      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+      recipientCriteria.setResource("MyDB");
+      recipientCriteria.setPartition("");
+      //Should be processed only by participants that are active at the time the message is sent.
+      //This means if the recipient is restarted after the message is sent, it will not be processed.
+      recipientCriteria.setSessionSpecific(true);
+      // wait for 30 seconds
+      int timeout = 30000;
+      //The handler that will be invoked when any recipient responds to the message.
+      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
+      //This will return only after all recipients respond or after timeout.
+      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
+          requestBackupUriRequest, responseHandler, timeout);
+```
+
+See HelixManager.getMessagingService for more info.
+
+
+### Application specific property storage
+
+There are several use cases where applications need support for distributed data structures. Helix uses Zookeeper to store the application data and hence provides notifications when the data changes.
+One value-add Helix provides is the ability to cache the data locally with a write-through cache. This is more efficient than reading from ZK every time.
+
+See HelixManager.getHelixPropertyStore
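+
+A minimal sketch, assuming a hypothetical path and payload (AccessOption is in org.apache.helix):
+
+```
+HelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
+
+// store application data; writes go through the cache to Zookeeper
+ZNRecord record = new ZNRecord("backupInfo");
+record.setSimpleField("lastBackupUri", "hdfs://backups/myDB/2013-11-15");
+store.set("/BACKUPS/myDB", record, AccessOption.PERSISTENT);
+
+// reads are served from the cache when possible
+ZNRecord readBack = store.get("/BACKUPS/myDB", null, AccessOption.PERSISTENT);
+```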
+
+### Throttling
+
+Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some transitions may be lightweight, but some might involve moving data around, which is quite expensive.
+Helix allows applications to set thresholds on transitions. The threshold can be set at multiple scopes:
+
+* MessageType, e.g. STATE_TRANSITION
+* TransitionType, e.g. SLAVE-MASTER
+* Resource, e.g. database
+* Node, i.e. the max transitions in parallel per node
+
+See HelixManager.getHelixAdmin.addMessageConstraint() 
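+
+A minimal sketch of capping concurrent state transitions per node (the constraint id and value are hypothetical; verify the attribute names and admin method against your release):
+
+```
+ConstraintItemBuilder builder = new ConstraintItemBuilder();
+builder.addConstraintAttribute("MESSAGE_TYPE", "STATE_TRANSITION")
+       .addConstraintAttribute("INSTANCE", ".*")          // applies to every node
+       .addConstraintAttribute("CONSTRAINT_VALUE", "10"); // at most 10 in parallel
+
+HelixAdmin admin = manager.getClusterManagmentTool();
+admin.setConstraint("myCluster", ConstraintType.MESSAGE_CONSTRAINT,
+    "maxTransitionsPerNode", builder.build());
+```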
+
+### Health monitoring and alerting
+
+This is currently under development and not yet production-ready.
+
+Helix provides the ability for each node in the system to report health metrics on a periodic basis.
+Helix supports multiple ways to aggregate these metrics, like simple SUM, AVG, EXPONENTIAL DECAY, and WINDOW. Helix will only persist the aggregated value.
+Applications can define thresholds on the aggregate values according to their SLAs, and when an SLA is violated, Helix will fire an alert.
+Currently Helix only fires an alert, but eventually we plan to use these metrics to either mark a node dead or load balance the partitions.
+This feature will be valuable for distributed systems that support multi-tenancy and have huge variation in workload patterns. Another place this can be used is to detect skewed partitions and rebalance the cluster.
+
+This feature is not yet stable, and we do not recommend using it in production.
+
+
+### Controller deployment modes
+
+Read the Architecture wiki for more details on the role of the controller. In simple terms, it controls the participants in the cluster by issuing transitions.
+
+Helix provides multiple options to deploy the controller.
+
+#### STANDALONE
+
+The controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since a single controller can be a single point of failure, multiple controller processes are required for reliability.
+Even if multiple controllers are running, only one will be actively managing the cluster at any time, as decided by a leader election process. If the leader fails, another controller will resume managing the cluster.
+
+Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the Controller As a Service option.
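+
+A standalone controller can also be started in-process; a minimal sketch (the controller name is hypothetical; HelixControllerMain is in org.apache.helix.controller):
+
+```
+HelixManager controller = HelixControllerMain.startHelixController(
+    zkConnectString, "myCluster", "controller_1", HelixControllerMain.STANDALONE);
+```
+
+The same entry point accepts HelixControllerMain.DISTRIBUTED for the Controller As a Service mode described below.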
+
+#### EMBEDDED
+
+If setting up a separate controller process is not viable, it is possible to embed the controller as a library in each participant.
+
+#### CONTROLLER AS A SERVICE
+
+One of the cool features we added in Helix is the ability to use a set of controllers to manage a large number of clusters.
+For example, if you have X clusters to be managed, instead of deploying X*3 controllers (3 controllers per cluster for fault tolerance), one can deploy only 3 controllers. Each controller can manage X/3 clusters.
+If any controller fails, the remaining two will manage X/2 clusters each. At LinkedIn, we always deploy controllers in this mode.
+
+
+
+
+
+
+
+ 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/markdown/Quickstart.md
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/markdown/Quickstart.md b/site-releases/trunk/src/site/markdown/Quickstart.md
new file mode 100644
index 0000000..348f58a
--- /dev/null
+++ b/site-releases/trunk/src/site/markdown/Quickstart.md
@@ -0,0 +1,621 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Quickstart</title>
+</head>
+
+Get Helix
+---------
+
+First, let\'s get Helix: either build it from source, or download a release.
+
+### Build
+
+    git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+    cd incubator-helix
+    ./build
+    cd helix-core/target/helix-core-pkg/bin # this folder contains all the scripts used in the following sections
+    chmod +x *
+
+Overview
+--------
+
+In this Quickstart, we\'ll set up a master-slave replicated, partitioned system.  Then we\'ll demonstrate how to add a node, rebalance the partitions, and show how Helix manages failover.
+
+
+Let\'s Do It
+------------
+
+Helix provides command line interfaces to set up the cluster and view the cluster state. The best way to understand how Helix views a cluster is to build a cluster.
+
+#### First, get to the tools directory
+
+If you built the code
+
+```
+cd incubator-helix/helix-core/target/helix-core-pkg/bin
+```
+
+If you downloaded the release package, extract it.
+
+
+Short Version
+-------------
+You can observe the components working together in this demo, which does the following:
+
+* Create a cluster
+* Add 2 nodes (participants) to the cluster
+* Set up a resource with 6 partitions and 2 replicas: 1 Master, and 1 Slave per partition
+* Show the cluster state after Helix balances the partitions
+* Add a third node
+* Show the cluster state.  Note that the third node has taken mastership of 2 partitions.
+* Kill the third node (Helix takes care of failover)
+* Show the cluster state.  Note that the two surviving nodes take over mastership of the partitions from the failed node
+
+##### Run the demo
+
+```
+cd incubator-helix/helix-core/target/helix-core-pkg/bin
+./quickstart.sh
+```
+
+##### 2 nodes are set up and the partitions rebalanced
+
+The cluster state is as follows:
+
+```
+CLUSTER STATE: After starting 2 nodes
+	                     localhost_12000	localhost_12001	
+	       MyResource_0	M			S		
+	       MyResource_1	S			M		
+	       MyResource_2	M			S		
+	       MyResource_3	M			S		
+	       MyResource_4	S			M  
+	       MyResource_5	S			M  
+```
+
+Note there is one master and one slave per partition.
+
+##### A third node is added and the cluster rebalanced
+
+The cluster state changes to:
+
+```
+CLUSTER STATE: After adding a third node
+                 	       localhost_12000	    localhost_12001	localhost_12002	
+	       MyResource_0	    S			  M		      S		
+	       MyResource_1	    S			  S		      M	 
+	       MyResource_2	    M			  S	              S  
+	       MyResource_3	    S			  S                   M  
+	       MyResource_4	    M			  S	              S  
+	       MyResource_5	    S			  M                   S  
+```
+
+Note there is one master and _two_ slaves per partition.  This is expected because there are three nodes.
+
+##### Finally, a node is killed to simulate a failure
+
+Helix makes sure each partition has a master.  The cluster state changes to:
+
+```
+CLUSTER STATE: After the 3rd node stops/crashes
+                	       localhost_12000	  localhost_12001	localhost_12002	
+	       MyResource_0	    S			M		      -		
+	       MyResource_1	    S			M		      -	 
+	       MyResource_2	    M			S	              -  
+	       MyResource_3	    M			S                     -  
+	       MyResource_4	    M			S	              -  
+	       MyResource_5	    S			M                     -  
+```
+
+
+Long Version
+------------
+Now you can run the same steps by hand.  In the detailed version, we\'ll do the following:
+
+* Define a cluster
+* Add two nodes to the cluster
+* Add a 6-partition resource with 1 master and 2 slave replicas per partition
+* Verify that the cluster is healthy and inspect the Helix view
+* Expand the cluster: add a few nodes and rebalance the partitions
+* Failover: stop a node and verify the mastership transfer
+
+### Install and Start Zookeeper
+
+Zookeeper can be started in standalone mode or replicated mode.
+
+More info is available at 
+
+* http://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html
+* http://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html#sc_zkMulitServerSetup
+
+In this example, let\'s start zookeeper in local mode.
+
+##### Start Zookeeper locally on port 2199
+
+    ./start-standalone-zookeeper.sh 2199 &
+
+### Define the Cluster
+
+The helix-admin tool is used for cluster administration tasks. In the Quickstart, we\'ll use the command line interface. Helix supports a REST interface as well.
+
+zookeeper_address is of the format host:port, e.g. localhost:2199 for standalone, or host1:port,host2:port for a multi-node setup.
+
+Next, we\'ll set up a cluster MYCLUSTER with these attributes:
+
+* 3 instances running on localhost at ports 12913, 12914, and 12915
+* One database named myDB with 6 partitions
+* Each partition will have 3 replicas: 1 master and 2 slaves
+* Zookeeper running locally at localhost:2199
+
+##### Create the cluster MYCLUSTER
+    ## helix-admin.sh --zkSvr <zk_address> --addCluster <clustername> 
+    ./helix-admin.sh --zkSvr localhost:2199 --addCluster MYCLUSTER 
+
+##### Add nodes to the cluster
+
+In this case, we\'ll add three nodes: localhost:12913, localhost:12914, and localhost:12915
+
+    ## helix-admin.sh --zkSvr <zk_address>  --addNode <clustername> <host:port>
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12913
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12914
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12915
+
+#### Define the resource and partitioning
+
+In this example, the resource is a database, partitioned 6 ways.  (In a production system, it\'s common to over-partition for better load balancing.  Helix has been used in production to manage hundreds of databases each with 10s or 100s of partitions running on 10s of physical nodes.)
+
+##### Create a database with 6 partitions using the MasterSlave state model. 
+
+Helix ensures there will be exactly one master for each partition.
+
+    ## helix-admin.sh --zkSvr <zk_address> --addResource <clustername> <resourceName> <numPartitions> <StateModelName>
+    ./helix-admin.sh --zkSvr localhost:2199 --addResource MYCLUSTER myDB 6 MasterSlave
+   
+##### Now we can let Helix assign partitions to nodes. 
+
+This command will distribute the partitions amongst all the nodes in the cluster. In this example, each partition has 3 replicas.
+
+    ## helix-admin.sh --zkSvr <zk_address> --rebalance <clustername> <resourceName> <replication factor>
+    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+
+Now the cluster is defined in Zookeeper: the nodes (localhost:12913, localhost:12914, localhost:12915), the resource (myDB, with 6 partitions using the MasterSlave model), and the _ideal state_, calculated assuming a replication factor of 3.
+
+##### Start the Helix Controller
+
+Now that the cluster is defined in Zookeeper, the Helix controller can manage the cluster.
+
+    ## Start the cluster manager, which will manage MYCLUSTER
+    ./run-helix-controller.sh --zkSvr localhost:2199 --cluster MYCLUSTER 2>&1 > /tmp/controller.log &
+
+##### Start up the cluster to be managed
+
+We\'ve started up Zookeeper, defined the cluster, the resources, the partitioning, and started up the Helix controller.  Next, we\'ll start up the nodes of the system to be managed.  Each node is a Participant, which is an instance of the system component to be managed.  Helix assigns work to Participants, keeps track of their roles and health, and takes action when a node fails.
+
+    # start up each instance.  These are mock implementations that are actively managed by Helix
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12913 --stateModelType MasterSlave 2>&1 > /tmp/participant_12913.log 
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12914 --stateModelType MasterSlave 2>&1 > /tmp/participant_12914.log
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12915 --stateModelType MasterSlave 2>&1 > /tmp/participant_12915.log
+
+
+#### Inspect the Cluster
+
+Now, let\'s see the Helix view of our cluster.  We\'ll work our way down as follows:
+
+```
+Clusters -> MYCLUSTER -> instances -> instance detail
+                      -> resources -> resource detail
+                      -> partitions
+```
+
+A single Helix controller can manage multiple clusters, though so far, we\'ve only defined one cluster.  Let\'s see:
+
+```
+## List existing clusters
+./helix-admin.sh --zkSvr localhost:2199 --listClusters        
+
+Existing clusters:
+MYCLUSTER
+```
+                                       
+Now, let\'s see the Helix view of MYCLUSTER
+
+```
+## helix-admin.sh --zkSvr <zk_address> --listClusterInfo <clusterName> 
+./helix-admin.sh --zkSvr localhost:2199 --listClusterInfo MYCLUSTER
+
+Existing resources in cluster MYCLUSTER:
+myDB
+Instances in cluster MYCLUSTER:
+localhost_12915
+localhost_12914
+localhost_12913
+```
+
+
+Let\'s look at the details of an instance
+
+```
+## ./helix-admin.sh --zkSvr <zk_address> --listInstanceInfo <clusterName> <InstanceName>    
+./helix-admin.sh --zkSvr localhost:2199 --listInstanceInfo MYCLUSTER localhost_12913
+
+InstanceConfig: {
+  "id" : "localhost_12913",
+  "mapFields" : {
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "HELIX_ENABLED" : "true",
+    "HELIX_HOST" : "localhost",
+    "HELIX_PORT" : "12913"
+  }
+}
+```
+
+    
+##### Query info of a resource
+
+```
+## helix-admin.sh --zkSvr <zk_address> --listResourceInfo <clusterName> <resourceName>
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
+
+IdealState for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_1" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_4" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+    "myDB_0" : [ "localhost_12914", "localhost_12913", "localhost_12915" ],
+    "myDB_1" : [ "localhost_12915", "localhost_12913", "localhost_12914" ],
+    "myDB_2" : [ "localhost_12913", "localhost_12915", "localhost_12914" ],
+    "myDB_3" : [ "localhost_12915", "localhost_12913", "localhost_12914" ],
+    "myDB_4" : [ "localhost_12913", "localhost_12914", "localhost_12915" ],
+    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
+  },
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "6",
+    "REPLICAS" : "3",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+  }
+}
+
+ExternalView for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_1" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "MASTER"
+    },
+    "myDB_4" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12914" : "SLAVE",
+      "localhost_12915" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "BUCKET_SIZE" : "0"
+  }
+}
+```
+
+Now, let\'s look at one of the partitions:
+
+    ## helix-admin.sh --zkSvr <zk_address> --listPartitionInfo <clusterName> <resource> <partition> 
+    ./helix-admin.sh --zkSvr localhost:2199 --listPartitionInfo MYCLUSTER myDB myDB_0
+
+#### Expand the Cluster
+
+Next, we\'ll show how Helix does the work that you\'d otherwise have to build into your system.  When you add capacity to your cluster, you want the work to be evenly distributed.  In this example, we started with 3 nodes and 6 partitions.  The partitions were evenly balanced, 2 masters and 4 slaves per node. Let\'s add 3 more nodes: localhost:12916, localhost:12917, localhost:12918
+
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12916
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12917
+    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12918
+
+And start up these instances:
+
+    # start up each instance.  These are mock implementations that are actively managed by Helix
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12916 --stateModelType MasterSlave 2>&1 > /tmp/participant_12916.log
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12917 --stateModelType MasterSlave 2>&1 > /tmp/participant_12917.log
+    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12918 --stateModelType MasterSlave 2>&1 > /tmp/participant_12918.log
+
+
+And now, let Helix do the work for you.  To shift the work, simply rebalance.  After the rebalance, each node will have one master and two slaves.
+
+    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
+
+#### View the cluster
+
+OK, let\'s see how it looks:
+
+
+```
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
+
+IdealState for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+    "myDB_0" : [ "localhost_12917", "localhost_12913", "localhost_12914" ],
+    "myDB_1" : [ "localhost_12918", "localhost_12917", "localhost_12916" ],
+    "myDB_2" : [ "localhost_12913", "localhost_12917", "localhost_12918" ],
+    "myDB_3" : [ "localhost_12915", "localhost_12917", "localhost_12918" ],
+    "myDB_4" : [ "localhost_12916", "localhost_12917", "localhost_12918" ],
+    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
+  },
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "6",
+    "REPLICAS" : "3",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+  }
+}
+
+ExternalView for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "BUCKET_SIZE" : "0"
+  }
+}
+```
+
+Mission accomplished.  The partitions are nicely balanced.
+
+#### How about Failover?
+
+Building a fault-tolerant system isn\'t trivial, but with Helix, it\'s easy.  Helix detects a failed instance and triggers mastership transfer automatically.
+
+First, let\'s fail an instance.  In this example, we\'ll kill localhost:12918 to simulate a failure.
+
+We lost localhost:12918, so myDB_1 lost its MASTER.  Helix can fix that: it will transfer mastership to a healthy node that is currently a SLAVE, say localhost:12917.  Helix balances the load as best it can, given there are 6 partitions on 5 nodes.  Let\'s see:
+
+
+```
+./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
+
+IdealState for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE",
+      "localhost_12918" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+    "myDB_0" : [ "localhost_12917", "localhost_12913", "localhost_12914" ],
+    "myDB_1" : [ "localhost_12918", "localhost_12917", "localhost_12916" ],
+    "myDB_2" : [ "localhost_12913", "localhost_12918", "localhost_12917" ],
+    "myDB_3" : [ "localhost_12915", "localhost_12918", "localhost_12917" ],
+    "myDB_4" : [ "localhost_12916", "localhost_12917", "localhost_12918" ],
+    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
+  },
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "6",
+    "REPLICAS" : "3",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+  }
+}
+
+ExternalView for myDB:
+{
+  "id" : "myDB",
+  "mapFields" : {
+    "myDB_0" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_1" : {
+      "localhost_12916" : "SLAVE",
+      "localhost_12917" : "MASTER"
+    },
+    "myDB_2" : {
+      "localhost_12913" : "MASTER",
+      "localhost_12917" : "SLAVE"
+    },
+    "myDB_3" : {
+      "localhost_12915" : "MASTER",
+      "localhost_12917" : "SLAVE"
+    },
+    "myDB_4" : {
+      "localhost_12916" : "MASTER",
+      "localhost_12917" : "SLAVE"
+    },
+    "myDB_5" : {
+      "localhost_12913" : "SLAVE",
+      "localhost_12914" : "MASTER",
+      "localhost_12915" : "SLAVE"
+    }
+  },
+  "listFields" : {
+  },
+  "simpleFields" : {
+    "BUCKET_SIZE" : "0"
+  }
+}
+```
+
+As we\'ve seen in this Quickstart, Helix takes care of partitioning, load balancing, elasticity, failure detection and recovery.
+
+##### ZooInspector
+
+You can view all of the underlying data by going directly to Zookeeper.  Use the ZooInspector tool that comes with Zookeeper to browse the data. It is a Java applet (make sure you have X Windows).
+
+To start ZooInspector, run the following command from <zk_install_directory>/contrib/ZooInspector:
+      
+    java -cp zookeeper-3.3.3-ZooInspector.jar:lib/jtoaster-1.0.4.jar:../../lib/log4j-1.2.15.jar:../../zookeeper-3.3.3.jar org.apache.zookeeper.inspector.ZooInspector
+
+#### Next
+
+Now that you understand the idea of Helix, read the [tutorial](./tutorial.html) to learn how to choose the right state model and constraints for your system, and how to implement it.  In many cases, the built-in features meet your requirements.  And best of all, Helix is a customizable framework, so you can plug in your own behavior, while retaining the automation provided by Helix.
+


[13/16] git commit: [maven-release-plugin] prepare for next development iteration

Posted by ka...@apache.org.
[maven-release-plugin] prepare for next development iteration


Project: http://git-wip-us.apache.org/repos/asf/incubator-helix/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-helix/commit/48a99a24
Tree: http://git-wip-us.apache.org/repos/asf/incubator-helix/tree/48a99a24
Diff: http://git-wip-us.apache.org/repos/asf/incubator-helix/diff/48a99a24

Branch: refs/heads/master
Commit: 48a99a247053ffc4bd01e7b9ba2001fcfa670c10
Parents: 2c29549
Author: zzhang <zz...@apache.org>
Authored: Thu Nov 14 15:22:54 2013 -0800
Committer: Kanak Biscuitwala <ka...@apache.org>
Committed: Fri Nov 15 14:40:15 2013 -0800

----------------------------------------------------------------------
 helix-admin-webapp/pom.xml                   | 2 +-
 helix-agent/pom.xml                          | 2 +-
 helix-core/pom.xml                           | 2 +-
 helix-examples/pom.xml                       | 2 +-
 pom.xml                                      | 4 ++--
 recipes/distributed-lock-manager/pom.xml     | 2 +-
 recipes/pom.xml                              | 2 +-
 recipes/rabbitmq-consumer-group/pom.xml      | 2 +-
 recipes/rsync-replicated-file-system/pom.xml | 2 +-
 recipes/service-discovery/pom.xml            | 2 +-
 recipes/task-execution/pom.xml               | 4 ++--
 recipes/user-defined-rebalancer/pom.xml      | 2 +-
 site-releases/0.6.1-incubating/pom.xml       | 2 +-
 site-releases/pom.xml                        | 2 +-
 14 files changed, 16 insertions(+), 16 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/helix-admin-webapp/pom.xml
----------------------------------------------------------------------
diff --git a/helix-admin-webapp/pom.xml b/helix-admin-webapp/pom.xml
index 4bd7bef..b4f38b5 100644
--- a/helix-admin-webapp/pom.xml
+++ b/helix-admin-webapp/pom.xml
@@ -21,7 +21,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/helix-agent/pom.xml
----------------------------------------------------------------------
diff --git a/helix-agent/pom.xml b/helix-agent/pom.xml
index e57f401..7d2a0ce 100644
--- a/helix-agent/pom.xml
+++ b/helix-agent/pom.xml
@@ -22,7 +22,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
   <artifactId>helix-agent</artifactId>
   <packaging>bundle</packaging>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/helix-core/pom.xml
----------------------------------------------------------------------
diff --git a/helix-core/pom.xml b/helix-core/pom.xml
index 07b42c7..6f2aeb9 100644
--- a/helix-core/pom.xml
+++ b/helix-core/pom.xml
@@ -21,7 +21,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/helix-examples/pom.xml
----------------------------------------------------------------------
diff --git a/helix-examples/pom.xml b/helix-examples/pom.xml
index f1ac3c6..c3b319c 100644
--- a/helix-examples/pom.xml
+++ b/helix-examples/pom.xml
@@ -21,7 +21,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 577ff10..010023c 100644
--- a/pom.xml
+++ b/pom.xml
@@ -29,7 +29,7 @@ under the License.
 
   <groupId>org.apache.helix</groupId>
   <artifactId>helix</artifactId>
-  <version>0.7.0-incubating</version>
+  <version>0.7.1-incubating-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Apache Helix</name>
 
@@ -276,7 +276,7 @@ under the License.
     <connection>scm:git:https://git-wip-us.apache.org/repos/asf/incubator-helix.git</connection>
     <developerConnection>scm:git:https://git-wip-us.apache.org/repos/asf/incubator-helix.git</developerConnection>
     <url>https://git-wip-us.apache.org/repos/asf?p=incubator-helix.git;a=summary</url>
-    <tag>helix-0.7.0-incubating</tag>
+    <tag>HEAD</tag>
   </scm>
   <issueManagement>
     <system>jira</system>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/recipes/distributed-lock-manager/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/distributed-lock-manager/pom.xml b/recipes/distributed-lock-manager/pom.xml
index e676ac7..f9f6385 100644
--- a/recipes/distributed-lock-manager/pom.xml
+++ b/recipes/distributed-lock-manager/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
 
   <artifactId>distributed-lock-manager</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/recipes/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/pom.xml b/recipes/pom.xml
index ac98a08..70dd2bd 100644
--- a/recipes/pom.xml
+++ b/recipes/pom.xml
@@ -22,7 +22,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
   <groupId>org.apache.helix.recipes</groupId>
   <artifactId>recipes</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/recipes/rabbitmq-consumer-group/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/rabbitmq-consumer-group/pom.xml b/recipes/rabbitmq-consumer-group/pom.xml
index ded3b67..a70947d 100644
--- a/recipes/rabbitmq-consumer-group/pom.xml
+++ b/recipes/rabbitmq-consumer-group/pom.xml
@@ -24,7 +24,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
 
   <artifactId>rabbitmq-consumer-group</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/recipes/rsync-replicated-file-system/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/rsync-replicated-file-system/pom.xml b/recipes/rsync-replicated-file-system/pom.xml
index 4926489..7d27f1f 100644
--- a/recipes/rsync-replicated-file-system/pom.xml
+++ b/recipes/rsync-replicated-file-system/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
 
   <artifactId>rsync-replicated-file-system</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/recipes/service-discovery/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/service-discovery/pom.xml b/recipes/service-discovery/pom.xml
index ccdfb0e..a876614 100644
--- a/recipes/service-discovery/pom.xml
+++ b/recipes/service-discovery/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
 
   <artifactId>service-discovery</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/recipes/task-execution/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/task-execution/pom.xml b/recipes/task-execution/pom.xml
index cace962..27464c9 100644
--- a/recipes/task-execution/pom.xml
+++ b/recipes/task-execution/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
 
   <artifactId>task-execution</artifactId>
@@ -39,7 +39,7 @@ under the License.
     <dependency>
       <groupId>org.apache.helix</groupId>
       <artifactId>helix-core</artifactId>
-      <version>0.7.0-incubating</version>
+      <version>0.7.1-incubating-SNAPSHOT</version>
     </dependency>
     <dependency>
       <groupId>log4j</groupId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/recipes/user-defined-rebalancer/pom.xml
----------------------------------------------------------------------
diff --git a/recipes/user-defined-rebalancer/pom.xml b/recipes/user-defined-rebalancer/pom.xml
index aeb6b82..8eba035 100644
--- a/recipes/user-defined-rebalancer/pom.xml
+++ b/recipes/user-defined-rebalancer/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix.recipes</groupId>
     <artifactId>recipes</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
 
   <artifactId>user-defined-rebalancer</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/site-releases/0.6.1-incubating/pom.xml
----------------------------------------------------------------------
diff --git a/site-releases/0.6.1-incubating/pom.xml b/site-releases/0.6.1-incubating/pom.xml
index d515cab..7efc019 100644
--- a/site-releases/0.6.1-incubating/pom.xml
+++ b/site-releases/0.6.1-incubating/pom.xml
@@ -23,7 +23,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>site-releases</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
 
   <artifactId>0.6.1-incubating-site</artifactId>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/48a99a24/site-releases/pom.xml
----------------------------------------------------------------------
diff --git a/site-releases/pom.xml b/site-releases/pom.xml
index fe3905b..bfdb1f4 100644
--- a/site-releases/pom.xml
+++ b/site-releases/pom.xml
@@ -21,7 +21,7 @@ under the License.
   <parent>
     <groupId>org.apache.helix</groupId>
     <artifactId>helix</artifactId>
-    <version>0.7.0-incubating</version>
+    <version>0.7.1-incubating-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <packaging>pom</packaging>


[03/16] [HELIX-270] Include documentation for previous version on the website

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/site/xdoc/download.xml.vm
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/site/xdoc/download.xml.vm b/site-releases/trunk/src/site/xdoc/download.xml.vm
new file mode 100644
index 0000000..41355db
--- /dev/null
+++ b/site-releases/trunk/src/site/xdoc/download.xml.vm
@@ -0,0 +1,193 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+-->
+#set( $releaseName = "0.7.0-incubating" )
+#set( $releaseDate = "10/31/2013" )
+<document xmlns="http://maven.apache.org/XDOC/2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+          xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+
+  <properties>
+    <title>Apache Incubator Helix Downloads</title>
+    <author email="dev@helix.incubator.apache.org">Apache Helix Documentation Team</author>
+  </properties>
+
+  <body>
+    <div class="toc_container">
+      <macro name="toc">
+        <param name="class" value="toc"/>
+      </macro>
+    </div>
+    
+    <section name="Introduction">
+      <p>Apache Helix artifacts are distributed in source and binary form under the terms of the
+        <a href="http://www.apache.org/licenses/LICENSE-2.0">Apache License, Version 2.0</a>.
+        See the <tt>LICENSE</tt> and <tt>NOTICE</tt> files included in each artifact for additional license
+        information.
+      </p>
+      <p>Use the links below to download a source distribution of Apache Helix.
+      It is good practice to <a href="#Verifying_Releases">verify the integrity</a> of the distribution files.</p>
+    </section>
+
+    <section name="Release">
+      <p>Release date: ${releaseDate} </p>
+      <p><a href="releasenotes/release-${releaseName}.html">${releaseName} Release notes</a></p>
+      <a name="mirror"/>
+      <subsection name="Mirror">
+
+        <p>
+          [if-any logo]
+          <a href="[link]">
+            <img align="right" src="[logo]" border="0"
+                 alt="logo"/>
+          </a>
+          [end]
+          The currently selected mirror is
+          <b>[preferred]</b>.
+          If you encounter a problem with this mirror,
+          please select another mirror.
+          If all mirrors are failing, there are
+          <i>backup</i>
+          mirrors
+          (at the end of the mirrors list) that should be available.
+        </p>
+
+        <form action="[location]" method="get" id="SelectMirror" class="form-inline">
+          Other mirrors:
+          <select name="Preferred" class="input-xlarge">
+            [if-any http]
+            [for http]
+            <option value="[http]">[http]</option>
+            [end]
+            [end]
+            [if-any ftp]
+            [for ftp]
+            <option value="[ftp]">[ftp]</option>
+            [end]
+            [end]
+            [if-any backup]
+            [for backup]
+            <option value="[backup]">[backup] (backup)</option>
+            [end]
+            [end]
+          </select>
+          <input type="submit" value="Change" class="btn"/>
+        </form>
+
+        <p>
+          You may also consult the
+          <a href="http://www.apache.org/mirrors/">complete list of mirrors.</a>
+        </p>
+
+      </subsection>
+      <subsection name="${releaseName} Sources">
+        <table>
+          <thead>
+            <tr>
+              <th>Artifact</th>
+              <th>Signatures</th>
+            </tr>
+          </thead>
+          <tbody>
+            <tr>
+              <td>
+                <a href="[preferred]incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip">helix-${releaseName}-src.zip</a>
+              </td>
+              <td>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip.asc">asc</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip.md5">md5</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/src/helix-${releaseName}-src.zip.sha1">sha1</a>
+              </td>
+            </tr>
+          </tbody>
+        </table>
+      </subsection>
+      <subsection name="${releaseName} Binaries">
+        <table>
+          <thead>
+            <tr>
+              <th>Artifact</th>
+              <th>Signatures</th>
+            </tr>
+          </thead>
+          <tbody>
+            <tr>
+              <td>
+                <a href="[preferred]incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar">helix-core-${releaseName}-pkg.tar</a>
+              </td>
+              <td>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar.asc">asc</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar.md5">md5</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-core-${releaseName}-pkg.tar.sha1">sha1</a>
+              </td>
+            </tr>
+            <tr>
+              <td>
+                <a href="[preferred]incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar">helix-admin-webapp-${releaseName}-pkg.tar</a>
+              </td>
+              <td>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar.asc">asc</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar.md5">md5</a>
+                <a href="http://www.apache.org/dist/incubator/helix/${releaseName}/binaries/helix-admin-webapp-${releaseName}-pkg.tar.sha1">sha1</a>
+              </td>
+            </tr>
+          </tbody>
+        </table>
+      </subsection>
+    </section>
+
+<!--    <section name="Older Releases">
+    </section>-->
+
+    <section name="Verifying Releases">
+      <p>We strongly recommend you verify the integrity of the downloaded files with both PGP and MD5.</p>
+      
+      <p>The PGP signatures can be verified using <a href="http://www.pgpi.org/">PGP</a> or 
+      <a href="http://www.gnupg.org/">GPG</a>. 
+      First download the <a href="http://www.apache.org/dist/incubator/helix/KEYS">KEYS</a> as well as the
+      <tt>*.asc</tt> signature file for the particular distribution. Make sure you get these files from the main 
+      distribution directory, rather than from a mirror. Then verify the signatures using one of the following sets of
+      commands:
+
+        <source>$ pgp -ka KEYS
+$ pgp helix-*.zip.asc</source>
+      
+        <source>$ gpg --import KEYS
+$ gpg --verify helix-*.zip.asc</source>
+       </p>
+    <p>Alternatively, you can verify the MD5 signature on the files. A Unix/Linux program called  
+      <code>md5</code> or 
+      <code>md5sum</code> is included in most distributions.  It is also available as part of
+      <a href="http://www.gnu.org/software/textutils/textutils.html">GNU Textutils</a>.
+      Windows users can get binary md5 programs from these (and likely other) places:
+      <ul>
+        <li>
+          <a href="http://www.md5summer.org/">http://www.md5summer.org/</a>
+        </li>
+        <li>
+          <a href="http://www.fourmilab.ch/md5/">http://www.fourmilab.ch/md5/</a>
+        </li>
+        <li>
+          <a href="http://www.pc-tools.net/win32/md5sums/">http://www.pc-tools.net/win32/md5sums/</a>
+        </li>
+      </ul>
+    </p>
+    </section>
+  </body>
+</document>
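
For those who want to script the checksum comparison described in the Verifying Releases section above, a few lines of plain Java are enough. This is an illustrative sketch only (the class name and argument handling are invented); any md5/md5sum binary works just as well:

```
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Md5Check {
  public static void main(String[] args) throws Exception {
    MessageDigest md = MessageDigest.getInstance("MD5");
    // args[0] is the downloaded artifact, e.g. helix-0.7.0-incubating-src.zip
    try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
      byte[] buf = new byte[8192];
      for (int n; (n = in.read(buf)) != -1; ) {
        md.update(buf, 0, n);
      }
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : md.digest()) {
      hex.append(String.format("%02x", b));
    }
    // Compare this value with the contents of the corresponding .md5 file
    System.out.println(hex);
  }
}
```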

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/trunk/src/test/conf/testng.xml
----------------------------------------------------------------------
diff --git a/site-releases/trunk/src/test/conf/testng.xml b/site-releases/trunk/src/test/conf/testng.xml
new file mode 100644
index 0000000..58f0803
--- /dev/null
+++ b/site-releases/trunk/src/test/conf/testng.xml
@@ -0,0 +1,27 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
+<suite name="Suite" parallel="none">
+  <test name="Test" preserve-order="false">
+    <packages>
+      <package name="org.apache.helix"/>
+    </packages>
+  </test>
+</suite>

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/apt/releasenotes/release-0.6.2-incubating.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/releasenotes/release-0.6.2-incubating.apt b/src/site/apt/releasenotes/release-0.6.2-incubating.apt
new file mode 100644
index 0000000..51afc62
--- /dev/null
+++ b/src/site/apt/releasenotes/release-0.6.2-incubating.apt
@@ -0,0 +1,181 @@
+ -----
+ Release Notes for Apache Helix 0.6.2-incubating
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one                      
+~~ or more contributor license agreements.  See the NOTICE file                    
+~~ distributed with this work for additional information                           
+~~ regarding copyright ownership.  The ASF licenses this file                      
+~~ to you under the Apache License, Version 2.0 (the                               
+~~ "License"); you may not use this file except in compliance                      
+~~ with the License.  You may obtain a copy of the License at                      
+~~                                                                                 
+~~   http://www.apache.org/licenses/LICENSE-2.0                                    
+~~                                                                                 
+~~ Unless required by applicable law or agreed to in writing,                      
+~~ software distributed under the License is distributed on an                     
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY                          
+~~ KIND, either express or implied.  See the License for the                       
+~~ specific language governing permissions and limitations                         
+~~ under the License.
+
+~~ NOTE: For help with the syntax of this file, see:
+~~ http://maven.apache.org/guides/mini/guide-apt-format.html
+
+Release Notes for Apache Helix 0.6.2-incubating
+
+  The Apache Helix team would like to announce the release of Apache Helix 0.6.2-incubating.
+
+  This is the third release under the Apache umbrella.
+
+  Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes. Helix provides the following features:
+
+  * Automatic assignment of resource/partition to nodes
+
+  * Node failure detection and recovery
+
+  * Dynamic addition of Resources
+
+  * Dynamic addition of nodes to the cluster
+
+  * Pluggable distributed state machine to manage the state of a resource via state transitions
+
+  * Automatic load balancing and throttling of transitions
+
+  []
+
+* Changes
+
+** Sub-task
+
+  * [HELIX-28] - ZkHelixManager.handleNewSession() can happen when a liveinstance already exists
+
+  * [HELIX-85] - Remove mock service module
+
+  * [HELIX-106] - Remove all string constants in the code
+
+  * [HELIX-107] - Add support to set custom objects into ZNRecord
+
+  * [HELIX-124] - race condition in ZkHelixManager.handleNewSession()
+
+  * [HELIX-165] - Add dependency for Guava libraries
+
+  * [HELIX-169] - Take care of consecutive handleNewSession() and session expiry during handleNewSession() 
+
+  * [HELIX-170] - HelixManager#isLeader() should compare both instanceName and sessionId 
+
+  * [HELIX-195] - Race condition between FINALIZE callbacks and Zk Callbacks
+
+  * [HELIX-207] - Add javadocs to classes and public methods in the top-level package
+
+  * [HELIX-208] - Add javadocs to classes and public methods in the model package
+
+  * [HELIX-277] - FULL_AUTO rebalancer should not prefer nodes that are just coming up
+
+** Bug
+
+  * [HELIX-7] - Tune test parameters to fix random test failures
+
+  * [HELIX-87] - Bad repository links in website
+
+  * [HELIX-117] - backward incompatibility problem in accessing zkPath via HelixWebAdmin
+
+  * [HELIX-118] - PropertyStore -> HelixPropertyStore backwards incompatible location
+
+  * [HELIX-119] - HelixManager serializer no longer needs ByteArraySerializer for /PROPERTYSTORE
+
+  * [HELIX-129] - ZKDumper should use byte[] instead of String to read/write file/zk
+
+  * [HELIX-131] - Connection timeout not set while connecting to zookeeper via zkHelixAdmin
+
+  * [HELIX-133] - Cluster-admin command parsing does not work with removeConfig
+
+  * [HELIX-140] - In ClusterSetup.java, removeConfig is wrongly wired to getConfig
+
+  * [HELIX-141] - Autorebalance does not work reliably and fails when replica>1
+
+  * [HELIX-144] - Need to validate StateModelDefinition when adding new StateModelDefinition to Cluster
+
+  * [HELIX-147] - Fix typo in Idealstate property max_partitions_per_instance
+
+  * [HELIX-148] - Current preferred placement for auto rebalance is suboptimal for n > p
+
+  * [HELIX-150] - Auto rebalance might not evenly distribute states across nodes
+
+  * [HELIX-151] - Auto rebalance doesn't assign some replicas when other nodes could make room
+
+  * [HELIX-153] - Auto rebalance tester uses the returned map fields, but production uses only list fields
+
+  * [HELIX-155] - PropertyKey.instances() is wrongly wired to CONFIG type instead of INSTANCES type
+
+  * [HELIX-197] - state model leak
+
+  * [HELIX-199] - ZNRecord should not publish rawPayload unless it exists
+
+  * [HELIX-216] - Allow HelixAdmin addResource to accept the old rebalancing types
+
+  * [HELIX-221] - Can't find default error->dropped transition method using name convention
+
+  * [HELIX-257] - Upgrade Restlet to 2.1.4 - due security flaw
+
+  * [HELIX-258] - Upgrade Apache Camel due to CVE-2013-4330
+
+  * [HELIX-264] - fix zkclient#close() bug
+
+  * [HELIX-279] - Apply gc handling fixes to main ZKHelixManager class
+
+  * [HELIX-280] - Full auto rebalancer should check for resource tag first
+
+  * [HELIX-288] - helix-core uses an old version of guava
+
+  * [HELIX-299] - Some files in 0.6.2 are missing license headers
+
+** Improvement
+
+  * [HELIX-20] - AUTO-REBALANCE helix controller should re-assign disabled partitions on a node to other available nodes
+
+  * [HELIX-70] - Make Helix OSGi ready
+
+  * [HELIX-149] - Allow clients to pass in preferred placement strategies
+
+  * [HELIX-198] - Unify helix code style
+
+  * [HELIX-218] - Add a reviewboard submission script
+
+  * [HELIX-284] - Support participant auto join in YAML cluster setup
+
+** New Feature
+
+  * [HELIX-215] - Allow setting up the cluster with a YAML file
+
+** Task
+
+  * [HELIX-95] - Tracker for 0.6.2 release
+
+  * [HELIX-154] - Auto rebalance algorithm should not depend on state
+
+  * [HELIX-166] - Rename modes to auto, semi-auto, and custom
+
+  * [HELIX-173] - Move rebalancing strategies to separate classes that implement the Rebalancer interface
+
+  * [HELIX-188] - Add admin command line / REST API documentations
+
+  * [HELIX-194] - ZNRecord has too many constructors
+
+  * [HELIX-205] - Have user-defined rebalancers use RebalanceMode.USER_DEFINED
+
+  * [HELIX-210] - Add support to set data with expect version in BaseDataAccessor
+
+  * [HELIX-217] - Remove mock service module
+
+  * [HELIX-273] - Rebalancer interface should remain unchanged in 0.6.2
+
+  * [HELIX-274] - Verify FULL_AUTO tagged node behavior
+
+  * [HELIX-285] - add integration test utils
+
+  []
+
+  Cheers,
+  --
+  The Apache Helix Team

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/apt/releasenotes/release-0.7.0-incubating.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/releasenotes/release-0.7.0-incubating.apt b/src/site/apt/releasenotes/release-0.7.0-incubating.apt
new file mode 100644
index 0000000..7661df0
--- /dev/null
+++ b/src/site/apt/releasenotes/release-0.7.0-incubating.apt
@@ -0,0 +1,174 @@
+ -----
+ Release Notes for Apache Helix 0.7.0-incubating
+ -----
+
+~~ Licensed to the Apache Software Foundation (ASF) under one                      
+~~ or more contributor license agreements.  See the NOTICE file                    
+~~ distributed with this work for additional information                           
+~~ regarding copyright ownership.  The ASF licenses this file                      
+~~ to you under the Apache License, Version 2.0 (the                               
+~~ "License"); you may not use this file except in compliance                      
+~~ with the License.  You may obtain a copy of the License at                      
+~~                                                                                 
+~~   http://www.apache.org/licenses/LICENSE-2.0                                    
+~~                                                                                 
+~~ Unless required by applicable law or agreed to in writing,                      
+~~ software distributed under the License is distributed on an                     
+~~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY                          
+~~ KIND, either express or implied.  See the License for the                       
+~~ specific language governing permissions and limitations                         
+~~ under the License.
+
+~~ NOTE: For help with the syntax of this file, see:
+~~ http://maven.apache.org/guides/mini/guide-apt-format.html
+
+Release Notes for Apache Helix 0.7.0-incubating
+
+  The Apache Helix team would like to announce the release of Apache Helix 0.7.0-incubating.
+
+  This is the fourth release and second major release under the Apache umbrella.
+
+  Helix is a generic cluster management framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes. Helix provides the following features:
+
+  * Automatic assignment of resource/partition to nodes
+
+  * Node failure detection and recovery
+
+  * Dynamic addition of Resources
+
+  * Dynamic addition of nodes to the cluster
+
+  * Pluggable distributed state machine to manage the state of a resource via state transitions
+
+  * Automatic load balancing and throttling of transitions
+
+  * Configurable, pluggable rebalancing
+
+  []
+
+* Changes
+
+** Sub-task
+
+    * [HELIX-18] - Unify cluster setup and helixadmin
+
+    * [HELIX-79] - consecutive GC may mess up helix session ids
+
+    * [HELIX-83] - Add typed classes to denote helix ids
+
+    * [HELIX-90] - Clean up APIs
+
+    * [HELIX-98] - clean up setting constraint api
+
+    * [HELIX-100] - Improve the helix config api
+
+    * [HELIX-102] - Add new wrapper classes for Participant, Controller, Spectator, Administrator
+
+    * [HELIX-104] - Add support to reuse zkclient
+
+    * [HELIX-123] - ZkHelixManager.isLeader() should check session id in addition to instance name
+
+    * [HELIX-139] - Need to double check the logic to prevent 2 controllers to control the same cluster
+
+    * [HELIX-168] - separate HelixManager implementation for participant, controller, and distributed controller
+
+    * [HELIX-176] - Need a list of tests that must pass to certify a release
+
+    * [HELIX-224] - Move helix examples to separate module
+
+    * [HELIX-233] - Ensure that website and wiki fully capture the updated changes in 0.7.0
+
+    * [HELIX-234] - Create concrete id classes for constructs, replacing strings
+
+    * [HELIX-235] - Create a hierarchical logical model for the cluster
+
+    * [HELIX-236] - Create a hierarchical cluster snapshot to replace ClusterDataCache
+
+    * [HELIX-237] - Create helix-internal config classes for the hierarchical model
+
+    * [HELIX-238] - Create accessors for the logical model
+
+    * [HELIX-239] - List use cases for the logical model
+
+    * [HELIX-240] - Write an example of the key use cases for the logical model
+
+    * [HELIX-241] - Write the controller pipeline with the logical model
+
+    * [HELIX-242] - Re-integrate the scheduler rebalancing into the new controller pipeline
+
+    * [HELIX-243] - Fix failing tests related to helix model overhaul
+
+    * [HELIX-244] - Redesign rebalancers using rebalancer-specific configs
+
+    * [HELIX-246] - Refactor scheduler task config to comply with new rebalancer config and fix related scheduler task tests
+
+    * [HELIX-248] - Resource logical model should be general enough to handle various resource types
+
+    * [HELIX-268] - Atomic API
+
+    * [HELIX-297] - Make 0.7.0 backward compatible for user-defined rebalancing
+
+
+** Bug
+
+    * [HELIX-40] - fix zkclient subscribe path leaking and zk callback-handler leaking in case of session expiry
+
+    * [HELIX-46] - Add REST/cli admin command for message selection constraints
+
+    * [HELIX-47] - when drop resource, remove resource-level config also
+
+    * [HELIX-48] - use resource instead of db in output messages
+
+    * [HELIX-50] - Ensure num replicas and preference list size in idealstate matches
+
+    * [HELIX-59] - controller not cleaning dead external view generated from old sessions
+
+    * [HELIX-136] - Write IdealState back to ZK when computed by custom Rebalancer
+
+    * [HELIX-200] - helix controller send ERROR->DROPPED transition infinitely
+
+    * [HELIX-214] - User-defined rebalancer should never use SEMI_AUTO code paths
+
+    * [HELIX-225] - fix helix-example package build error
+
+    * [HELIX-271] - ZkHelixAdmin#addResource() backward compatible problem
+
+    * [HELIX-292] - ZNRecordStreamingSerializer should not assume id comes first
+
+    * [HELIX-296] - HelixConnection in 0.7.0 does not remove LiveInstance znode
+
+    * [HELIX-300] - Some files in 0.7.0 are missing license headers
+
+    * [HELIX-302] - fix helix version compare bug
+
+** Improvement
+
+    * [HELIX-37] - Cleanup CallbackHandler
+
+    * [HELIX-202] - Ideal state should be a full mapping, not just a set of instance preferences
+
+** Task
+
+    * [HELIX-109] - Review Helix model package
+
+    * [HELIX-174] - Clean up ideal state calculators, move them to the controller rebalancer package
+
+    * [HELIX-212] - Rebalancer interface should have 1 function to compute the entire ideal state
+
+    * [HELIX-232] - Validation of 0.7.0
+
+    * [HELIX-290] - Ensure 0.7.0 can respond correctly to ideal state changes
+
+    * [HELIX-295] - Upgrade or remove xstream dependency
+
+    * [HELIX-301] - Update integration test utils for 0.7.0
+
+** Test
+
+    * [HELIX-286] - add a test for redefine state model definition
+
+  []
+
+  Cheers,
+  --
+  The Apache Helix Team

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/apt/releasing.apt
----------------------------------------------------------------------
diff --git a/src/site/apt/releasing.apt b/src/site/apt/releasing.apt
index 11d0cd9..9771ba6 100644
--- a/src/site/apt/releasing.apt
+++ b/src/site/apt/releasing.apt
@@ -52,12 +52,6 @@ Helix release process
     You should have a GPG agent running in the session you will run the maven release commands (preferred), and confirm it works by running "gpg -ab" (type some text and press Ctrl-D).
     If you do not have a GPG agent running, make sure that you have the "apache-release" profile set in your settings.xml as shown below.
 
-   Run the release
-
-+-------------
-mvn release:prepare release:perform -B
-+-------------
-
   GPG configuration in maven settings xml:
 
 +-------------
@@ -69,17 +63,28 @@ mvn release:prepare release:perform -B
 </profile>
 +-------------
 
- [[4]] go to https://repository.apache.org and close your staged repository. Note the repository url (format https://repository.apache.org/content/repositories/orgapachehelix-019/org/apache/helix/helix/0.6-incubating/)
+   Run the release
+
++-------------
+mvn release:prepare
+mvn release:perform
++-------------
+
+ [[4]] Go to https://repository.apache.org and close your staged repository. Log in, click on Staging Repositories, check your repository, and click Close. Note the repository URL (format https://repository.apache.org/content/repositories/orgapachehelix-019/org/apache/helix/helix/0.6-incubating/)
+
+ [[5]] Stage the release (stagingRepoUrl format https://repository.apache.org/content/repositories/orgapachehelix-019/org/apache/helix/helix/0.6-incubating/)
 
 +-------------
 svn co https://dist.apache.org/repos/dist/dev/incubator/helix helix-dev-release
 cd helix-dev-release
 sh ./release-script-svn.sh version stagingRepoUrl
-then svn add <new directory created with new version as name>
-then svn ci 
+svn add <new directory created with new version as name>
+gpg -k email@domain.com >> KEYS
+gpg --armor --export email@domain.com >> KEYS
+svn ci 
 +-------------
 
- [[5]] Validating the release
+ [[6]] Validating the release
 
 +-------------
   * Download sources, extract, build and run tests - mvn clean package
@@ -90,18 +95,38 @@ then svn ci
   * Check signatures of all the binaries using gpg <binary>
 +-------------
 
- [[6]] Call for a vote in the dev list and wait for 72 hrs. for the vote results. 3 binding votes are necessary for the release to be finalized. example
+ [[7]] Call for a vote on the dev list and wait 72 hours for the vote results. 3 binding votes are necessary for the release to be finalized. For example:
   After the vote has passed, move the files from dist dev to dist release: svn mv https://dist.apache.org/repos/dist/dev/incubator/helix/version to https://dist.apache.org/repos/dist/release/incubator/helix/
 
- [[7]] Prepare release note. Add a page in src/site/apt/releasenotes/ and change value of \<currentRelease> in parent pom.
++-------------
+Hi, I'd like to release Apache Helix [VERSION]-incubating.
+
+Release notes: http://helix.incubator.apache.org/releasenotes/release-[VERSION]-incubating.html
+
+Maven staged release repository: https://repository.apache.org/content/repositories/orgapachehelix-[NNN]/
+
+Distribution:
+* binaries: https://dist.apache.org/repos/dist/dev/incubator/helix/[VERSION]-incubating/binaries/
+* sources: https://dist.apache.org/repos/dist/dev/incubator/helix/[VERSION]-incubating/src/
+
+KEYS file available here: https://dist.apache.org/repos/dist/release/incubator/helix/KEYS
+
+Vote open for 72H
+
+[+1]
+[0]
+[-1] 
++-------------
+
+ [[8]] Prepare release note. Add a page in src/site/apt/releasenotes/ and change value of \<currentRelease> in parent pom.
 
 
- [[8]] Send out an announcement of the release to:
+ [[9]] Send out an announcement of the release to:
 
-  * users@helix.incubator.apache.org
+  * user@helix.incubator.apache.org
 
   * dev@helix.incubator.apache.org
 
- [[9]] Celebrate !
+ [[10]] Celebrate!
 
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/Concepts.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/Concepts.md b/src/site/markdown/Concepts.md
index fa5d0ba..5bf42ac 100644
--- a/src/site/markdown/Concepts.md
+++ b/src/site/markdown/Concepts.md
@@ -260,7 +260,7 @@ The above steps happen for every change in the system. Once the current state ma
 
 One of the things that makes Helix powerful is that IdealState can be changed dynamically. This means one can listen to cluster events like node failures and dynamically change the ideal state. Helix will then take care of triggering the respective transitions in the system.
 
-Helix comes with a few algorithms to automatically compute the IdealState based on the constraints. For example, if you have a resource of 3 partitions and 2 replicas, Helix can automatically compute the IdealState based on the nodes that are currently active. See the [tutorial](./tutorial_rebalance.html) to find out more about various execution modes of Helix like FULL_AUTO, SEMI_AUTO and CUSTOMIZED. 
+Helix comes with a few algorithms to automatically compute the IdealState based on the constraints. For example, if you have a resource of 3 partitions and 2 replicas, Helix can automatically compute the IdealState based on the nodes that are currently active. See the [tutorial](./site-releases/0.7.0-incubating-site/tutorial_rebalance.html) to find out more about various execution modes of Helix like FULL_AUTO, SEMI_AUTO and CUSTOMIZED. 
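
As a sketch of what such a dynamic IdealState change can look like through the admin API (the ZooKeeper address, cluster, resource, and instance names are reused from the tutorials purely for illustration):

```
import java.util.Arrays;

import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.IdealState;

public class IdealStateUpdateExample {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
    // Read the current ideal state, rewrite one partition's preference list,
    // and write it back; the controller reacts by issuing transitions.
    IdealState idealState = admin.getResourceIdealState("MYCLUSTER", "myDB");
    idealState.getRecord().setListField("myDB_0",
        Arrays.asList("localhost_12914", "localhost_12913"));
    admin.setResourceIdealState("MYCLUSTER", "myDB", idealState);
  }
}
```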
 
 
 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/Features.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/Features.md b/src/site/markdown/Features.md
deleted file mode 100644
index ba9d0e7..0000000
--- a/src/site/markdown/Features.md
+++ /dev/null
@@ -1,313 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Features</title>
-</head>
-
-Features
-----------------------------
-
-
-### CONFIGURING IDEALSTATE
-
-
-Read the Concepts page for the definition of Idealstate.
-
-The placement of partitions in a DDS is critical for the reliability and scalability of the system. 
-For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one algorithm that can guarantee this.
-Helix by default comes with a variant of consistent hashing based on the RUSH algorithm. 
-
-This means that, given the number of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
-
-* Each node has the same number of partitions, and replicas of the same partition do not reside on the same node.
-* When a node fails, the partitions will be equally distributed among the remaining nodes
-* When new nodes are added, the number of partitions moved will be minimized along with satisfying the above two criteria.
-
-
-Helix provides multiple ways to control the placement and state of a replica. 
-
-```
-
-            |AUTO REBALANCE|   AUTO     |   CUSTOM  |       
-            -----------------------------------------
-   LOCATION | HELIX        |  APP       |  APP      |
-            -----------------------------------------
-      STATE | HELIX        |  HELIX     |  APP      |
-            -----------------------------------------
-```
-
-#### HELIX EXECUTION MODE 
-
-
-Idealstate is defined as the state of the DDS when all nodes are up, running, and healthy. 
-Helix uses this as the target state of the system and computes the appropriate transitions needed in the system to bring it to a stable state. 
-
-Helix supports 3 different execution modes which allow applications to explicitly control the placement and state of replicas.
-
-##### AUTO_REBALANCE
-
-When the idealstate mode is set to AUTO_REBALANCE, Helix controls both the location and the state of each replica. This option is useful for applications where creating a replica is not expensive. For example:
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "IDEAL_STATE_MODE" : "AUTO_REBALANCE",
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave"
-  },
-  "listFields" : {
-    "MyResource_0" : [],
-    "MyResource_1" : [],
-    "MyResource_2" : []
-  },
-  "mapFields" : {
-  }
-}
-```
-
-If there are 3 nodes in the cluster, then Helix will internally compute the ideal state as 
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave"
-  },
-  "mapFields" : {
-    "MyResource_0" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE"
-    },
-    "MyResource_1" : {
-      "N2" : "MASTER",
-      "N3" : "SLAVE"
-    },
-    "MyResource_2" : {
-      "N3" : "MASTER",
-      "N1" : "SLAVE"
-    }
-  }
-}
-```
-
-Another typical example is evenly distributing a group of tasks among the currently alive processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node. 
-When one node fails Helix redistributes its 15 tasks to the remaining 3 nodes. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node. 
-
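To make the above concrete, here is a minimal Java sketch of creating a resource in this mode through the admin API; the ZooKeeper address and the cluster/resource names are assumptions for illustration:

```
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;

public class AutoRebalanceExample {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
    // AUTO_REBALANCE: Helix picks both the location and the state of every replica.
    admin.addResource("MYCLUSTER", "MyResource", 3, "MasterSlave", "AUTO_REBALANCE");
    // Compute the assignment with 2 replicas per partition.
    admin.rebalance("MYCLUSTER", "MyResource", 2);
  }
}
```
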
-##### AUTO
-
-When the idealstate mode is set to AUTO, Helix only controls the STATE of the replicas, whereas the location of each partition is controlled by the application. For example, the idealstate below indicates that 'MyResource_0' must be placed only on node1 and node2, but gives Helix control of assigning the STATE.
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "IDEAL_STATE_MODE" : "AUTO",
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave"
-  },
-  "listFields" : {
-    "MyResource_0" : ["node1", "node2"],
-    "MyResource_1" : ["node2", "node3"],
-    "MyResource_2" : ["node3", "node1"]
-  },
-  "mapFields" : {
-  }
-}
-```
-In this mode, when node1 fails, unlike in AUTO_REBALANCE mode, the partition is not moved from node1 to other nodes in the cluster. Instead, Helix will decide to change the state of MyResource_0 on node2 based on the system constraints. For example, if a system constraint specifies that there should be 1 master, and the master fails, then node2 will be made the new master. 
-
-##### CUSTOM
-
-Helix offers a third mode called CUSTOM, in which the application can completely control the placement and state of each replica. Applications have to implement an interface that Helix will invoke when the cluster state changes. 
-Within this callback, the application can recompute the idealstate. Helix will then issue appropriate transitions such that the Idealstate and Currentstate converge.
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "IDEAL_STATE_MODE" : "CUSTOM",
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave"
-  },
-  "mapFields" : {
-    "MyResource_0" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE"
-    },
-    "MyResource_1" : {
-      "N2" : "MASTER",
-      "N3" : "SLAVE"
-    },
-    "MyResource_2" : {
-      "N3" : "MASTER",
-      "N1" : "SLAVE"
-    }
-  }
-}
-```
-
-For example, the current state of the system might be 'MyResource_0' -> {N1:MASTER,N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE,N2:MASTER}. Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel since it might result in a transient state where both N1 and N2 are masters.
-Helix will first issue MASTER-->SLAVE to N1, and after it completes, it will issue SLAVE-->MASTER to N2. 
- 
-
-### State Machine Configuration
-
-Helix comes with 3 default state models that are most commonly used. It's possible to have multiple state models in a cluster. 
-Every resource that is added should have a reference to the state model. 
-
-* MASTER-SLAVE: Has 3 states: OFFLINE, SLAVE, MASTER. At most 1 master is allowed; the number of slaves is based on the replication factor, which can be specified while adding the resource.
-* ONLINE-OFFLINE: Has 2 states: OFFLINE and ONLINE. A very simple state model; most applications start off with it.
-* LEADER-STANDBY: 1 leader and many standbys. In general, the standbys are idle.
-
-Apart from providing the state machine configuration, one can specify constraints on states and transitions.
-
-For example, one can say:
-
-* MASTER: 1. The maximum number of replicas in the MASTER state at any time is 1.
-* OFFLINE-SLAVE: 5. The maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5.
-
-STATE PRIORITY
-Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 master and 2 slaves but only 1 node is active, Helix must promote it to master. This behavior is achieved by providing the state priority list as MASTER,SLAVE.
-
-STATE TRANSITION PRIORITY
-Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. 
-One can control this by overriding the priority order.
- 
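The states, transitions, and constraints described above can also be expressed programmatically when defining a state model. Below is a hedged sketch using StateModelDefinition.Builder; treat the exact builder calls as an approximation to be checked against the Helix release in use:

```
import org.apache.helix.model.StateModelDefinition;

public class StateModelDefExample {
  public static void main(String[] args) {
    StateModelDefinition.Builder builder = new StateModelDefinition.Builder("MasterSlave");
    // States in priority order: Helix satisfies higher-priority states first.
    builder.addState("MASTER", 1);
    builder.addState("SLAVE", 2);
    builder.addState("OFFLINE", 3);
    builder.initialState("OFFLINE");
    // Legal transitions between states.
    builder.addTransition("OFFLINE", "SLAVE");
    builder.addTransition("SLAVE", "MASTER");
    builder.addTransition("MASTER", "SLAVE");
    builder.addTransition("SLAVE", "OFFLINE");
    // Constraints: at most 1 MASTER; the SLAVE count tracks the replica count R.
    builder.upperBound("MASTER", 1);
    builder.dynamicUpperBound("SLAVE", "R");
    StateModelDefinition masterSlave = builder.build();
    System.out.println(masterSlave.getId());
  }
}
```
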
-### Config management
-
-Helix allows applications to store application specific properties. The configuration can have different scopes.
-
-* Cluster
-* Node specific
-* Resource specific
-* Partition specific
-
-Helix also provides notifications when any configs are changed. This allows applications to support dynamic configuration changes.
-
-See HelixManager.getConfigAccessor for more info
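
As a rough illustration of the ConfigAccessor mentioned above, here is a sketch of writing and reading a resource-scoped property. It assumes an already-connected HelixManager, the property name and value are made up, and the ConfigScope package locations should be checked against the release in use:

```
import org.apache.helix.ConfigAccessor;
import org.apache.helix.HelixManager;
import org.apache.helix.model.ConfigScope;
import org.apache.helix.model.builder.ConfigScopeBuilder;

public class ConfigExample {
  static void readWriteConfig(HelixManager manager) {
    ConfigAccessor configAccessor = manager.getConfigAccessor();
    // Scope the property to resource myDB in cluster MYCLUSTER.
    ConfigScope scope = new ConfigScopeBuilder()
        .forCluster("MYCLUSTER")
        .forResource("myDB")
        .build();
    configAccessor.set(scope, "flushIntervalMs", "30000");
    String flushIntervalMs = configAccessor.get(scope, "flushIntervalMs");
    System.out.println("flushIntervalMs = " + flushIntervalMs);
  }
}
```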
-
-### Intra cluster messaging api
-
-This is an interesting feature which is quite useful in practice. Oftentimes, nodes in a DDS require a mechanism to interact with each other. One such requirement is the process of bootstrapping a replica.
-
-Consider a search system use case where the index replica starts up and it does not have an index. One of the commonly used solutions is to get the index from a common location or to copy the index from another replica.
-Helix provides a messaging API that can be used to talk to other nodes in the system. The value Helix adds here is that the message recipient can be specified in terms of resource, 
-partition, and state, and Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of P1. 
-Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
-
-This is a very generic API that can also be used to schedule periodic tasks in the cluster, like data backups. 
-System admins can also perform ad hoc tasks, like an on-demand backup, or execute a system command (like rm -rf ;-)) across all nodes.
-
-```
-      ClusterMessagingService messagingService = manager.getMessagingService();
-      //CONSTRUCT THE MESSAGE
-      Message requestBackupUriRequest = new Message(
-          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
-      requestBackupUriRequest
-          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
-      requestBackupUriRequest.setMsgState(MessageState.NEW);
-      //SET THE RECIPIENT CRITERIA, All nodes that satisfy the criteria will receive the message
-      Criteria recipientCriteria = new Criteria();
-      recipientCriteria.setInstanceName("%");
-      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
-      recipientCriteria.setResource("MyDB");
-      recipientCriteria.setPartition("");
-      //Should be processed only by the process that is active at the time of sending the message. 
-      //This means if the recipient is restarted after message is sent, it will not be processed.
-      recipientCriteria.setSessionSpecific(true);
-      // wait for 30 seconds
-      int timeout = 30000;
-      //The handler that will be invoked when any recipient responds to the message.
-      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
-      //This will return only after all recipients respond or after timeout.
-      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
-          requestBackupUriRequest, responseHandler, timeout);
-```
-
-See HelixManager.getMessagingService for more info.
-
-
-### Application specific property storage
-
-There are several use cases where applications need support for distributed data structures. Helix uses Zookeeper to store the application data and hence provides notifications when the data changes. 
-One value-add Helix provides is the ability to cache the data and write through the cache, which is more efficient than reading from ZK every time.
-
-See HelixManager.getHelixPropertyStore
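
A short sketch of using the property store, again assuming an already-connected HelixManager; the path and fields are invented for illustration:

```
import org.apache.helix.AccessOption;
import org.apache.helix.HelixManager;
import org.apache.helix.ZNRecord;
import org.apache.helix.store.HelixPropertyStore;

public class PropertyStoreExample {
  static void readWriteAppData(HelixManager manager) {
    HelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
    ZNRecord settings = new ZNRecord("settings");
    settings.setSimpleField("batchSize", "100");
    // Write through the cache; the record is persisted to ZK.
    store.set("/myApp/settings", settings, AccessOption.PERSISTENT);
    ZNRecord readBack = store.get("/myApp/settings", null, AccessOption.PERSISTENT);
    System.out.println(readBack.getSimpleField("batchSize"));
  }
}
```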
-
-### Throttling
-
-Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some of the transitions may be lightweight, but some might involve moving data around, which is quite expensive.
-Helix allows applications to set thresholds on transitions. The threshold can be set at multiple scopes:
-
-* MessageType e.g. STATE_TRANSITION
-* TransitionType e.g. SLAVE-MASTER
-* Resource e.g. database
-* Node i.e. per-node max transitions in parallel
-
-See HelixManager.getHelixAdmin.addMessageConstraint() 
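
For illustration, a hedged sketch of setting a message constraint through HelixAdmin follows; the attribute names and builder calls approximate the constraint API and should be verified against the release in use:

```
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.ClusterConstraints.ConstraintType;
import org.apache.helix.model.ConstraintItem;
import org.apache.helix.model.builder.ConstraintItemBuilder;

public class ThrottlingExample {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
    // Allow at most 10 concurrent state transitions per instance (names illustrative).
    ConstraintItem item = new ConstraintItemBuilder()
        .addConstraintAttribute("MESSAGE_TYPE", "STATE_TRANSITION")
        .addConstraintAttribute("INSTANCE", ".*")
        .addConstraintAttribute("CONSTRAINT_VALUE", "10")
        .build();
    admin.setConstraint("MYCLUSTER", ConstraintType.MESSAGE_CONSTRAINT,
        "maxTransitionsPerInstance", item);
  }
}
```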
-
-### Health monitoring and alerting
-
-This is currently in development mode, not yet productionized.
-
-Helix provides the ability for each node in the system to report health metrics on a periodic basis. 
-Helix supports multiple ways to aggregate these metrics, like simple SUM, AVG, EXPONENTIAL DECAY, and WINDOW. Helix will only persist the aggregated value.
-Applications can define thresholds on the aggregate values according to their SLAs, and when an SLA is violated, Helix will fire an alert. 
-Currently Helix only fires an alert, but eventually we plan to use these metrics to either mark the node dead or load balance the partitions. 
-This feature will be valuable for distributed systems that support multi-tenancy and have huge variation in workload patterns. It can also be used to detect skewed partitions and rebalance the cluster.
-
-This feature is not yet stable, and we do not recommend using it in production.
-
-
-### Controller deployment modes
-
-Read the Architecture wiki for more details on the role of a controller. In simple terms, it controls the participants in the cluster by issuing transitions.
-
-Helix provides multiple options to deploy the controller.
-
-#### STANDALONE
-
-The controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since one controller can be a single point of failure, multiple controller processes are required for reliability.
-Even if multiple controllers are running, only one will be actively managing the cluster at any time, as decided by a leader election process. If the leader fails, another leader will resume managing the cluster.
-
-Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the Controller As a Service option.
-
-#### EMBEDDED
-
-If setting up a separate controller process is not viable, it is possible to embed the controller as a library in each of the participants. 
-
-#### CONTROLLER AS A SERVICE
-
-One of the cool features we added in Helix is the ability to use a set of controllers to manage a large number of clusters. 
-For example, if you have X clusters to be managed, instead of deploying X*3 controllers (3 per cluster for fault tolerance), one can deploy just 3 controllers. Each controller can manage X/3 clusters. 
-If any controller fails, the remaining two will each manage X/2 clusters. At LinkedIn, we always deploy controllers in this mode. 
-
-
-
-
-
-
-
- 

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/Publications.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/Publications.md b/src/site/markdown/Publications.md
new file mode 100644
index 0000000..e2f36b1
--- /dev/null
+++ b/src/site/markdown/Publications.md
@@ -0,0 +1,37 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Publications</title>
+</head>
+
+
+Publications
+-------------
+
+* Untangling cluster management using Helix at [SOCC Oct 2012](http://www.socc2012.org/home/program)
+    - [paper](https://915bbc94-a-62cb3a1a-s-sites.googlegroups.com/site/acm2012socc/helix_onecol.pdf)
+    - [presentation](http://www.slideshare.net/KishoreGopalakrishna/helix-socc-v10final)
+* Building distributed systems using Helix
+    - [presentation at RelateIQ](http://www.slideshare.net/KishoreGopalakrishna/helix-talk)
+    - [presentation at ApacheCon](http://www.slideshare.net/KishoreGopalakrishna/apache-con-buildingddsusinghelix)
+    - [presentation at VMWare](http://www.slideshare.net/KishoreGopalakrishna/apache-helix-presentation-at-vmware)
+* Data driven testing:
+    - [short talk at LSPE meetup](http://www.slideshare.net/KishoreGopalakrishna/data-driven-testing)
+    - [paper at DBTest 2013, ACM SIGMOD: published on Jun 24, 2013](http://dbtest2013.soe.ucsc.edu/Program.htm)

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/Quickstart.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/Quickstart.md b/src/site/markdown/Quickstart.md
deleted file mode 100644
index 4a5e83d..0000000
--- a/src/site/markdown/Quickstart.md
+++ /dev/null
@@ -1,626 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Quickstart</title>
-</head>
-
-Get Helix
----------
-
-First, let\'s get Helix: either build it or download it.
-
-### Build
-
-    git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-    cd incubator-helix
-    git checkout tags/helix-0.6.1-incubating
-    ./build
-    cd helix-core/target/helix-core-pkg/bin   # this folder contains all the scripts used in the following sections
-    chmod +x *
-
-### Download
-
-Download the 0.6.1-incubating release package [here](./download.html) 
-
-Overview
---------
-
-In this Quickstart, we\'ll set up a master-slave replicated, partitioned system.  Then we\'ll demonstrate how to add a node, rebalance the partitions, and show how Helix manages failover.
-
-
-Let\'s Do It
-------------
-
-Helix provides command line interfaces to set up the cluster and view the cluster state. The best way to understand how Helix views a cluster is to build a cluster.
-
-#### First, get to the tools directory
-
-If you built the code
-
-```
-cd incubator-helix/helix-core/target/helix-core-pkg/bin
-```
-
-If you downloaded the release package, extract it.
-
-
-Short Version
--------------
-You can observe the components working together in this demo, which does the following:
-
-* Create a cluster
-* Add 2 nodes (participants) to the cluster
-* Set up a resource with 6 partitions and 2 replicas: 1 Master, and 1 Slave per partition
-* Show the cluster state after Helix balances the partitions
-* Add a third node
-* Show the cluster state.  Note that the third node has taken mastership of 2 partitions.
-* Kill the third node (Helix takes care of failover)
-* Show the cluster state.  Note that the two surviving nodes take over mastership of the partitions from the failed node
-
-##### Run the demo
-
-```
-cd incubator-helix/helix-core/target/helix-core-pkg/bin
-./quickstart.sh
-```
-
-##### 2 nodes are set up and the partitions rebalanced
-
-The cluster state is as follows:
-
-```
-CLUSTER STATE: After starting 2 nodes
-	                     localhost_12000	localhost_12001	
-	       MyResource_0	M			S		
-	       MyResource_1	S			M		
-	       MyResource_2	M			S		
-	       MyResource_3	M			S		
-	       MyResource_4	S			M  
-	       MyResource_5	S			M  
-```
-
-Note there is one master and one slave per partition.
-
-##### A third node is added and the cluster rebalanced
-
-The cluster state changes to:
-
-```
-CLUSTER STATE: After adding a third node
-                 	       localhost_12000	    localhost_12001	localhost_12002	
-	       MyResource_0	    S			  M		      S		
-	       MyResource_1	    S			  S		      M	 
-	       MyResource_2	    M			  S	              S  
-	       MyResource_3	    S			  S                   M  
-	       MyResource_4	    M			  S	              S  
-	       MyResource_5	    S			  M                   S  
-```
-
-Note there is one master and _two_ slaves per partition.  This is expected because there are three nodes.
-
-##### Finally, a node is killed to simulate a failure
-
-Helix makes sure each partition has a master.  The cluster state changes to:
-
-```
-CLUSTER STATE: After the 3rd node stops/crashes
-                	       localhost_12000	  localhost_12001	localhost_12002	
-	       MyResource_0	    S			M		      -		
-	       MyResource_1	    S			M		      -	 
-	       MyResource_2	    M			S	              -  
-	       MyResource_3	    M			S                     -  
-	       MyResource_4	    M			S	              -  
-	       MyResource_5	    S			M                     -  
-```
-
-
-Long Version
-------------
-Now you can run the same steps by hand.  In the detailed version, we\'ll do the following:
-
-* Define a cluster
-* Add two nodes to the cluster
-* Add a 6-partition resource with 1 master and 2 slave replicas per partition
-* Verify that the cluster is healthy and inspect the Helix view
-* Expand the cluster: add a few nodes and rebalance the partitions
-* Failover: stop a node and verify the mastership transfer
-
-### Install and Start Zookeeper
-
-Zookeeper can be started in standalone mode or replicated mode.
-
-More info is available at 
-
-* http://zookeeper.apache.org/doc/r3.3.3/zookeeperStarted.html
-* http://zookeeper.apache.org/doc/trunk/zookeeperAdmin.html#sc_zkMulitServerSetup
-
-In this example, let\'s start zookeeper in local mode.
-
-##### start zookeeper locally on port 2199
-
-    ./start-standalone-zookeeper.sh 2199 &
-
-### Define the Cluster
-
-The helix-admin tool is used for cluster administration tasks. In the Quickstart, we\'ll use the command line interface. Helix supports a REST interface as well.
-
-zookeeper_address is of the format host:port, e.g. localhost:2199 for standalone, or host1:port,host2:port for multi-node.
-
-Next, we\'ll set up a cluster MYCLUSTER with these attributes:
-
-* 3 instances running on localhost at ports 12913,12914,12915 
-* One database named myDB with 6 partitions 
-* Each partition will have 3 replicas with 1 master, 2 slaves
-* zookeeper running locally at localhost:2199
-
-##### Create the cluster MYCLUSTER
-    ## helix-admin.sh --zkSvr <zk_address> --addCluster <clustername> 
-    ./helix-admin.sh --zkSvr localhost:2199 --addCluster MYCLUSTER 
-
-##### Add nodes to the cluster
-
-In this case we\'ll add three nodes: localhost:12913, localhost:12914, localhost:12915
-
-    ## helix-admin.sh --zkSvr <zk_address>  --addNode <clustername> <host:port>
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12913
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12914
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12915
-
-#### Define the resource and partitioning
-
-In this example, the resource is a database, partitioned 6 ways.  (In a production system, it\'s common to over-partition for better load balancing.  Helix has been used in production to manage hundreds of databases each with 10s or 100s of partitions running on 10s of physical nodes.)
-
-##### Create a database with 6 partitions using the MasterSlave state model. 
-
-Helix ensures there will be exactly one master for each partition.
-
-    ## helix-admin.sh --zkSvr <zk_address> --addResource <clustername> <resourceName> <numPartitions> <StateModelName>
-    ./helix-admin.sh --zkSvr localhost:2199 --addResource MYCLUSTER myDB 6 MasterSlave
-   
-##### Now we can let Helix assign partitions to nodes. 
-
-This command will distribute the partitions amongst all the nodes in the cluster. In this example, each partition has 3 replicas.
-
-    ## helix-admin.sh --zkSvr <zk_address> --rebalance <clustername> <resourceName> <replication factor>
-    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
-
-Now the cluster is defined in Zookeeper: the nodes (localhost:12913, localhost:12914, localhost:12915), the resource (myDB, with 6 partitions using the MasterSlave model), and the _ideal state_, calculated assuming a replication factor of 3. The same setup can also be done programmatically, as in the sketch below.
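
A minimal Java sketch of that programmatic setup, using the same ZooKeeper address and names as the commands above (illustrative, not part of the original quickstart):

```
import org.apache.helix.HelixAdmin;
import org.apache.helix.manager.zk.ZKHelixAdmin;
import org.apache.helix.model.InstanceConfig;

public class QuickstartSetup {
  public static void main(String[] args) {
    HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
    admin.addCluster("MYCLUSTER");
    // Register the three participants.
    for (int port = 12913; port <= 12915; port++) {
      InstanceConfig config = new InstanceConfig("localhost_" + port);
      config.setHostName("localhost");
      config.setPort(String.valueOf(port));
      admin.addInstance("MYCLUSTER", config);
    }
    // Add the partitioned resource and compute the ideal state for 3 replicas.
    admin.addResource("MYCLUSTER", "myDB", 6, "MasterSlave");
    admin.rebalance("MYCLUSTER", "myDB", 3);
  }
}
```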
-
-##### Start the Helix Controller
-
-Now that the cluster is defined in Zookeeper, the Helix controller can manage the cluster.
-
-    ## Start the cluster manager, which will manage MYCLUSTER
-    ./run-helix-controller.sh --zkSvr localhost:2199 --cluster MYCLUSTER 2>&1 > /tmp/controller.log &
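
The controller can equivalently be embedded in a Java process; a minimal sketch (the controller name is arbitrary):

```
import org.apache.helix.HelixManager;
import org.apache.helix.controller.HelixControllerMain;

public class ControllerLauncher {
  public static void main(String[] args) throws Exception {
    // Same effect as run-helix-controller.sh, started in-process.
    HelixManager controller = HelixControllerMain.startHelixController(
        "localhost:2199", "MYCLUSTER", "controller", HelixControllerMain.STANDALONE);
    // Keep the controller alive until the process is killed.
    Thread.currentThread().join();
  }
}
```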
-
-##### Start up the cluster to be managed
-
-We\'ve started up Zookeeper, defined the cluster, the resources, the partitioning, and started up the Helix controller.  Next, we\'ll start up the nodes of the system to be managed.  Each node is a Participant, which is an instance of the system component to be managed.  Helix assigns work to Participants, keeps track of their roles and health, and takes action when a node fails.
-
-    # start up each instance.  These are mock implementations that are actively managed by Helix
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12913 --stateModelType MasterSlave 2>&1 > /tmp/participant_12913.log 
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12914 --stateModelType MasterSlave 2>&1 > /tmp/participant_12914.log
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12915 --stateModelType MasterSlave 2>&1 > /tmp/participant_12915.log
-
-
-#### Inspect the Cluster
-
-Now, let\'s see the Helix view of our cluster.  We\'ll work our way down as follows:
-
-```
-Clusters -> MYCLUSTER -> instances -> instance detail
-                      -> resources -> resource detail
-                      -> partitions
-```
-
-A single Helix controller can manage multiple clusters, though so far, we\'ve only defined one cluster.  Let\'s see:
-
-```
-## List existing clusters
-./helix-admin.sh --zkSvr localhost:2199 --listClusters        
-
-Existing clusters:
-MYCLUSTER
-```
-                                       
-Now, let\'s see the Helix view of MYCLUSTER
-
-```
-## helix-admin.sh --zkSvr <zk_address> --listClusterInfo <clusterName> 
-./helix-admin.sh --zkSvr localhost:2199 --listClusterInfo MYCLUSTER
-
-Existing resources in cluster MYCLUSTER:
-myDB
-Instances in cluster MYCLUSTER:
-localhost_12915
-localhost_12914
-localhost_12913
-```
-
-
-Let\'s look at the details of an instance
-
-```
-## ./helix-admin.sh --zkSvr <zk_address> --listInstanceInfo <clusterName> <InstanceName>    
-./helix-admin.sh --zkSvr localhost:2199 --listInstanceInfo MYCLUSTER localhost_12913
-
-InstanceConfig: {
-  "id" : "localhost_12913",
-  "mapFields" : {
-  },
-  "listFields" : {
-  },
-  "simpleFields" : {
-    "HELIX_ENABLED" : "true",
-    "HELIX_HOST" : "localhost",
-    "HELIX_PORT" : "12913"
-  }
-}
-```
-
-    
-##### Query info of a resource
-
-```
-## helix-admin.sh --zkSvr <zk_address> --listResourceInfo <clusterName> <resourceName>
-./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
-
-IdealState for myDB:
-{
-  "id" : "myDB",
-  "mapFields" : {
-    "myDB_0" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "MASTER",
-      "localhost_12915" : "SLAVE"
-    },
-    "myDB_1" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "SLAVE",
-      "localhost_12915" : "MASTER"
-    },
-    "myDB_2" : {
-      "localhost_12913" : "MASTER",
-      "localhost_12914" : "SLAVE",
-      "localhost_12915" : "SLAVE"
-    },
-    "myDB_3" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "SLAVE",
-      "localhost_12915" : "MASTER"
-    },
-    "myDB_4" : {
-      "localhost_12913" : "MASTER",
-      "localhost_12914" : "SLAVE",
-      "localhost_12915" : "SLAVE"
-    },
-    "myDB_5" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "MASTER",
-      "localhost_12915" : "SLAVE"
-    }
-  },
-  "listFields" : {
-    "myDB_0" : [ "localhost_12914", "localhost_12913", "localhost_12915" ],
-    "myDB_1" : [ "localhost_12915", "localhost_12913", "localhost_12914" ],
-    "myDB_2" : [ "localhost_12913", "localhost_12915", "localhost_12914" ],
-    "myDB_3" : [ "localhost_12915", "localhost_12913", "localhost_12914" ],
-    "myDB_4" : [ "localhost_12913", "localhost_12914", "localhost_12915" ],
-    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
-  },
-  "simpleFields" : {
-    "REBALANCE_MODE" : "SEMI_AUTO",
-    "NUM_PARTITIONS" : "6",
-    "REPLICAS" : "3",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
-  }
-}
-
-ExternalView for myDB:
-{
-  "id" : "myDB",
-  "mapFields" : {
-    "myDB_0" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "MASTER",
-      "localhost_12915" : "SLAVE"
-    },
-    "myDB_1" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "SLAVE",
-      "localhost_12915" : "MASTER"
-    },
-    "myDB_2" : {
-      "localhost_12913" : "MASTER",
-      "localhost_12914" : "SLAVE",
-      "localhost_12915" : "SLAVE"
-    },
-    "myDB_3" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "SLAVE",
-      "localhost_12915" : "MASTER"
-    },
-    "myDB_4" : {
-      "localhost_12913" : "MASTER",
-      "localhost_12914" : "SLAVE",
-      "localhost_12915" : "SLAVE"
-    },
-    "myDB_5" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "MASTER",
-      "localhost_12915" : "SLAVE"
-    }
-  },
-  "listFields" : {
-  },
-  "simpleFields" : {
-    "BUCKET_SIZE" : "0"
-  }
-}
-```
-
-Now, let\'s look at one of the partitions:
-
-    ## helix-admin.sh --zkSvr <zk_address> --listPartitionInfo <clusterName> <resource> <partition> 
-    ./helix-admin.sh --zkSvr localhost:2199 --listPartitionInfo MYCLUSTER myDB myDB_0
-
-#### Expand the Cluster
-
-Next, we\'ll show how Helix does the work that you\'d otherwise have to build into your system.  When you add capacity to your cluster, you want the work to be evenly redistributed.  In this example, we started with 3 nodes and 6 partitions.  The partitions were evenly balanced: with a replication factor of 3, each node held 2 masters and 4 slaves.  Let\'s add 3 more nodes: localhost:12916, localhost:12917, localhost:12918
-
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12916
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12917
-    ./helix-admin.sh --zkSvr localhost:2199  --addNode MYCLUSTER localhost:12918
-
-And start up these instances:
-
-    # start up each instance.  These are mock implementations that are actively managed by Helix
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12916 --stateModelType MasterSlave 2>&1 > /tmp/participant_12916.log
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12917 --stateModelType MasterSlave 2>&1 > /tmp/participant_12917.log
-    ./start-helix-participant.sh --zkSvr localhost:2199 --cluster MYCLUSTER --host localhost --port 12918 --stateModelType MasterSlave 2>&1 > /tmp/participant_12918.log
-
-
-And now, let Helix do the work for you.  To shift the work, simply rebalance.  After the rebalance, each node will have one master and two slaves.
-
-    ./helix-admin.sh --zkSvr localhost:2199 --rebalance MYCLUSTER myDB 3
-
-#### View the cluster
-
-OK, let\'s see how it looks:
-
-
-```
-./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
-
-IdealState for myDB:
-{
-  "id" : "myDB",
-  "mapFields" : {
-    "myDB_0" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "SLAVE",
-      "localhost_12917" : "MASTER"
-    },
-    "myDB_1" : {
-      "localhost_12916" : "SLAVE",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "MASTER"
-    },
-    "myDB_2" : {
-      "localhost_12913" : "MASTER",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "SLAVE"
-    },
-    "myDB_3" : {
-      "localhost_12915" : "MASTER",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "SLAVE"
-    },
-    "myDB_4" : {
-      "localhost_12916" : "MASTER",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "SLAVE"
-    },
-    "myDB_5" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "MASTER",
-      "localhost_12915" : "SLAVE"
-    }
-  },
-  "listFields" : {
-    "myDB_0" : [ "localhost_12917", "localhost_12913", "localhost_12914" ],
-    "myDB_1" : [ "localhost_12918", "localhost_12917", "localhost_12916" ],
-    "myDB_2" : [ "localhost_12913", "localhost_12917", "localhost_12918" ],
-    "myDB_3" : [ "localhost_12915", "localhost_12917", "localhost_12918" ],
-    "myDB_4" : [ "localhost_12916", "localhost_12917", "localhost_12918" ],
-    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
-  },
-  "simpleFields" : {
-    "REBALANCE_MODE" : "SEMI_AUTO",
-    "NUM_PARTITIONS" : "6",
-    "REPLICAS" : "3",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
-  }
-}
-
-ExternalView for myDB:
-{
-  "id" : "myDB",
-  "mapFields" : {
-    "myDB_0" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "SLAVE",
-      "localhost_12917" : "MASTER"
-    },
-    "myDB_1" : {
-      "localhost_12916" : "SLAVE",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "MASTER"
-    },
-    "myDB_2" : {
-      "localhost_12913" : "MASTER",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "SLAVE"
-    },
-    "myDB_3" : {
-      "localhost_12915" : "MASTER",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "SLAVE"
-    },
-    "myDB_4" : {
-      "localhost_12916" : "MASTER",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "SLAVE"
-    },
-    "myDB_5" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "MASTER",
-      "localhost_12915" : "SLAVE"
-    }
-  },
-  "listFields" : {
-  },
-  "simpleFields" : {
-    "BUCKET_SIZE" : "0"
-  }
-}
-```
-
-Mission accomplished.  The partitions are nicely balanced.
-
-#### How about Failover?
-
-Building a fault-tolerant system isn\'t trivial, but with Helix, it\'s easy.  Helix detects a failed instance and triggers mastership transfer automatically.
-
-First, let\'s fail an instance.  In this example, we\'ll kill localhost:12918 to simulate a failure.
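-
-One simple way to do this (a sketch; it assumes pkill is available and that the port number appears on the participant process\'s command line):
-
-    # kill the participant that was started with --port 12918
-    pkill -f "port 12918"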
-
-We lost localhost:12918, so myDB_1 lost its MASTER.  Helix can fix that: it will transfer mastership to a healthy node that is currently a SLAVE, say localhost:12917.  Helix balances the load as best it can, given there are 6 partitions on 5 nodes.  Let\'s see:
-
-
-```
-./helix-admin.sh --zkSvr localhost:2199 --listResourceInfo MYCLUSTER myDB
-
-IdealState for myDB:
-{
-  "id" : "myDB",
-  "mapFields" : {
-    "myDB_0" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "SLAVE",
-      "localhost_12917" : "MASTER"
-    },
-    "myDB_1" : {
-      "localhost_12916" : "SLAVE",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "MASTER"
-    },
-    "myDB_2" : {
-      "localhost_12913" : "MASTER",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "SLAVE"
-    },
-    "myDB_3" : {
-      "localhost_12915" : "MASTER",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "SLAVE"
-    },
-    "myDB_4" : {
-      "localhost_12916" : "MASTER",
-      "localhost_12917" : "SLAVE",
-      "localhost_12918" : "SLAVE"
-    },
-    "myDB_5" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "MASTER",
-      "localhost_12915" : "SLAVE"
-    }
-  },
-  "listFields" : {
-    "myDB_0" : [ "localhost_12917", "localhost_12913", "localhost_12914" ],
-    "myDB_1" : [ "localhost_12918", "localhost_12917", "localhost_12916" ],
-    "myDB_2" : [ "localhost_12913", "localhost_12918", "localhost_12917" ],
-    "myDB_3" : [ "localhost_12915", "localhost_12918", "localhost_12917" ],
-    "myDB_4" : [ "localhost_12916", "localhost_12917", "localhost_12918" ],
-    "myDB_5" : [ "localhost_12914", "localhost_12915", "localhost_12913" ]
-  },
-  "simpleFields" : {
-    "REBALANCE_MODE" : "SEMI_AUTO",
-    "NUM_PARTITIONS" : "6",
-    "REPLICAS" : "3",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-    "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
-  }
-}
-
-ExternalView for myDB:
-{
-  "id" : "myDB",
-  "mapFields" : {
-    "myDB_0" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "SLAVE",
-      "localhost_12917" : "MASTER"
-    },
-    "myDB_1" : {
-      "localhost_12916" : "SLAVE",
-      "localhost_12917" : "MASTER"
-    },
-    "myDB_2" : {
-      "localhost_12913" : "MASTER",
-      "localhost_12917" : "SLAVE"
-    },
-    "myDB_3" : {
-      "localhost_12915" : "MASTER",
-      "localhost_12917" : "SLAVE"
-    },
-    "myDB_4" : {
-      "localhost_12916" : "MASTER",
-      "localhost_12917" : "SLAVE"
-    },
-    "myDB_5" : {
-      "localhost_12913" : "SLAVE",
-      "localhost_12914" : "MASTER",
-      "localhost_12915" : "SLAVE"
-    }
-  },
-  "listFields" : {
-  },
-  "simpleFields" : {
-    "BUCKET_SIZE" : "0"
-  }
-}
-```
-
-As we\'ve seen in this Quickstart, Helix takes care of partitioning, load balancing, elasticity, failure detection and recovery.
-
-##### ZooInspector
-
-You can view all of the underlying data by going directly to Zookeeper.  Use the ZooInspector tool that ships with Zookeeper to browse the data.  It is a Java GUI application, so make sure you have X windows available.
-
-To start ZooInspector, run the following command from <zk_install_directory>/contrib/ZooInspector:
-      
-    java -cp zookeeper-3.3.3-ZooInspector.jar:lib/jtoaster-1.0.4.jar:../../lib/log4j-1.2.15.jar:../../zookeeper-3.3.3.jar org.apache.zookeeper.inspector.ZooInspector
-
-#### Next
-
-Now that you understand the idea of Helix, read the [tutorial](./tutorial.html) to learn how to choose the right state model and constraints for your system, and how to implement it.  In many cases, the built-in features meet your requirements.  And best of all, Helix is a customizable framework, so you can plug in your own behavior, while retaining the automation provided by Helix.
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/Tutorial.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/Tutorial.md b/src/site/markdown/Tutorial.md
deleted file mode 100644
index 61221b7..0000000
--- a/src/site/markdown/Tutorial.md
+++ /dev/null
@@ -1,205 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial</title>
-</head>
-
-# Helix Tutorial
-
-In this tutorial, we will cover the roles of a Helix-managed cluster, and show the code you need to write to integrate with it.  In many cases, there is a simple default behavior that is often appropriate, but you can also customize the behavior.
-
-Convention: we first cover the _basic_ approach, which is the easiest to implement.  Then, we'll describe _advanced_ options, which give you more control over the system behavior, but require you to write more code.
-
-
-### Prerequisites
-
-1. Read [Concepts/Terminology](./Concepts.html) and [Architecture](./Architecture.html)
-2. Read the [Quickstart guide](./Quickstart.html) to learn how Helix models and manages a cluster
-3. Install Helix source.  See: [Quickstart](./Quickstart.html) for the steps.
-
-### Tutorial Outline
-
-1. [Participant](./tutorial_participant.html)
-2. [Spectator](./tutorial_spectator.html)
-3. [Controller](./tutorial_controller.html)
-4. [Rebalancing Algorithms](./tutorial_rebalance.html)
-5. [User-Defined Rebalancing](./tutorial_user_def_rebalancer.html)
-6. [State Machines](./tutorial_state.html)
-7. [Messaging](./tutorial_messaging.html)
-8. [Customized health check](./tutorial_health.html)
-9. [Throttling](./tutorial_throttling.html)
-10. [Application Property Store](./tutorial_propstore.html)
-11. [Admin Interface](./tutorial_admin.html)
-12. [YAML Cluster Setup](./tutorial_yaml.html)
-
-### Preliminaries
-
-First, we need to set up the system.  Let\'s walk through the steps in building a distributed system using Helix.
-
-### Start Zookeeper
-
-This starts Zookeeper in standalone mode. For production deployment, see [Apache Zookeeper](http://zookeeper.apache.org) for instructions.
-
-```
-    ./start-standalone-zookeeper.sh 2199 &
-```
-
-### Create a cluster
-
-Creating a cluster will define the cluster in appropriate znodes on zookeeper.   
-
-Using the java API:
-
-```
-    // Create setup tool instance
-    // Note: ZK_ADDRESS is the host:port of Zookeeper
-    String ZK_ADDRESS = "localhost:2199";
-    admin = new ZKHelixAdmin(ZK_ADDRESS);
-
-    String CLUSTER_NAME = "helix-demo";
-    //Create cluster namespace in zookeeper
-    admin.addCluster(CLUSTER_NAME);
-```
-
-OR
-
-Using the command-line interface:
-
-```
-    ./helix-admin.sh --zkSvr localhost:2199 --addCluster helix-demo 
-```
-
-
-### Configure the nodes of the cluster
-
-First we\'ll add the new nodes to the cluster, then configure them. Each node in the cluster must be uniquely identifiable.
-The most commonly used convention is hostname:port.
-
-```
-    String CLUSTER_NAME = "helix-demo";
-    int NUM_NODES = 2;
-    String hosts[] = new String[]{"localhost","localhost"};
-    String ports[] = new String[]{"7000", "7001"};
-    for (int i = 0; i < NUM_NODES; i++)
-    {
-      
-      InstanceConfig instanceConfig = new InstanceConfig(hosts[i] + "_" + ports[i]);
-      instanceConfig.setHostName(hosts[i]);
-      instanceConfig.setPort(ports[i]);
-      instanceConfig.setInstanceEnabled(true);
-
-      //Add additional system specific configuration if needed. These can be accessed during the node start up.
-      instanceConfig.getRecord().setSimpleField("key", "value");
-      admin.addInstance(CLUSTER_NAME, instanceConfig);
-      
-    }
-```
-
-### Configure the resource
-
-A _resource_ represents the actual task performed by the nodes. It can be a database, index, topic, queue or any other processing entity.
-A _resource_ can be divided into many sub-parts known as _partitions_.
-
-
-#### Define the _state model_ and _constraints_
-
-For scalability and fault tolerance, each partition can have one or more replicas. 
-The _state model_ allows one to declare the system behavior by first enumerating the various STATES, and the TRANSITIONS between them.
-A simple model is ONLINE-OFFLINE where ONLINE means the task is active and OFFLINE means it\'s not active.
-You can also specify how many replicas must be in each state; these are known as _constraints_.
-For example, in a search system, one might need more than one node serving the same index to handle the load.
-
-The allowed states: 
-
-* MASTER
-* SLAVE
-* OFFLINE
-
-The allowed transitions: 
-
-* OFFLINE to SLAVE
-* SLAVE to OFFLINE
-* SLAVE to MASTER
-* MASTER to SLAVE
-
-The constraints:
-
-* no more than 1 MASTER per partition
-* the rest of the replicas should be slaves
-
-The following snippet shows how to declare the _state model_ and _constraints_ for the MASTER-SLAVE model.
-
-```
-
-    StateModelDefinition.Builder builder = new StateModelDefinition.Builder(STATE_MODEL_NAME);
-
-    // Add states and their rank to indicate priority. A lower rank corresponds to a higher priority
-    builder.addState(MASTER, 1);
-    builder.addState(SLAVE, 2);
-    builder.addState(OFFLINE);
-
-    // Set the initial state when the node starts
-    builder.initialState(OFFLINE);
-
-    // Add transitions between the states.
-    builder.addTransition(OFFLINE, SLAVE);
-    builder.addTransition(SLAVE, OFFLINE);
-    builder.addTransition(SLAVE, MASTER);
-    builder.addTransition(MASTER, SLAVE);
-
-    // set constraints on states.
-
-    // static constraint: upper bound of 1 MASTER
-    builder.upperBound(MASTER, 1);
-
-    // dynamic constraint: R means it should be derived based on the replication factor for the cluster
-    // this allows a different replication factor for each resource without 
-    // having to define a new state model
-    //
-    builder.dynamicUpperBound(SLAVE, "R");
-
-    StateModelDefinition statemodelDefinition = builder.build();
-    admin.addStateModelDef(CLUSTER_NAME, STATE_MODEL_NAME, statemodelDefinition);
-```
-
-#### Assigning partitions to nodes
-
-The final goal of Helix is to ensure that the constraints on the state model are satisfied. 
-Helix does this by assigning a STATE to a partition (such as MASTER, SLAVE), and placing it on a particular node.
-
-There are 3 assignment modes in which Helix can operate:
-
-* FULL_AUTO: Helix decides the placement and state of a partition.
-* SEMI_AUTO: Application decides the placement but Helix decides the state of a partition.
-* CUSTOMIZED: Application controls the placement and state of a partition.
-
-For more info on the assignment modes, see [Rebalancing Algorithms](./tutorial_rebalance.html) section of the tutorial.
-
-```
-    String RESOURCE_NAME = "MyDB";
-    int NUM_PARTITIONS = 6;
-    String STATE_MODEL_NAME = "MasterSlave";
-    String MODE = "SEMI_AUTO";
-    int NUM_REPLICAS = 2;
-
-    admin.addResource(CLUSTER_NAME, RESOURCE_NAME, NUM_PARTITIONS, STATE_MODEL_NAME, MODE);
-    admin.rebalance(CLUSTER_NAME, RESOURCE_NAME, NUM_REPLICAS);
-```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/index.md b/src/site/markdown/index.md
index fd42592..a2a8b8c 100644
--- a/src/site/markdown/index.md
+++ b/src/site/markdown/index.md
@@ -21,51 +21,23 @@ under the License.
   <title>Home</title>
 </head>
 
-Navigating the Documentation
-----------------------------
+News
+----
 
-### Conceptual Understanding
+Apache Helix has two new releases:
 
-[Concepts / Terminology](./Concepts.html)
+* [0.7.0-incubating](./site-releases/0.7.0-incubating-site/index.html) - A release that includes high-level APIs to logically interact with Participants, Controllers, Resources, and other Helix constructs. This release should be considered alpha, but contains many new features, is backward-compatible, and is the basis for future development of Helix. [\[Release Notes\]](./releasenotes/release-0.7.0-incubating.html)
 
-[Architecture](./Architecture.html)
+* [0.6.2-incubating](./site-releases/0.6.2-incubating-site/index.html) - A bug and security fix release hardening the Helix platform. [\[Release Notes\]](./releasenotes/release-0.6.2-incubating.html)
 
-### Hands-on Helix
 
-[Quickstart](./Quickstart.html)
-
-[Tutorial](./Tutorial.html)
-
-[Javadocs](./apidocs/index.html)
-
-[IRC](./IRC.html)
-
-### Recipes
-
-[Distributed lock manager](./recipes/lock_manager.html)
-
-[Rabbit MQ consumer group](./recipes/rabbitmq_consumer_group.html)
-
-[Rsync replicated file store](./recipes/rsync_replicated_file_store.html)
-
-[Service discovery](./recipes/service_discovery.html)
-
-[Distributed Task DAG Execution](./recipes/task_dag_execution.html)
-
-[User-Defined Rebalancer Example](./recipes/user_def_rebalancer.html)
-
-### Download
-
-[Current Release](./download.html)
-
-
-What Is Helix
+What Is Helix?
 --------------
 Helix is a generic _cluster management_ framework used for the automatic management of partitioned, replicated and distributed resources hosted on a cluster of nodes.
 
 
-What Is Cluster Management
---------------------------
+What Is Cluster Management?
+---------------------------
 To understand Helix, first you need to understand _cluster management_.  A distributed system typically runs on multiple nodes for the following reasons:
 
 * scalability
@@ -82,8 +54,8 @@ Each node performs one or more of the primary function of the cluster, such as s
 While it is possible to integrate these functions into the distributed system, it complicates the code.  Helix has abstracted common cluster management tasks, enabling the system builder to model the desired behavior with a declarative state model, and let Helix manage the coordination.  The result is less new code to write, and a robust, highly operable system.
 
 
-Key Features of Helix
----------------------
+What does Helix provide?
+------------------------
 1. Automatic assignment of resources and partitions to nodes
 2. Node failure detection and recovery
 3. Dynamic addition of resources
@@ -93,8 +65,8 @@ Key Features of Helix
 7. Optional pluggable rebalancing for user-defined assignment of resources and partitions
 
 
-Why Helix
----------
+Why Helix?
+----------
 Modeling a distributed system as a state machine with constraints on states and transitions has the following benefits:
 
 * Separates cluster management from the core functionality of the system.
@@ -102,40 +74,32 @@ Modeling a distributed system as a state machine with constraints on states and
 * Increases simplicity: system components do not have to manage a global cluster.  This division of labor makes it easier to build, debug, and maintain your system.
 
 
+Download
+--------
+
+[0.7.0-incubating](./site-releases/0.7.0-incubating-site/download.html)
+
+[0.6.2-incubating](./site-releases/0.6.2-incubating-site/download.html)
+
 Build Instructions
 ------------------
 
 Requirements: JDK 1.6+, Maven 2.0.8+
 
 ```
-    git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
-    cd incubator-helix
-    git checkout tags/helix-0.6.1-incubating
-    mvn install package -DskipTests
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+git checkout tags/helix-0.7.0-incubating
+mvn install package -DskipTests
 ```
 
 Maven dependency
 
 ```
-    <dependency>
-      <groupId>org.apache.helix</groupId>
-      <artifactId>helix-core</artifactId>
-      <version>0.6.1-incubating</version>
-    </dependency>
+<dependency>
+  <groupId>org.apache.helix</groupId>
+  <artifactId>helix-core</artifactId>
+  <version>0.7.0-incubating</version>
+</dependency>
 ```
 
-[Download](./download.html) Helix artifacts from here.
-
-Publications
--------------
-
-* Untangling cluster management using Helix at [SOCC Oct 2012](http://www.socc2012.org/home/program)
-    - [paper](https://915bbc94-a-62cb3a1a-s-sites.googlegroups.com/site/acm2012socc/helix_onecol.pdf)
-    - [presentation](http://www.slideshare.net/KishoreGopalakrishna/helix-socc-v10final)
-* Building distributed systems using Helix Apache Con Feb 2013
-    - [presentation at ApacheCon](http://www.slideshare.net/KishoreGopalakrishna/apache-con-buildingddsusinghelix)
-    - [presentation at VMWare](http://www.slideshare.net/KishoreGopalakrishna/apache-helix-presentation-at-vmware)
-* Data driven testing:
-    - [short talk at LSPE meetup](http://www.slideshare.net/KishoreGopalakrishna/data-driven-testing)
-    - [paper DBTest 2013 acm SIGMOD:will be published on Jun 24, 2013](http://dbtest2013.soe.ucsc.edu/Program.htm)
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/involved/building.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/involved/building.md b/src/site/markdown/involved/building.md
index 06f0589..ea8c081 100644
--- a/src/site/markdown/involved/building.md
+++ b/src/site/markdown/involved/building.md
@@ -24,7 +24,7 @@ Building Apache Helix
 First you need to install Apache Maven.
 
 To install jars locally:
-mvn clean install (-DskipTests if you don't want to run tests).
 
-
-   
+```
+mvn clean install   # add -DskipTests if you don't want to run tests
+```


[10/16] [HELIX-270] Include documentation for previous version on the website

Posted by ka...@apache.org.
http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_admin.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_admin.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_admin.md
new file mode 100644
index 0000000..9c24b43
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_admin.md
@@ -0,0 +1,407 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Admin Operations</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Admin Operations
+
+Helix provides a set of admin APIs for cluster management operations. They are supported via:
+
+* _Java API_
+* _Commandline interface_
+* _REST interface via helix-admin-webapp_
+
+### Java API
+See interface [_org.apache.helix.HelixAdmin_](http://helix.incubator.apache.org/javadocs/0.6.2-incubating/reference/org/apache/helix/HelixAdmin.html)
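+
+For example, here is a minimal sketch of a few common operations (the cluster, instance, and resource names are hypothetical; it assumes Zookeeper is listening on localhost:2199):
+
+```
+HelixAdmin admin = new ZKHelixAdmin("localhost:2199");
+
+// disable an instance before taking it down for maintenance
+admin.enableInstance("MYCLUSTER", "localhost_12913", false);
+
+// recompute the partition-to-instance assignment with 3 replicas
+admin.rebalance("MYCLUSTER", "myDB", 3);
+
+// drop the resource once it is no longer needed
+admin.dropResource("MYCLUSTER", "myDB");
+```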
+
+### Command-line interface
+The command-line tool comes with the helix-core package:
+
+Get the command-line tool:
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+./build
+cd helix-core/target/helix-core-pkg/bin
+chmod +x *.sh
+```
+
+Get help:
+
+```
+./helix-admin.sh --help
+```
+
+All other commands have this form:
+
+```
+  ./helix-admin.sh --zkSvr <ZookeeperServerAddress> <command> <parameters>
+```
+
+Admin commands and brief description:
+
+| Command syntax | Description |
+| -------------- | ----------- |
+| _\-\-activateCluster \<clusterName controllerCluster true/false\>_ | Enable/disable a cluster in distributed controller mode |
+| _\-\-addCluster \<clusterName\>_ | Add a new cluster |
+| _\-\-addIdealState \<clusterName resourceName fileName.json\>_ | Add an ideal state to a cluster |
+| _\-\-addInstanceTag \<clusterName instanceName tag\>_ | Add a tag to an instance |
+| _\-\-addNode \<clusterName instanceId\>_ | Add an instance to a cluster |
+| _\-\-addResource \<clusterName resourceName partitionNumber stateModelName\>_ | Add a new resource to a cluster |
+| _\-\-addResourceProperty \<clusterName resourceName propertyName propertyValue\>_ | Add a resource property |
+| _\-\-addStateModelDef \<clusterName fileName.json\>_ | Add a State model definition to a cluster |
+| _\-\-dropCluster \<clusterName\>_ | Delete a cluster |
+| _\-\-dropNode \<clusterName instanceId\>_ | Remove a node from a cluster |
+| _\-\-dropResource \<clusterName resourceName\>_ | Remove an existing resource from a cluster |
+| _\-\-enableCluster \<clusterName true/false\>_ | Enable/disable a cluster |
+| _\-\-enableInstance \<clusterName instanceId true/false\>_ | Enable/disable an instance |
+| _\-\-enablePartition \<true/false clusterName nodeId resourceName partitionName\>_ | Enable/disable a partition |
+| _\-\-getConfig \<configScope configScopeArgs configKeys\>_ | Get user configs |
+| _\-\-getConstraints \<clusterName constraintType\>_ | Get constraints |
+| _\-\-help_ | Print help information |
+| _\-\-instanceGroupTag \<instanceTag\>_ | Specify instance group tag, used with rebalance command |
+| _\-\-listClusterInfo \<clusterName\>_ | Show information of a cluster |
+| _\-\-listClusters_ | List all clusters |
+| _\-\-listInstanceInfo \<clusterName instanceId\>_ | Show information of an instance |
+| _\-\-listInstances \<clusterName\>_ | List all instances in a cluster |
+| _\-\-listPartitionInfo \<clusterName resourceName partitionName\>_ | Show information of a partition |
+| _\-\-listResourceInfo \<clusterName resourceName\>_ | Show information of a resource |
+| _\-\-listResources \<clusterName\>_ | List all resources in a cluster |
+| _\-\-listStateModel \<clusterName stateModelName\>_ | Show information of a state model |
+| _\-\-listStateModels \<clusterName\>_ | List all state models in a cluster |
+| _\-\-maxPartitionsPerNode \<maxPartitionsPerNode\>_ | Specify the max partitions per instance, used with addResourceGroup command |
+| _\-\-rebalance \<clusterName resourceName replicas\>_ | Rebalance a resource |
+| _\-\-removeConfig \<configScope configScopeArgs configKeys\>_ | Remove user configs |
+| _\-\-removeConstraint \<clusterName constraintType constraintId\>_ | Remove a constraint |
+| _\-\-removeInstanceTag \<clusterName instanceId tag\>_ | Remove a tag from an instance |
+| _\-\-removeResourceProperty \<clusterName resourceName propertyName\>_ | Remove a resource property |
+| _\-\-resetInstance \<clusterName instanceId\>_ | Reset all erroneous partitions on an instance |
+| _\-\-resetPartition \<clusterName instanceId resourceName partitionName\>_ | Reset an erroneous partition |
+| _\-\-resetResource \<clusterName resourceName\>_ | Reset all erroneous partitions of a resource |
+| _\-\-setConfig \<configScope configScopeArgs configKeyValueMap\>_ | Set user configs |
+| _\-\-setConstraint \<clusterName constraintType constraintId constraintKeyValueMap\>_ | Set a constraint |
+| _\-\-swapInstance \<clusterName oldInstance newInstance\>_ | Swap an old instance with a new instance |
+| _\-\-zkSvr \<ZookeeperServerAddress\>_ | Provide zookeeper address |
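+
+For example, to take an instance out of rotation and bring it back (the cluster and instance names here are hypothetical, following the syntax in the table above):
+
+```
+./helix-admin.sh --zkSvr localhost:2199 --enableInstance MYCLUSTER localhost_12913 false
+./helix-admin.sh --zkSvr localhost:2199 --enableInstance MYCLUSTER localhost_12913 true
+```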
+
+### REST interface
+
+The REST interface comes with the helix-admin-webapp package:
+
+```
+git clone https://git-wip-us.apache.org/repos/asf/incubator-helix.git
+cd incubator-helix
+./build
+cd helix-admin-webapp/target/helix-admin-webapp-pkg/bin
+chmod +x *.sh
+./run-rest-admin.sh --zkSvr <zookeeperAddress> --port <port>   # make sure zookeeper is running
+```
+
+#### URL and support methods
+
+* _/clusters_
+    * List all clusters
+
+    ```
+      curl http://localhost:8100/clusters
+    ```
+
+    * Add a cluster
+    
+    ```
+      curl -d 'jsonParameters={"command":"addCluster","clusterName":"MyCluster"}' -H "Content-Type: application/json" http://localhost:8100/clusters
+    ```
+
+* _/clusters/{clusterName}_
+    * List cluster information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster
+    ```
+
+    * Enable/disable a cluster in distributed controller mode
+    
+    ```
+      curl -d 'jsonParameters={"command":"activateCluster","grandCluster":"MyControllerCluster","enabled":"true"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster
+    ```
+
+    * Remove a cluster
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster
+    ```
+    
+* _/clusters/{clusterName}/resourceGroups_
+    * List all resources in a cluster
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/resourceGroups
+    ```
+    
+    * Add a resource to cluster
+    
+    ```
+      curl -d 'jsonParameters={"command":"addResource","resourceGroupName":"MyDB","partitions":"8","stateModelDefRef":"MasterSlave" }' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups
+    ```
+
+* _/clusters/{clusterName}/resourceGroups/{resourceName}_
+    * List resource information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    ```
+    
+    * Drop a resource
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    ```
+
+    * Reset all erroneous partitions of a resource
+    
+    ```
+      curl -d 'jsonParameters={"command":"resetResource"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB
+    ```
+
+* _/clusters/{clusterName}/resourceGroups/{resourceName}/idealState_
+    * Rebalance a resource
+    
+    ```
+      curl -d 'jsonParameters={"command":"rebalance","replicas":"3"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    ```
+
+    * Add an ideal state
+    
+    ```
+    echo jsonParameters={
+    "command":"addIdealState"
+       }&newIdealState={
+      "id" : "MyDB",
+      "simpleFields" : {
+        "IDEAL_STATE_MODE" : "AUTO",
+        "NUM_PARTITIONS" : "8",
+        "REBALANCE_MODE" : "SEMI_AUTO",
+        "REPLICAS" : "0",
+        "STATE_MODEL_DEF_REF" : "MasterSlave",
+        "STATE_MODEL_FACTORY_NAME" : "DEFAULT"
+      },
+      "listFields" : {
+      },
+      "mapFields" : {
+        "MyDB_0" : {
+          "localhost_1001" : "MASTER",
+          "localhost_1002" : "SLAVE"
+        }
+      }
+    }
+    > newIdealState.json
+    curl -d @'./newIdealState.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    ```
+    
+    * Add resource property
+    
+    ```
+      curl -d 'jsonParameters={"command":"addResourceProperty","REBALANCE_TIMER_PERIOD":"500"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/idealState
+    ```
+    
+* _/clusters/{clusterName}/resourceGroups/{resourceName}/externalView_
+    * Show resource external view
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/resourceGroups/MyDB/externalView
+    ```
+* _/clusters/{clusterName}/instances_
+    * List all instances
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/instances
+    ```
+
+    * Add an instance
+    
+    ```
+    curl -d 'jsonParameters={"command":"addInstance","instanceNames":"localhost_1001"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
+    ```
+    
+    * Swap an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"swapInstance","oldInstance":"localhost_1001", "newInstance":"localhost_1002"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances
+    ```
+* _/clusters/{clusterName}/instances/{instanceName}_
+    * Show instance information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+    
+    * Enable/disable an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"enableInstance","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+
+    * Drop an instance
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+    
+    * Disable/enable partitions on an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"enablePartition","resource": "MyDB","partition":"MyDB_0",  "enabled" : "false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+    
+    * Reset an erroneous partition on an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"resetPartition","resource": "MyDB","partition":"MyDB_0"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+
+    * Reset all erroneous partitions on an instance
+    
+    ```
+      curl -d 'jsonParameters={"command":"resetInstance"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/instances/localhost_1001
+    ```
+
+* _/clusters/{clusterName}/configs_
+    * Get user cluster level config
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/configs/cluster
+    ```
+    
+    * Set user cluster level config
+    
+    ```
+      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
+    ```
+
+    * Remove user cluster level config
+    
+    ```
+    curl -d 'jsonParameters={"command":"removeConfig","configs":"key1,key2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/cluster
+    ```
+    
+    * Get/set/remove user participant level config
+    
+    ```
+      curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/participant/localhost_1001
+    ```
+    
+    * Get/set/remove resource level config
+    
+    ```
+    curl -d 'jsonParameters={"command":"setConfig","configs":"key1=value1,key2=value2"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/configs/resource/MyDB
+    ```
+
+* _/clusters/{clusterName}/controller_
+    * Show controller information
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/Controller
+    ```
+    
+    * Enable/disable cluster
+    
+    ```
+      curl -d 'jsonParameters={"command":"enableCluster","enabled":"false"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/Controller
+    ```
+
+* _/zkPath/{path}_
+    * Get information for zookeeper path
+    
+    ```
+      curl http://localhost:8100/zkPath/MyCluster
+    ```
+
+* _/clusters/{clusterName}/StateModelDefs_
+    * Show all state model definitions
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/StateModelDefs
+    ```
+
+    * Add a state model definition
+    
+    ```
+      echo jsonParameters={
+        "command":"addStateModelDef"
+       }&newStateModelDef={
+          "id" : "OnlineOffline",
+          "simpleFields" : {
+            "INITIAL_STATE" : "OFFLINE"
+          },
+          "listFields" : {
+            "STATE_PRIORITY_LIST" : [ "ONLINE", "OFFLINE", "DROPPED" ],
+            "STATE_TRANSITION_PRIORITYLIST" : [ "OFFLINE-ONLINE", "ONLINE-OFFLINE", "OFFLINE-DROPPED" ]
+          },
+          "mapFields" : {
+            "DROPPED.meta" : {
+              "count" : "-1"
+            },
+            "OFFLINE.meta" : {
+              "count" : "-1"
+            },
+            "OFFLINE.next" : {
+              "DROPPED" : "DROPPED",
+              "ONLINE" : "ONLINE"
+            },
+            "ONLINE.meta" : {
+              "count" : "R"
+            },
+            "ONLINE.next" : {
+              "DROPPED" : "OFFLINE",
+              "OFFLINE" : "OFFLINE"
+            }
+          }
+        }
+        > newStateModelDef.json
+        curl -d @'./newStateModelDef.json' -H 'Content-Type: application/json' http://localhost:8100/clusters/MyCluster/StateModelDefs
+    ```
+
+* _/clusters/{clusterName}/StateModelDefs/{stateModelDefName}_
+    * Show a state model definition
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/StateModelDefs/OnlineOffline
+    ```
+
+* _/clusters/{clusterName}/constraints/{constraintType}_
+    * Show all constraints
+    
+    ```
+      curl http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT
+    ```
+
+    * Set a constraint
+    
+    ```
+       curl -d 'jsonParameters={"constraintAttributes":"RESOURCE=MyDB,CONSTRAINT_VALUE=1"}' -H "Content-Type: application/json" http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    ```
+    
+    * Remove a constraint
+    
+    ```
+      curl -X DELETE http://localhost:8100/clusters/MyCluster/constraints/MESSAGE_CONSTRAINT/MyConstraint
+    ```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_controller.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_controller.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_controller.md
new file mode 100644
index 0000000..8e7e7ad
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_controller.md
@@ -0,0 +1,94 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Controller</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Controller
+
+Next, let\'s implement the controller.  This is the brain of the cluster.  Helix makes sure there is exactly one active controller running the cluster.
+
+### Start the Helix agent
+
+
+It requires the following parameters:
+ 
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
+* instanceType: Type of the process. This can be one of the following types; in this case, use CONTROLLER:
+    * CONTROLLER: Process that controls the cluster; any number of controllers can be started, but only one will be active at any given time.
+    * PARTICIPANT: Process that performs the actual task in the distributed system. 
+    * SPECTATOR: Process that observes the changes in the cluster.
+    * ADMIN: To carry out system admin actions.
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+
+```
+      manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                      instanceName,
+                                                      instanceType,
+                                                      zkConnectString);
+```
+
+### Controller Code
+
+The Controller needs to know about all changes in the cluster. Helix takes care of this with the default implementation.
+If you need additional functionality, see GenericHelixController for how to configure the pipeline.
+
+```
+      manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                          instanceName,
+                                                          InstanceType.CONTROLLER,
+                                                          zkConnectString);
+     manager.connect();
+     GenericHelixController controller = new GenericHelixController();
+     manager.addConfigChangeListener(controller);
+     manager.addLiveInstanceChangeListener(controller);
+     manager.addIdealStateChangeListener(controller);
+     manager.addExternalViewChangeListener(controller);
+     manager.addControllerListener(controller);
+```
+The snippet above shows how the controller is started. You can also start the controller using the command-line interface.
+  
+```
+cd helix/helix-core/target/helix-core-pkg/bin
+./run-helix-controller.sh --zkSvr <Zookeeper ServerAddress (Required)>  --cluster <Cluster name (Required)>
+```
+
+### Controller deployment modes
+
+Helix provides multiple options to deploy the controller.
+
+#### STANDALONE
+
+The Controller can be started as a separate process to manage a cluster. This is the recommended approach. However, since a single controller can be a single point of failure, multiple controller processes are required for reliability.  Even if multiple controllers are running, only one will be actively managing the cluster at any time, as decided by a leader-election process. If the leader fails, another leader will take over managing the cluster.
+
+Even though we recommend this method of deployment, it has the drawback of having to manage an additional service for each cluster. See the CONTROLLER AS A SERVICE option below.
+
+#### EMBEDDED
+
+If setting up a separate controller process is not viable, then it is possible to embed the controller as a library in each of the participants.
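+
+A minimal sketch of this approach (it assumes zkConnectString and clusterName are already in scope in the participant process; the controller name is hypothetical):
+
+```
+// start an embedded controller in the same JVM as the participant
+HelixManager controllerManager =
+    HelixControllerMain.startHelixController(zkConnectString, clusterName,
+        "embeddedController", HelixControllerMain.STANDALONE);
+```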
+
+#### CONTROLLER AS A SERVICE
+
+One of the cool features we added in Helix is to use a set of controllers to manage a large number of clusters. 
+
+For example, if you have X clusters to be managed, instead of deploying X*3 controllers (3 per cluster for fault tolerance), one can deploy just 3 controllers.  Each controller then manages X/3 clusters.  If any controller fails, the remaining two manage X/2 clusters each.
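+
+A sketch of how a single controller process joins such a shared pool (it assumes a controller cluster named MyControllerCluster has already been created and activated via the admin APIs; the instance name is hypothetical):
+
+```
+HelixManager distributedController =
+    HelixManagerFactory.getZKHelixManager("MyControllerCluster",
+                                          "controllerNode_1",
+                                          InstanceType.CONTROLLER_PARTICIPANT,
+                                          zkConnectString);
+distributedController.connect();
+```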
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_health.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_health.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_health.md
new file mode 100644
index 0000000..e1a7f3c
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_health.md
@@ -0,0 +1,46 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Customizing Heath Checks</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Customizing Health Checks
+
+In this chapter, we\'ll learn how to customize the health check, based on metrics of your distributed system.  
+
+### Health Checks
+
+Note: _this is currently in development and not yet ready for production._
+
+Helix provides the ability for each node in the system to report health metrics on a periodic basis. 
+
+Helix supports multiple ways to aggregate these metrics:
+
+* SUM
+* AVG
+* EXPONENTIAL DECAY
+* WINDOW
+
+Helix persists the aggregated value only.
+
+Applications can define a threshold on the aggregate values according to the SLAs, and when the SLA is violated Helix will fire an alert. 
+Currently Helix only fires an alert, but in a future release we plan to use these metrics to either mark the node dead or load balance the partitions.
+This feature will be valuable for distributed systems that support multi-tenancy and have a large variation in work load patterns.  In addition, this can be used to detect skewed partitions (hotspots) and rebalance the cluster.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_messaging.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_messaging.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_messaging.md
new file mode 100644
index 0000000..e1f0385
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_messaging.md
@@ -0,0 +1,71 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Messaging</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Messaging
+
+In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster.  Nodes in a distributed system commonly need a mechanism to interact with each other, and this feature is quite useful in practice.
+
+### Example: Bootstrapping a Replica
+
+Consider a search system where an index replica starts up without an index. A typical solution is to get the index from a common location, or to copy it from another replica.
+
+Helix provides a messaging API for intra-cluster communication between nodes in the system.  It lets you specify the message recipient in terms of resource, partition, and state rather than specifying hostnames, and it ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
+Since Helix is aware of the global state of the system, it can send the message to appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
+
+This is a very generic API and can also be used to schedule various periodic tasks in the cluster, such as data backups, log cleanup, etc.
+System admins can also perform ad-hoc tasks, such as on-demand backups or a system command (such as rm -rf ;) across all nodes of the cluster.
+
+```
+      ClusterMessagingService messagingService = manager.getMessagingService();
+
+      // Construct the Message
+      Message requestBackupUriRequest = new Message(
+          MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+      requestBackupUriRequest
+          .setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
+      requestBackupUriRequest.setMsgState(MessageState.NEW);
+
+      // Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
+      Criteria recipientCriteria = new Criteria();
+      recipientCriteria.setInstanceName("%");
+      recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+      recipientCriteria.setResource("MyDB");
+      recipientCriteria.setPartition("");
+
+      // Should be processed only by process(es) that are active at the time of sending the message
+      //   This means that if the recipient is restarted after the message is sent, the message will not be processed.
+      recipientCriteria.setSessionSpecific(true);
+
+      // wait for 30 seconds
+      int timeout = 30000;
+
+      // the handler that will be invoked when any recipient responds to the message.
+      BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
+
+      // this will return only after all recipients respond or after timeout
+      int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
+          requestBackupUriRequest, responseHandler, timeout);
+```
+
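+The response handler extends AsyncCallback.  Here is a minimal sketch of what the BootstrapReplyHandler referenced above might look like (the BOOTSTRAP_URL result field is hypothetical):
+
+```
+public class BootstrapReplyHandler extends AsyncCallback {
+  @Override
+  public void onTimeOut() {
+    // invoked if some recipients did not reply within the timeout
+  }
+
+  @Override
+  public void onReplyMessage(Message message) {
+    // invoked once per reply; e.g. read the bootstrap URL the replica sent back
+    String bootstrapUrl = message.getResultMap().get("BOOTSTRAP_URL");  // hypothetical field
+  }
+}
+```
+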
+See HelixManager.DefaultMessagingService in [Javadocs](http://helix.incubator.apache.org/javadocs/0.6.2-incubating/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more info.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_participant.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_participant.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_participant.md
new file mode 100644
index 0000000..d2812da
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_participant.md
@@ -0,0 +1,105 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Participant</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Participant
+
+In this chapter, we\'ll learn how to implement a Participant, which is a primary functional component of a distributed system.
+
+
+### Start the Helix agent
+
+The Helix agent is a common component that connects each system component with the controller.
+
+It requires the following parameters:
+ 
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
+* instanceType: Type of the process. This can be one of the following types; in this case, use PARTICIPANT:
+    * CONTROLLER: Process that controls the cluster; any number of controllers can be started, but only one will be active at any given time.
+    * PARTICIPANT: Process that performs the actual task in the distributed system. 
+    * SPECTATOR: Process that observes the changes in the cluster.
+    * ADMIN: To carry out system admin actions.
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
+
+After the Helix manager instance is created, the only thing that needs to be registered is the state model factory.
+The methods of the State Model will be called when the controller sends transitions to the Participant.  In this example, we\'ll use the OnlineOffline factory.  Other options include:
+
+* MasterSlaveStateModelFactory
+* LeaderStandbyStateModelFactory
+* BootstrapHandler
+* _An application defined state model factory_
+
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.PARTICIPANT,
+                                                zkConnectString);
+StateMachineEngine stateMach = manager.getStateMachineEngine();
+
+// create a state model factory that returns a state model object for each partition
+stateModelFactory = new OnlineOfflineStateModelFactory();
+// stateModelType is the name of the state model definition, e.g. "OnlineOffline"
+stateMach.registerStateModelFactory(stateModelType, stateModelFactory);
+manager.connect();
+```
+
+Helix doesn\'t know what it means to change from OFFLINE\-\-\>ONLINE or ONLINE\-\-\>OFFLINE.  The following code snippet shows where you insert your system logic for these two state transitions.
+
+```
+public class OnlineOfflineStateModelFactory extends
+        StateModelFactory<StateModel> {
+    @Override
+    public StateModel createNewStateModel(String stateUnitKey) {
+        OnlineOfflineStateModel stateModel = new OnlineOfflineStateModel();
+        return stateModel;
+    }
+    @StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFLINE")
+    public static class OnlineOfflineStateModel extends StateModel {
+
+        @Transition(from = "OFFLINE", to = "ONLINE")
+        public void onBecomeOnlineFromOffline(Message message,
+                NotificationContext context) {
+
+            System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
+
+            ////////////////////////////////////////////////////////////////////////////////////////////////
+            // Application logic to handle transition                                                     //
+            // For example, you might start a service, run initialization, etc                            //
+            ////////////////////////////////////////////////////////////////////////////////////////////////
+        }
+
+        @Transition(from = "ONLINE", to = "OFFLINE")
+        public void onBecomeOfflineFromOnline(Message message,
+                NotificationContext context) {
+
+            System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
+
+            ////////////////////////////////////////////////////////////////////////////////////////////////
+            // Application logic to handle transition                                                     //
+            // For example, you might shutdown a service, log this event, or change monitoring settings   //
+            ////////////////////////////////////////////////////////////////////////////////////////////////
+        }
+    }
+}
+```
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_propstore.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_propstore.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_propstore.md
new file mode 100644
index 0000000..8e7e5b5
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_propstore.md
@@ -0,0 +1,34 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Application Property Store</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Application Property Store
+
+In this chapter, we\'ll learn how to use the application property store.
+
+### Property Store
+
+It is common for an application to need distributed, shared data structures.  Helix uses Zookeeper to store the application data and can therefore provide notifications when the data changes.
+
+While you could use Zookeeper directly, Helix provides a write-through cache on top of the data.  This is far more efficient than reading from Zookeeper on every access.
+
+See [HelixManager.getHelixPropertyStore](http://helix.incubator.apache.org/javadocs/0.6.2-incubating/reference/org/apache/helix/store/package-summary.html) for details.
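+
+As a rough illustration, here is a minimal sketch of reading and writing through the property store. It assumes a connected HelixManager named `manager`; the znode path `/myApp/config` and the record name are hypothetical:
+
+```
+ZkHelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
+
+// write a record; AccessOption.PERSISTENT makes the znode persistent
+ZNRecord record = new ZNRecord("myConfig");
+record.setSimpleField("setting", "value");
+store.set("/myApp/config", record, AccessOption.PERSISTENT);
+
+// reads are served from the write-through cache when possible
+ZNRecord fetched = store.get("/myApp/config", null, AccessOption.PERSISTENT);
+```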

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_rebalance.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_rebalance.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_rebalance.md
new file mode 100644
index 0000000..8f42a5a
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_rebalance.md
@@ -0,0 +1,181 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Rebalancing Algorithms</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Rebalancing Algorithms
+
+The placement of partitions in a distributed system is essential for the reliability and scalability of the system.  For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can satisfy this guarantee.  Helix provides a variant of consistent hashing based on the RUSH algorithm, among others.
+
+This means that, given the number of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
+
+* Each node has the same number of partitions
+* Replicas of the same partition are not placed on the same node
+* When a node fails, its partitions are redistributed evenly among the remaining nodes
+* When new nodes are added, the number of partitions moved is minimized while still satisfying the above criteria
+
+Helix employs a rebalancing algorithm to compute the _ideal state_ of the system.  When the _current state_ differs from the _ideal state_, Helix uses the _ideal state_ as the target and computes the appropriate transitions needed to bring the system to it.
+
+Helix makes it easy to perform this operation, while giving you control over the algorithm.  In this section, we\'ll see how to implement the desired behavior.
+
+Helix has four options for rebalancing, in increasing order of customization by the system builder:
+
+* FULL_AUTO
+* SEMI_AUTO
+* CUSTOMIZED
+* USER_DEFINED
+
+```
+            |FULL_AUTO     |  SEMI_AUTO | CUSTOMIZED|  USER_DEFINED  |
+            ---------------------------------------------------------|
+   LOCATION | HELIX        |  APP       |  APP      |      APP       |
+            ---------------------------------------------------------|
+      STATE | HELIX        |  HELIX     |  APP      |      APP       |
+            ----------------------------------------------------------
+```
+
+
+### FULL_AUTO
+
+When the rebalance mode is set to FULL_AUTO, Helix controls both the location and the state of each replica. This option is useful for applications where creating a replica is not expensive.
+
+For example, consider this system that uses a MasterSlave state model, with 3 partitions and 2 replicas in the ideal state.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "FULL_AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [],
+    "MyResource_1" : [],
+    "MyResource_2" : []
+  },
+  "mapFields" : {
+  }
+}
+```
+
+If there are 3 nodes in the cluster, then Helix will balance the masters and slaves equally.  The ideal state is therefore:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Another typical example is evenly distributing a group of tasks among the currently healthy processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node. 
+When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes, resulting in a balanced 20 tasks per node. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node, resulting in a balanced distribution of 12 tasks per node.
+
+#### SEMI_AUTO
+
+When the application needs to control the placement of the replicas, use the SEMI_AUTO rebalance mode.
+
+Example: In the ideal state below, the partition \'MyResource_0\' is constrained to be placed only on node1 or node2.  The choice of _state_ is still controlled by Helix.  That means MyResource_0.MASTER could be on node1 and MyResource_0.SLAVE on node2, or vice-versa but neither would be placed on node3.
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "SEMI_AUTO",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  }
+  "listFields" : {
+    "MyResource_0" : [node1, node2],
+    "MyResource_1" : [node2, node3],
+    "MyResource_2" : [node3, node1]
+  },
+  "mapFields" : {
+  }
+}
+```
+
+The MasterSlave state model requires that a partition has exactly one MASTER at all times, and the other replicas should be SLAVEs.  In this simple example with 2 replicas per partition, there would be one MASTER and one SLAVE.  Upon failover, a SLAVE has to assume mastership, and a new SLAVE will be generated.
+
+In this mode, when node1 fails, unlike in FULL_AUTO mode, the partition is _not_ moved from node1 to node3. Instead, Helix will decide to change the state of MyResource_0 on node2 from SLAVE to MASTER, based on the system constraints.
+
+#### CUSTOMIZED
+
+Helix offers a third mode called CUSTOMIZED, in which the application controls the placement _and_ state of each replica. The application needs to implement a callback interface that Helix invokes when the cluster state changes.
+Within this callback, the application can recompute the ideal state. Helix will then issue the appropriate transitions so that the _ideal state_ and the _current state_ converge.
+
+Here\'s an example, again with 3 partitions, 2 replicas per partition, and the MasterSlave state model:
+
+```
+{
+  "id" : "MyResource",
+  "simpleFields" : {
+    "REBALANCE_MODE" : "CUSTOMIZED",
+    "NUM_PARTITIONS" : "3",
+    "REPLICAS" : "2",
+    "STATE_MODEL_DEF_REF" : "MasterSlave",
+  },
+  "mapFields" : {
+    "MyResource_0" : {
+      "N1" : "MASTER",
+      "N2" : "SLAVE",
+    },
+    "MyResource_1" : {
+      "N2" : "MASTER",
+      "N3" : "SLAVE",
+    },
+    "MyResource_2" : {
+      "N3" : "MASTER",
+      "N1" : "SLAVE",
+    }
+  }
+}
+```
+
+Suppose the current state of the system is 'MyResource_0' -> {N1:MASTER, N2:SLAVE}, and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE, N2:MASTER}. While the application decides which node is MASTER and which is SLAVE, Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters, violating the MasterSlave constraint that there is exactly one MASTER at a time. Helix will first issue MASTER-->SLAVE to N1, and only after that transition completes will it issue SLAVE-->MASTER to N2.
+
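+As a rough sketch of how an application might update the CUSTOMIZED ideal state, the following code swaps the MASTER and SLAVE for MyResource_0 through HelixAdmin. The node names and the `helixAdmin` handle are illustrative:
+
+```
+IdealState idealState = helixAdmin.getResourceIdealState(clusterName, "MyResource");
+Map<String, String> stateMap = new HashMap<String, String>();
+stateMap.put("N1", "SLAVE");
+stateMap.put("N2", "MASTER");
+// overwrite the placement and state assignment for this partition
+idealState.getRecord().setMapField("MyResource_0", stateMap);
+helixAdmin.setResourceIdealState(clusterName, "MyResource", idealState);
+```
+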
+#### USER_DEFINED
+
+For maximum flexibility, Helix exposes an interface that allows applications to plug in custom rebalancing logic. By providing the name of a class that implements the Rebalancer interface, Helix will automatically invoke that class whenever there is a change to the live participants in the cluster. For more, see [User-Defined Rebalancer](./tutorial_user_def_rebalancer.html).
+
+#### Backwards Compatibility
+
+In previous versions, FULL_AUTO was called AUTO_REBALANCE and SEMI_AUTO was called AUTO. Furthermore, they were presented as the IDEAL_STATE_MODE. Helix supports both IDEAL_STATE_MODE and REBALANCE_MODE, but IDEAL_STATE_MODE is now deprecated and may be phased out in future versions.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_spectator.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_spectator.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_spectator.md
new file mode 100644
index 0000000..24c1cf4
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_spectator.md
@@ -0,0 +1,76 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Spectator</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Spectator
+
+Next, we\'ll learn how to implement a Spectator.  Typically, a spectator needs to react to changes within the distributed system.  Examples: a client that needs to know where to send a request, or a topic consumer in a consumer group.  The spectator is automatically informed of changes in the _external state_ of the cluster, without needing any code to keep track of the other components in the system.
+
+### Start the Helix agent
+
+As with a Participant, the Helix agent is the common component that connects each system component with the controller.
+
+It requires the following parameters:
+
+* clusterName: A logical name to represent the group of nodes
+* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
+* instanceType: Type of the process. This can be one of the following types; in this case, use SPECTATOR:
+    * CONTROLLER: Process that controls the cluster. Any number of controllers can be started, but only one will be active at any given time.
+    * PARTICIPANT: Process that performs the actual tasks in the distributed system.
+    * SPECTATOR: Process that observes the changes in the cluster.
+    * ADMIN: Process that carries out system admin actions.
+* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3.
+
+After the Helix manager instance is created, the only thing that needs to be registered is the listener.  When the ExternalView changes, the listener is notified.
+
+### Spectator Code
+
+A spectator observes the cluster and is notified when the state of the system changes. Helix consolidates the state of the entire cluster in a single znode called the ExternalView.
+Helix provides a default implementation, RoutingTableProvider, that caches the cluster state and updates it when there is a change in the cluster.
+
+```
+manager = HelixManagerFactory.getZKHelixManager(clusterName,
+                                                instanceName,
+                                                InstanceType.SPECTATOR,
+                                                zkConnectString);
+manager.connect();
+RoutingTableProvider routingTableProvider = new RoutingTableProvider();
+manager.addExternalViewChangeListener(routingTableProvider);
+```
+
+In the following code snippet, the application sends the request to a valid instance by interrogating the external view.  Suppose the desired resource for this request is in the partition myDB_1.
+
+```
+// instances = routingTableProvider.getInstances("RESOURCE_NAME", "PARTITION_NAME", "PARTITION_STATE");
+instances = routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
+
+////////////////////////////////////////////////////////////////////////////////////////////////
+// Application-specific code to send a request to one of the instances                        //
+////////////////////////////////////////////////////////////////////////////////////////////////
+
+theInstance = instances.get(0);  // should choose an instance and throw an exception if none are available
+result = theInstance.sendRequest(yourApplicationRequest, responseObject);
+
+```
+
+When the external view changes, the RoutingTableProvider is updated automatically; the application simply queries it again so that subsequent requests are routed to a valid instance.
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_state.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_state.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_state.md
new file mode 100644
index 0000000..4f7b1b5
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_state.md
@@ -0,0 +1,131 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - State Machine Configuration</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): State Machine Configuration
+
+In this chapter, we\'ll learn about the state models provided by Helix, and how to create your own custom state model.
+
+## State Models
+
+Helix comes with 3 default state models that are commonly used.  It is possible to have multiple state models in a cluster.
+Every resource that is added should be configured to use a state model that governs its _ideal state_.
+
+### MASTER-SLAVE
+
+* 3 states: OFFLINE, SLAVE, MASTER
+* Maximum number of masters: 1
+* The number of slaves is determined by the replication factor, which can be specified when adding the resource.
+
+
+### ONLINE-OFFLINE
+
+* Has 2 states: OFFLINE and ONLINE.  This simple state model is a good starting point for most applications.
+
+### LEADER-STANDBY
+
+* 1 Leader and multiple stand-bys.  The idea is that exactly one leader accomplishes a designated task; the stand-bys are ready to take over if the leader fails.
+
+## Constraints
+
+In addition to the state machine configuration, one can specify constraints on states and transitions.
+
+For example, one can say:
+
+* MASTER:1
+<br/>Maximum number of replicas in MASTER state at any time is 1
+
+* OFFLINE-SLAVE:5
+<br/>Maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5
+
+### Dynamic State Constraints
+
+We also support two dynamic upper bounds for the number of replicas in each state:
+
+* N: The number of replicas in the state is at most the number of live participants in the cluster
+* R: The number of replicas in the state is at most the specified replica count for the partition
+
+### State Priority
+
+Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 MASTER and 2 SLAVES, but only 1 node is active, Helix must promote it to MASTER. This behavior is achieved by providing the state priority list as \[MASTER, SLAVE\].
+
+### State Transition Priority
+
+Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. You can control this by overriding the priority order.
+
+## Special States
+
+### DROPPED
+
+The DROPPED state is used to signify a replica that was served by a given participant, but is no longer served. This allows Helix and its participants to effectively clean up. There are two requirements that every new state model should follow with respect to the DROPPED state:
+
+* The DROPPED state must be defined
+* There must be a path to DROPPED for every state in the model
+
+### ERROR
+
+The ERROR state is used whenever the participant serving a partition encountered an error and cannot continue to serve the partition. HelixAdmin has \"reset\" functionality to allow for participants to recover from the ERROR state.
+
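+As a sketch, resetting partitions in the ERROR state might look like the following, assuming a HelixAdmin handle named `admin` with illustrative instance, resource, and partition names:
+
+```
+// reset the named partitions on the given participant from ERROR back to their initial state
+admin.resetPartition(clusterName, "localhost_12001", "MyResource",
+    Arrays.asList("MyResource_0"));
+```
+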
+## Annotated Example
+
+Below is a complete definition of a Master-Slave state model. Notice the fields marked REQUIRED; these are essential for any state model definition.
+
+```
+StateModelDefinition stateModel = new StateModelDefinition.Builder("MasterSlave")
+  // OFFLINE is the state that the system starts in (initial state is REQUIRED)
+  .initialState("OFFLINE")
+
+  // Lowest number here indicates highest priority; no value indicates lowest priority
+  .addState("MASTER", 1)
+  .addState("SLAVE", 2)
+  .addState("OFFLINE")
+
+  // Note the special inclusion of the DROPPED state (REQUIRED)
+  .addState(HelixDefinedState.DROPPED.toString())
+
+  // No more than one master allowed
+  .upperBound("MASTER", 1)
+
+  // R indicates an upper bound of number of replicas for each partition
+  .dynamicUpperBound("SLAVE", "R")
+
+  // Add some high-priority transitions
+  .addTransition("SLAVE", "MASTER", 1)
+  .addTransition("OFFLINE", "SLAVE", 2)
+
+  // Using the same priority value indicates that these transitions can fire in any order
+  .addTransition("MASTER", "SLAVE", 3)
+  .addTransition("SLAVE", "OFFLINE", 3)
+
+  // Not specifying a value defaults to lowest priority
+  // Notice the inclusion of the OFFLINE to DROPPED transition
+  // Since every state has a path to OFFLINE, they each now have a path to DROPPED (REQUIRED)
+  .addTransition("OFFLINE", HelixDefinedState.DROPPED.toString())
+
+  // Create the StateModelDefinition instance
+  .build();
+
+  // Use the isValid() function to make sure the StateModelDefinition will work without issues
+  Assert.assertTrue(stateModel.isValid());
+```
+
+

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_throttling.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_throttling.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_throttling.md
new file mode 100644
index 0000000..2317cf1
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_throttling.md
@@ -0,0 +1,38 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - Throttling</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): Throttling
+
+In this chapter, we\'ll learn how to control the parallel execution of cluster tasks.  Only a centralized cluster manager with global knowledge is capable of coordinating this decision.
+
+### Throttling
+
+Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some of the transitions may be lightweight, but some might involve moving data, which is quite expensive from a network and IOPS perspective.
+
+Helix allows applications to set a threshold on transitions. The threshold can be set at multiple scopes, as sketched after this list:
+
+* MessageType, e.g. STATE_TRANSITION
+* TransitionType, e.g. SLAVE-MASTER
+* Resource, e.g. database
+* Node, i.e. per-node maximum transitions in parallel
+
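+As a rough sketch, a cluster-wide cap on concurrent state transition messages might be set as follows, assuming a HelixAdmin handle named `admin`; the constraint id "maxTransitions" and the value 50 are illustrative:
+
+```
+ConstraintItemBuilder builder = new ConstraintItemBuilder();
+// cap concurrent STATE_TRANSITION messages across the cluster at 50
+builder.addConstraintAttribute("MESSAGE_TYPE", "STATE_TRANSITION")
+       .addConstraintAttribute("CONSTRAINT_VALUE", "50");
+admin.setConstraint(clusterName, ConstraintType.MESSAGE_CONSTRAINT,
+    "maxTransitions", builder.build());
+```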

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_user_def_rebalancer.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_user_def_rebalancer.md
new file mode 100644
index 0000000..7590002
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_user_def_rebalancer.md
@@ -0,0 +1,172 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - User-Defined Rebalancing</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): User-Defined Rebalancing
+
+Even though Helix can compute both the location and the state of replicas internally using a default fully-automatic rebalancer, specific applications may require rebalancing strategies that optimize for different requirements. Thus, Helix allows applications to plug in arbitrary rebalancer algorithms that implement a provided interface. One of the main design goals of Helix is to provide maximum flexibility to any distributed application, so it allows applications to fully implement the rebalancer, the core constraint solver in the system, if the application developer so chooses.
+
+Whenever the state of the cluster changes, as is the case when participants join or leave the cluster, Helix automatically calls the rebalancer to compute a new mapping of all the replicas in the resource. When using a pluggable rebalancer, the only required step is to register it with Helix. Subsequently, no additional bootstrapping steps are necessary. Helix uses reflection to look up and load the class dynamically at runtime. As a result, it is also technically possible to change the rebalancing strategy used at any time.
+
+The Rebalancer interface is as follows:
+
+```
+void init(HelixManager manager);
+
+IdealState computeNewIdealState(String resourceName, IdealState currentIdealState,
+    final CurrentStateOutput currentStateOutput, final ClusterDataCache clusterData);
+```
+The first parameter is the resource to rebalance, the second is the pre-existing ideal mapping, the third is a snapshot of the actual placements and state assignments, and the fourth is a full cache of all of the cluster data available to Helix. Internally, Helix implements the same interface for its own rebalancing routines, so a user-defined rebalancer is cognizant of the same information about the cluster as an internal implementation. Helix strives to provide applications the ability to implement algorithms that may require a large portion of the entire state of the cluster to make the best placement and state assignment decisions possible.
+
+An IdealState is a full representation of the location of each replica of each partition of a given resource. This is a simple representation of the placement that the algorithm believes is the best possible. If the placement meets all defined constraints, this is what will become the actual state of the distributed system.
+
+### Specifying a Rebalancer
+For implementations that set up the cluster through existing code, the following HelixAdmin calls will update the Rebalancer class:
+
+```
+IdealState idealState = helixAdmin.getResourceIdealState(clusterName, resourceName);
+idealState.setRebalanceMode(RebalanceMode.USER_DEFINED);
+idealState.setRebalancerClassName(className);
+helixAdmin.setResourceIdealState(clusterName, resourceName, idealState);
+```
+
+There are two key fields to set to specify that a pluggable rebalancer should be used. First, the rebalance mode should be set to USER_DEFINED, and second, the rebalancer class name should be set to a class that implements Rebalancer and is available on the classpath. The class name is a fully-qualified class name consisting of its package and its name. Without the USER_DEFINED mode, the user-defined rebalancer class will not be used even if specified. Furthermore, Helix will not attempt to rebalance the resources through its standard routines if the mode is USER_DEFINED, regardless of whether or not a rebalancer class is registered.
+
+### Example
+
+In the next release (0.7.0), we will provide a full example of a user-defined rebalancer in action.
+
+Consider the case where partitions are locks in a lock manager and 6 locks are to be distributed evenly to a set of participants, and only one participant can hold each lock. We can define a rebalancing algorithm that simply takes the modulus of the lock number and the number of participants to evenly distribute the locks across participants. Helix allows capping the number of partitions a participant can accept, but since locks are lightweight, we do not need to define a restriction in this case. The following is a succinct implementation of this algorithm.
+
+```
+@Override
+IdealState computeNewIdealState(String resourceName, IdealState currentIdealState,
+    final CurrentStateOutput currentStateOutput, final ClusterDataCache clusterData) {
+  // Get the list of live participants in the cluster
+  List<String> liveParticipants = new ArrayList<String>(clusterData.getLiveInstances().keySet());
+
+  // Count the number of participants allowed to lock each lock (in this example, this is 1)
+  int lockHolders = Integer.parseInt(currentIdealState.getReplicas());
+
+  // Fairly assign the lock state to the participants using a simple mod-based sequential
+  // assignment. For instance, if each lock can be held by 3 participants, lock 0 would be held
+  // by participants (0, 1, 2), lock 1 would be held by (1, 2, 3), and so on, wrapping around the
+  // number of participants as necessary.
+  int i = 0;
+  for (String partition : currentIdealState.getPartitionSet()) {
+    List<String> preferenceList = new ArrayList<String>();
+    for (int j = i; j < i + lockHolders; j++) {
+      int participantIndex = j % liveParticipants.size();
+      String participant = liveParticipants.get(participantIndex);
+      // enforce that a participant can only have one instance of a given lock
+      if (!preferenceList.contains(participant)) {
+        preferenceList.add(participant);
+      }
+    }
+    currentIdealState.setPreferenceList(partition, preferenceList);
+    i++;
+  }
+  // return the updated ideal state, which now carries the new preference lists
+  return currentIdealState;
+}
+```
+
+Here are the IdealState preference lists emitted by the user-defined rebalancer for a 3-participant system whenever there is a change to the set of participants.
+
+* Participant_A joins
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_A"],
+  "lock_2": ["Participant_A"],
+  "lock_3": ["Participant_A"],
+  "lock_4": ["Participant_A"],
+  "lock_5": ["Participant_A"],
+}
+```
+
+A preference list maps each partition of a resource to the participants serving each of its replicas. The state model is a simple LOCKED/RELEASED model, so participant A holds all lock partitions in the LOCKED state.
+
+* Participant_B joins
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_B"],
+  "lock_2": ["Participant_A"],
+  "lock_3": ["Participant_B"],
+  "lock_4": ["Participant_A"],
+  "lock_5": ["Participant_B"],
+}
+```
+
+Now that there are two participants, the simple mod-based function assigns every other lock to the second participant. On any system change, the rebalancer is invoked so that the application can define how to redistribute its resources.
+
+* Participant_C joins (steady state)
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_B"],
+  "lock_2": ["Participant_C"],
+  "lock_3": ["Participant_A"],
+  "lock_4": ["Participant_B"],
+  "lock_5": ["Participant_C"],
+}
+```
+
+This is the steady state of the system. Notice that four of the six locks now have a different owner. That is because of the naïve modulus-based assignment approach used by the user-defined rebalancer. However, the interface is flexible enough to allow you to employ consistent hashing or any other scheme if minimal movement is a system requirement.
+
+* Participant_B fails
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_C"],
+  "lock_2": ["Participant_A"],
+  "lock_3": ["Participant_C"],
+  "lock_4": ["Participant_A"],
+  "lock_5": ["Participant_C"],
+}
+```
+
+On any node failure, as in the case of node addition, the rebalancer is invoked automatically so that it can generate a new mapping as a response to the change. Helix ensures that the Rebalancer has the opportunity to reassign locks as required by the application.
+
+* Participant_B (or the replacement for the original Participant_B) rejoins
+
+```
+{
+  "lock_0": ["Participant_A"],
+  "lock_1": ["Participant_B"],
+  "lock_2": ["Participant_C"],
+  "lock_3": ["Participant_A"],
+  "lock_4": ["Participant_B"],
+  "lock_5": ["Participant_C"],
+}
+```
+
+The rebalancer was invoked once again and the resulting IdealState preference lists reflect the steady state.
+
+### Caveats
+- The rebalancer class must be available at runtime, or else Helix will not attempt to rebalance at all
+- The Helix controller will only take into account the preference lists in the new IdealState for this release. In 0.7.0, Helix rebalancers will be able to compute the full resource assignment, including the states.
+- Helix does not currently persist the new IdealState computed by the user-defined rebalancer. However, the Helix property store is available for saving any computed state. In 0.7.0, Helix will persist the result of running the rebalancer.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/markdown/tutorial_yaml.md
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/markdown/tutorial_yaml.md b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_yaml.md
new file mode 100644
index 0000000..0f8e0cc
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/markdown/tutorial_yaml.md
@@ -0,0 +1,102 @@
+<!---
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<head>
+  <title>Tutorial - YAML Cluster Setup</title>
+</head>
+
+# [Helix Tutorial](./Tutorial.html): YAML Cluster Setup
+
+As an alternative to using Helix Admin to set up the cluster, its resources, constraints, and the state model, Helix supports bootstrapping a cluster configuration based on a YAML file. Below is an annotated example of such a file for a simple distributed lock manager where a lock can only be LOCKED or RELEASED, and each lock only allows a single participant to hold it in the LOCKED state.
+
+```
+clusterName: lock-manager-custom-rebalancer # unique name for the cluster (required)
+resources:
+  - name: lock-group # unique resource name (required)
+    rebalancer: # required
+      mode: USER_DEFINED # required - USER_DEFINED means we will provide our own rebalancer
+      class: org.apache.helix.userdefinedrebalancer.LockManagerRebalancer # required for USER_DEFINED
+    partitions:
+      count: 12 # number of partitions for the resource (default is 1)
+      replicas: 1 # number of replicas per partition (default is 1)
+    stateModel:
+      name: lock-unlock # model name (required)
+      states: [LOCKED, RELEASED, DROPPED] # the list of possible states (required if model not built-in)
+      transitions: # the list of possible transitions (required if model not built-in)
+        - name: Unlock
+          from: LOCKED
+          to: RELEASED
+        - name: Lock
+          from: RELEASED
+          to: LOCKED
+        - name: DropLock
+          from: LOCKED
+          to: DROPPED
+        - name: DropUnlock
+          from: RELEASED
+          to: DROPPED
+        - name: Undrop
+          from: DROPPED
+          to: RELEASED
+      initialState: RELEASED # (required if model not built-in)
+    constraints:
+      state:
+        counts: # maximum number of replicas of a partition that can be in each state (required if model not built-in)
+          - name: LOCKED
+            count: "1"
+          - name: RELEASED
+            count: "-1"
+          - name: DROPPED
+            count: "-1"
+        priorityList: [LOCKED, RELEASED, DROPPED] # states in order of priority (all priorities equal if not specified)
+      transition: # transitions priority to enforce order that transitions occur
+        priorityList: [Unlock, Lock, Undrop, DropUnlock, DropLock] # all priorities equal if not specified
+participants: # list of nodes that can serve replicas (optional if dynamic joining is active, required otherwise)
+  - name: localhost_12001
+    host: localhost
+    port: 12001
+  - name: localhost_12002
+    host: localhost
+    port: 12002
+  - name: localhost_12003
+    host: localhost
+    port: 12003
+```
+
+Using a file like the one above, the cluster can be set up either with the command line:
+
+```
+incubator-helix/helix-core/target/helix-core/pkg/bin/YAMLClusterSetup.sh localhost:2199 lock-manager-config.yaml
+```
+
+or with code:
+
+```
+YAMLClusterSetup setup = new YAMLClusterSetup(zkAddress);
+InputStream input =
+    Thread.currentThread().getContextClassLoader()
+        .getResourceAsStream("lock-manager-config.yaml");
+YAMLClusterSetup.YAMLClusterConfig config = setup.setupCluster(input);
+```
+
+Some notes:
+
+- A rebalancer class is only required for the USER_DEFINED mode. It is ignored otherwise.
+
+- Built-in state models, like OnlineOffline, LeaderStandby, and MasterSlave, or state models that have already been added, only require a name for stateModel. If partition and/or replica counts are not provided, a value of 1 is assumed.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/.htaccess
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/.htaccess b/site-releases/0.6.2-incubating/src/site/resources/.htaccess
new file mode 100644
index 0000000..d5c7bf3
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/resources/.htaccess
@@ -0,0 +1,20 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+Redirect /download.html /download.cgi

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/download.cgi
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/download.cgi b/site-releases/0.6.2-incubating/src/site/resources/download.cgi
new file mode 100644
index 0000000..f9a0e30
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/resources/download.cgi
@@ -0,0 +1,22 @@
+#!/bin/sh
+# Just call the standard mirrors.cgi script. It will use download.html
+# as the input template.
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+exec /www/www.apache.org/dyn/mirrors/mirrors.cgi $*

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/images/HELIX-components.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/images/HELIX-components.png b/site-releases/0.6.2-incubating/src/site/resources/images/HELIX-components.png
new file mode 100644
index 0000000..c0c35ae
Binary files /dev/null and b/site-releases/0.6.2-incubating/src/site/resources/images/HELIX-components.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/images/PFS-Generic.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/images/PFS-Generic.png b/site-releases/0.6.2-incubating/src/site/resources/images/PFS-Generic.png
new file mode 100644
index 0000000..7eea3a0
Binary files /dev/null and b/site-releases/0.6.2-incubating/src/site/resources/images/PFS-Generic.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/images/RSYNC_BASED_PFS.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/images/RSYNC_BASED_PFS.png b/site-releases/0.6.2-incubating/src/site/resources/images/RSYNC_BASED_PFS.png
new file mode 100644
index 0000000..0cc55ae
Binary files /dev/null and b/site-releases/0.6.2-incubating/src/site/resources/images/RSYNC_BASED_PFS.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/images/bootstrap_statemodel.gif
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/images/bootstrap_statemodel.gif b/site-releases/0.6.2-incubating/src/site/resources/images/bootstrap_statemodel.gif
new file mode 100644
index 0000000..b8f8a42
Binary files /dev/null and b/site-releases/0.6.2-incubating/src/site/resources/images/bootstrap_statemodel.gif differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/images/helix-architecture.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/images/helix-architecture.png b/site-releases/0.6.2-incubating/src/site/resources/images/helix-architecture.png
new file mode 100644
index 0000000..6f69a2d
Binary files /dev/null and b/site-releases/0.6.2-incubating/src/site/resources/images/helix-architecture.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/images/helix-logo.jpg
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/images/helix-logo.jpg b/site-releases/0.6.2-incubating/src/site/resources/images/helix-logo.jpg
new file mode 100644
index 0000000..d6428f6
Binary files /dev/null and b/site-releases/0.6.2-incubating/src/site/resources/images/helix-logo.jpg differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/images/helix-znode-layout.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/images/helix-znode-layout.png b/site-releases/0.6.2-incubating/src/site/resources/images/helix-znode-layout.png
new file mode 100644
index 0000000..5bafc45
Binary files /dev/null and b/site-releases/0.6.2-incubating/src/site/resources/images/helix-znode-layout.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/images/statemachine.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/images/statemachine.png b/site-releases/0.6.2-incubating/src/site/resources/images/statemachine.png
new file mode 100644
index 0000000..43d27ec
Binary files /dev/null and b/site-releases/0.6.2-incubating/src/site/resources/images/statemachine.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/resources/images/system.png
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/resources/images/system.png b/site-releases/0.6.2-incubating/src/site/resources/images/system.png
new file mode 100644
index 0000000..f8a05c8
Binary files /dev/null and b/site-releases/0.6.2-incubating/src/site/resources/images/system.png differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/site-releases/0.6.2-incubating/src/site/site.xml
----------------------------------------------------------------------
diff --git a/site-releases/0.6.2-incubating/src/site/site.xml b/site-releases/0.6.2-incubating/src/site/site.xml
new file mode 100644
index 0000000..68cba65
--- /dev/null
+++ b/site-releases/0.6.2-incubating/src/site/site.xml
@@ -0,0 +1,119 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+<project name="Apache Helix">
+  <bannerLeft>
+    <src>images/helix-logo.jpg</src>
+    <href>http://helix.incubator.apache.org/site-releases/0.6.2-incubating-site</href>
+  </bannerLeft>
+  <bannerRight>
+    <src>http://incubator.apache.org/images/egg-logo.png</src>
+    <href>http://incubator.apache.org/</href>
+  </bannerRight>
+  <version position="none"/>
+
+  <publishDate position="right"/>
+
+  <skin>
+    <groupId>org.apache.maven.skins</groupId>
+    <artifactId>maven-fluido-skin</artifactId>
+    <version>1.3.0</version>
+  </skin>
+
+  <body>
+
+    <head>
+      <script type="text/javascript">
+
+        var _gaq = _gaq || [];
+        _gaq.push(['_setAccount', 'UA-3211522-12']);
+        _gaq.push(['_trackPageview']);
+
+        (function() {
+        var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+        var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+        })();
+
+      </script>
+
+    </head>
+
+    <breadcrumbs position="left">
+      <item name="Apache Helix" href="http://helix.incubator.apache.org/"/>
+      <item name="Release 0.6.2-incubating" href="http://helix.incubator.apache.org/site-releases/0.6.2-incubating-site/"/>
+    </breadcrumbs>
+
+    <menu name="Apache Helix">
+      <item name="Home" href="../../index.html"/>
+    </menu>
+
+    <menu name="Helix 0.6.2-incubating">
+      <item name="Introduction" href="./index.html"/>
+      <item name="Getting Helix" href="./Building.html"/>
+      <item name="Core concepts" href="./Concepts.html"/>
+      <item name="Architecture" href="./Architecture.html"/>
+      <item name="Quick Start" href="./Quickstart.html"/>
+      <item name="Tutorial" href="./Tutorial.html"/>
+      <item name="Release Notes" href="releasenotes/release-0.6.2-incubating.html"/>
+      <item name="Download" href="./download.html"/>
+    </menu>
+
+    <menu name="Recipes">
+      <item name="Distributed lock manager" href="./recipes/lock_manager.html"/>
+      <item name="Rabbit MQ consumer group" href="./recipes/rabbitmq_consumer_group.html"/>
+      <item name="Rsync replicated file store" href="./recipes/rsync_replicated_file_store.html"/>
+      <item name="Service Discovery" href="./recipes/service_discovery.html"/>
+      <item name="Distributed task DAG Execution" href="./recipes/task_dag_execution.html"/>
+    </menu>
+<!--
+    <menu ref="reports" inherit="bottom"/>
+    <menu ref="modules" inherit="bottom"/>
+
+
+    <menu name="ASF">
+      <item name="How Apache Works" href="http://www.apache.org/foundation/how-it-works.html"/>
+      <item name="Foundation" href="http://www.apache.org/foundation/"/>
+      <item name="Sponsoring Apache" href="http://www.apache.org/foundation/sponsorship.html"/>
+      <item name="Thanks" href="http://www.apache.org/foundation/thanks.html"/>
+    </menu>
+-->
+    <footer>
+      <div class="row span16"><div>Apache Helix, Apache, the Apache feather logo, and the Apache Helix project logos are trademarks of The Apache Software Foundation.
+        All other marks mentioned may be trademarks or registered trademarks of their respective owners.</div>
+        <a href="${project.url}/privacy-policy.html">Privacy Policy</a>
+      </div>
+    </footer>
+
+
+  </body>
+
+  <custom>
+    <fluidoSkin>
+      <topBarEnabled>true</topBarEnabled>
+      <!-- twitter link work only with sidebar disabled -->
+      <sideBarEnabled>true</sideBarEnabled>
+      <googleSearch></googleSearch>
+      <twitter>
+        <user>ApacheHelix</user>
+        <showUser>true</showUser>
+        <showFollowers>false</showFollowers>
+      </twitter>
+    </fluidoSkin>
+  </custom>
+
+</project>