Posted to commits@helix.apache.org by ki...@apache.org on 2013/11/20 22:12:47 UTC

[32/52] [abbrv] [HELIX-270] Include documentation for previous version on the website

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_participant.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_participant.md b/src/site/markdown/tutorial_participant.md
deleted file mode 100644
index d2812da..0000000
--- a/src/site/markdown/tutorial_participant.md
+++ /dev/null
@@ -1,105 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Participant</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Participant
-
-In this chapter, we\'ll learn how to implement a Participant, which is a primary functional component of a distributed system.
-
-
-### Start the Helix agent
-
-The Helix agent is a common component that connects each system component with the controller.
-
-It requires the following parameters:
- 
-* clusterName: A logical name to represent the group of nodes
-* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
-* instanceType: Type of the process. This can be one of the following types; in this case, use PARTICIPANT:
-    * CONTROLLER: Process that controls the cluster. Any number of controllers can be started, but only one will be active at any given time.
-    * PARTICIPANT: Process that performs the actual task in the distributed system. 
-    * SPECTATOR: Process that observes the changes in the cluster.
-    * ADMIN: To carry out system admin actions.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3. 
-
-After the Helix manager instance is created, the only thing that needs to be registered is the state model factory.
-The methods of the state model will be called when the controller sends transitions to the Participant.  In this example, we'll use the OnlineOffline factory.  Other options include:
-
-* MasterSlaveStateModelFactory
-* LeaderStandbyStateModelFactory
-* BootstrapHandler
-* _An application defined state model factory_
-
-
-```
-manager = HelixManagerFactory.getZKHelixManager(clusterName,
-                                                instanceName,
-                                                InstanceType.PARTICIPANT,
-                                                zkConnectString);
-StateMachineEngine stateMach = manager.getStateMachineEngine();
-
-// Create a state model factory that returns a state model object for each partition
-stateModelFactory = new OnlineOfflineStateModelFactory();
-// stateModelType is the name of the state model definition, e.g. "OnlineOffline"
-stateMach.registerStateModelFactory(stateModelType, stateModelFactory);
-manager.connect();
-```
-
-Helix doesn\'t know what it means to change from OFFLINE\-\-\>ONLINE or ONLINE\-\-\>OFFLINE.  The following code snippet shows where you insert your system logic for these two state transitions.
-
-```
-public class OnlineOfflineStateModelFactory extends
-        StateModelFactory<StateModel> {
-    @Override
-    public StateModel createNewStateModel(String stateUnitKey) {
-        OnlineOfflineStateModel stateModel = new OnlineOfflineStateModel();
-        return stateModel;
-    }
-    @StateModelInfo(states = "{'OFFLINE','ONLINE'}", initialState = "OFFLINE")
-    public static class OnlineOfflineStateModel extends StateModel {
-
-        @Transition(from = "OFFLINE", to = "ONLINE")
-        public void onBecomeOnlineFromOffline(Message message,
-                NotificationContext context) {
-
-            System.out.println("OnlineOfflineStateModel.onBecomeOnlineFromOffline()");
-
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-            // Application logic to handle transition                                                     //
-            // For example, you might start a service, run initialization, etc                            //
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-        }
-
-        @Transition(from = "ONLINE", to = "OFFLINE")
-        public void onBecomeOfflineFromOnline(Message message,
-                NotificationContext context) {
-
-            System.out.println("OnlineOfflineStateModel.onBecomeOfflineFromOnline()");
-
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-            // Application logic to handle transition                                                     //
-            // For example, you might shutdown a service, log this event, or change monitoring settings   //
-            ////////////////////////////////////////////////////////////////////////////////////////////////
-        }
-    }
-}
-```
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_propstore.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_propstore.md b/src/site/markdown/tutorial_propstore.md
deleted file mode 100644
index 377967f..0000000
--- a/src/site/markdown/tutorial_propstore.md
+++ /dev/null
@@ -1,34 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Application Property Store</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Application Property Store
-
-In this chapter, we\'ll learn how to use the application property store.
-
-### Property Store
-
-It is common that an application needs support for distributed, shared data structures.  Helix uses Zookeeper to store the application data and hence provides notifications when the data changes.
-
-While you could use Zookeeper directly, Helix provides a cached, write-through view of the data. This is far more efficient than reading from Zookeeper on every access.
-
-See [HelixManager.getHelixPropertyStore](./apidocs/reference/org/apache/helix/store/package-summary.html) for details.
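-
-For illustration, here is a minimal sketch of reading and writing application data through the property store. It assumes a connected `manager`; the path and record names are hypothetical, and exact method signatures may vary by release.
-
-```
-HelixPropertyStore<ZNRecord> store = manager.getHelixPropertyStore();
-
-// Write a record under an application-chosen path (stored as a persistent znode)
-ZNRecord record = new ZNRecord("myConfig");
-record.setSimpleField("setting", "value");
-store.set("/myApp/config", record, AccessOption.PERSISTENT);
-
-// Read it back; repeated reads are served from the write-through cache
-ZNRecord fetched = store.get("/myApp/config", null, AccessOption.PERSISTENT);
-```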

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_rebalance.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_rebalance.md b/src/site/markdown/tutorial_rebalance.md
deleted file mode 100644
index 8f42a5a..0000000
--- a/src/site/markdown/tutorial_rebalance.md
+++ /dev/null
@@ -1,181 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Rebalancing Algorithms</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Rebalancing Algorithms
-
-The placement of partitions in a distributed system is essential for the reliability and scalability of the system.  For example, when a node fails, it is important that the partitions hosted on that node are reallocated evenly among the remaining nodes. Consistent hashing is one such algorithm that can satisfy this guarantee.  Helix provides a variant of consistent hashing based on the RUSH algorithm, among others.
-
-This means that, given the number of partitions, replicas, and nodes, Helix automatically assigns partitions to nodes such that:
-
-* Each node has the same number of partitions
-* Replicas of the same partition do not stay on the same node
-* When a node fails, the partitions will be equally distributed among the remaining nodes
-* When new nodes are added, the number of partitions moved is minimized while still satisfying the above criteria
-
-Helix employs a rebalancing algorithm to compute the _ideal state_ of the system.  When the _current state_ differs from the _ideal state_, Helix uses the _ideal state_ as the target state of the system and computes the appropriate transitions needed to bring the system to it.
-
-Helix makes it easy to perform this operation, while giving you control over the algorithm.  In this section, we\'ll see how to implement the desired behavior.
-
-Helix has four options for rebalancing, in increasing order of customization by the system builder:
-
-* FULL_AUTO
-* SEMI_AUTO
-* CUSTOMIZED
-* USER_DEFINED
-
-```
-            |FULL_AUTO     |  SEMI_AUTO | CUSTOMIZED|  USER_DEFINED  |
-            ---------------------------------------------------------|
-   LOCATION | HELIX        |  APP       |  APP      |      APP       |
-            ---------------------------------------------------------|
-      STATE | HELIX        |  HELIX     |  APP      |      APP       |
-            ----------------------------------------------------------
-```
-
-
-### FULL_AUTO
-
-When the rebalance mode is set to FULL_AUTO, Helix controls both the location and the state of the replicas. This option is useful for applications where creating a replica is not expensive.
-
-For example, consider this system that uses a MasterSlave state model, with 3 partitions and 2 replicas in the ideal state.
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "REBALANCE_MODE" : "FULL_AUTO",
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  }
-  "listFields" : {
-    "MyResource_0" : [],
-    "MyResource_1" : [],
-    "MyResource_2" : []
-  },
-  "mapFields" : {
-  }
-}
-```
-
-If there are 3 nodes in the cluster, then Helix will balance the masters and slaves equally.  The ideal state is therefore:
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  },
-  "mapFields" : {
-    "MyResource_0" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE",
-    },
-    "MyResource_1" : {
-      "N2" : "MASTER",
-      "N3" : "SLAVE",
-    },
-    "MyResource_2" : {
-      "N3" : "MASTER",
-      "N1" : "SLAVE",
-    }
-  }
-}
-```
-
-Another typical example is evenly distributing a group of tasks among the currently healthy processes. For example, if there are 60 tasks and 4 nodes, Helix assigns 15 tasks to each node. 
-When one node fails, Helix redistributes its 15 tasks to the remaining 3 nodes, resulting in a balanced 20 tasks per node. Similarly, if a node is added, Helix re-allocates 3 tasks from each of the 4 nodes to the 5th node, resulting in a balanced distribution of 12 tasks per node.
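-
-The ideal state above is normally generated by Helix rather than written by hand. As a rough sketch (the `admin` handle, resource name, and counts are illustrative), a FULL_AUTO resource can be created and rebalanced through HelixAdmin:
-
-```
-HelixAdmin admin = new ZKHelixAdmin(zkConnectString);
-
-// Create a resource with 3 partitions using the MasterSlave state model in FULL_AUTO mode
-admin.addResource(clusterName, "MyResource", 3, "MasterSlave",
-    RebalanceMode.FULL_AUTO.toString());
-
-// Ask Helix to compute an assignment with 2 replicas per partition
-admin.rebalance(clusterName, "MyResource", 2);
-```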
-
-#### SEMI_AUTO
-
-When the application needs to control the placement of the replicas, use the SEMI_AUTO rebalance mode.
-
-Example: In the ideal state below, the partition \'MyResource_0\' is constrained to be placed only on node1 or node2.  The choice of _state_ is still controlled by Helix.  That means MyResource_0.MASTER could be on node1 and MyResource_0.SLAVE on node2, or vice-versa but neither would be placed on node3.
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "REBALANCE_MODE" : "SEMI_AUTO",
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  }
-  "listFields" : {
-    "MyResource_0" : [node1, node2],
-    "MyResource_1" : [node2, node3],
-    "MyResource_2" : [node3, node1]
-  },
-  "mapFields" : {
-  }
-}
-```
-
-The MasterSlave state model requires that a partition has exactly one MASTER at all times, and the other replicas should be SLAVEs.  In this simple example with 2 replicas per partition, there would be one MASTER and one SLAVE.  Upon failover, a SLAVE has to assume mastership, and a new SLAVE will be generated.
-
-In this mode, when node1 fails, the partition is _not_ moved from node1 to node3 as it would be in FULL_AUTO mode. Instead, Helix will decide to change the state of MyResource_0 on node2 from SLAVE to MASTER, based on the system constraints.
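-
-As a sketch of how such preference lists might be set up programmatically (the `admin` handle and the node and resource names are assumptions), an application can write the listFields of the ideal state directly:
-
-```
-IdealState idealState = admin.getResourceIdealState(clusterName, "MyResource");
-idealState.setRebalanceMode(RebalanceMode.SEMI_AUTO);
-
-// Constrain each partition to its preferred nodes; Helix still chooses the states
-idealState.getRecord().setListField("MyResource_0", Arrays.asList("node1", "node2"));
-idealState.getRecord().setListField("MyResource_1", Arrays.asList("node2", "node3"));
-idealState.getRecord().setListField("MyResource_2", Arrays.asList("node3", "node1"));
-
-admin.setResourceIdealState(clusterName, "MyResource", idealState);
-```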
-
-#### CUSTOMIZED
-
-Helix offers a third mode called CUSTOMIZED, in which the application controls the placement _and_ state of each replica. The application needs to implement a callback interface that Helix invokes when the cluster state changes. 
-Within this callback, the application can recompute the ideal state. Helix will then issue the appropriate transitions so that the _ideal state_ and the _current state_ converge.
-
-Here\'s an example, again with 3 partitions, 2 replicas per partition, and the MasterSlave state model:
-
-```
-{
-  "id" : "MyResource",
-  "simpleFields" : {
-    "REBALANCE_MODE" : "CUSTOMIZED",
-    "NUM_PARTITIONS" : "3",
-    "REPLICAS" : "2",
-    "STATE_MODEL_DEF_REF" : "MasterSlave",
-  },
-  "mapFields" : {
-    "MyResource_0" : {
-      "N1" : "MASTER",
-      "N2" : "SLAVE",
-    },
-    "MyResource_1" : {
-      "N2" : "MASTER",
-      "N3" : "SLAVE",
-    },
-    "MyResource_2" : {
-      "N3" : "MASTER",
-      "N1" : "SLAVE",
-    }
-  }
-}
-```
-
-Suppose the current state of the system is 'MyResource_0' -> {N1:MASTER, N2:SLAVE} and the application changes the ideal state to 'MyResource_0' -> {N1:SLAVE,N2:MASTER}. While the application decides which node is MASTER and which is SLAVE, Helix will not blindly issue MASTER-->SLAVE to N1 and SLAVE-->MASTER to N2 in parallel, since that might result in a transient state where both N1 and N2 are masters, which violates the MasterSlave constraint that there is exactly one MASTER at a time.  Helix will first issue MASTER-->SLAVE to N1 and after it is completed, it will issue SLAVE-->MASTER to N2. 
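-
-As a rough sketch (the `admin` handle and the names are illustrative), the application could publish such a placement from its callback by writing the mapFields of the ideal state:
-
-```
-IdealState idealState = admin.getResourceIdealState(clusterName, "MyResource");
-idealState.setRebalanceMode(RebalanceMode.CUSTOMIZED);
-
-// The application decides both the location and the state of every replica
-Map<String, String> replicaMap = new HashMap<String, String>();
-replicaMap.put("N1", "SLAVE");
-replicaMap.put("N2", "MASTER");
-idealState.getRecord().setMapField("MyResource_0", replicaMap);
-
-admin.setResourceIdealState(clusterName, "MyResource", idealState);
-```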
-
-#### USER_DEFINED
-
-For maximum flexibility, Helix exposes an interface that allows applications to plug in custom rebalancing logic. By providing the name of a class that implements the Rebalancer interface, the application ensures that Helix automatically calls its rebalancing method whenever there is a change to the live participants in the cluster. For more, see [User-Defined Rebalancer](./tutorial_user_def_rebalancer.html).
-
-#### Backwards Compatibility
-
-In previous versions, FULL_AUTO was called AUTO_REBALANCE and SEMI_AUTO was called AUTO. Furthermore, they were presented as the IDEAL_STATE_MODE. Helix supports both IDEAL_STATE_MODE and REBALANCE_MODE, but IDEAL_STATE_MODE is now deprecated and may be phased out in future versions.

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_spectator.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_spectator.md b/src/site/markdown/tutorial_spectator.md
deleted file mode 100644
index 24c1cf4..0000000
--- a/src/site/markdown/tutorial_spectator.md
+++ /dev/null
@@ -1,76 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Spectator</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Spectator
-
-Next, we\'ll learn how to implement a Spectator.  Typically, a spectator needs to react to changes within the distributed system.  Examples: a client that needs to know where to send a request, a topic consumer in a consumer group.  The spectator is automatically informed of changes in the _external state_ of the cluster, but it does not have to add any code to keep track of other components in the system.
-
-### Start the Helix agent
-
-As with a Participant, the Helix agent is the common component that connects each system component with the controller.
-
-It requires the following parameters:
-
-* clusterName: A logical name to represent the group of nodes
-* instanceName: A logical name of the process creating the manager instance. Generally this is host:port.
-* instanceType: Type of the process. This can be one of the following types; in this case, use SPECTATOR:
-    * CONTROLLER: Process that controls the cluster. Any number of controllers can be started, but only one will be active at any given time.
-    * PARTICIPANT: Process that performs the actual task in the distributed system.
-    * SPECTATOR: Process that observes the changes in the cluster.
-    * ADMIN: To carry out system admin actions.
-* zkConnectString: Connection string to Zookeeper. This is of the form host1:port1,host2:port2,host3:port3.
-
-After the Helix manager instance is created, the only thing that needs to be registered is the listener.  When the ExternalView changes, the listener is notified.
-
-### Spectator Code
-
-A spectator observes the cluster and is notified when the state of the system changes. Helix consolidates the state of the entire cluster in one znode called ExternalView.
-Helix provides a default implementation, RoutingTableProvider, that caches the cluster state and updates it when there is a change in the cluster.
-
-```
-manager = HelixManagerFactory.getZKHelixManager(clusterName,
-                                                instanceName,
-                                                InstanceType.SPECTATOR,
-                                                zkConnectString);
-manager.connect();
-RoutingTableProvider routingTableProvider = new RoutingTableProvider();
-manager.addExternalViewChangeListener(routingTableProvider);
-```
-
-In the following code snippet, the application sends the request to a valid instance by interrogating the external view.  Suppose the desired resource for this request is in the partition myDB_1.
-
-```
-// instances = routingTableProvider.getInstances("RESOURCE_NAME", "PARTITION_NAME", "PARTITION_STATE");
-instances = routingTableProvider.getInstances("myDB", "myDB_1", "ONLINE");
-
-////////////////////////////////////////////////////////////////////////////////////////////////
-// Application-specific code to send a request to one of the instances                        //
-////////////////////////////////////////////////////////////////////////////////////////////////
-
-theInstance = instances.get(0);  // should choose an instance and throw an exception if none are available
-result = theInstance.sendRequest(yourApplicationRequest, responseObject);
-
-```
-
-When the external view changes, the application needs to react by sending requests to a different instance.  
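-
-If the cached routing table is not sufficient, a spectator can also register its own listener and react to external view changes directly. Below is a minimal sketch; the class name and the reaction logic are illustrative.
-
-```
-public class MyRoutingLogic implements ExternalViewChangeListener {
-  @Override
-  public void onExternalViewChange(List<ExternalView> externalViewList,
-      NotificationContext changeContext) {
-    // Recompute the application's routing decisions from the new external view
-    System.out.println("External view changed for " + externalViewList.size() + " resources");
-  }
-}
-
-// Registered the same way as the RoutingTableProvider
-manager.addExternalViewChangeListener(new MyRoutingLogic());
-```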
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_state.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_state.md b/src/site/markdown/tutorial_state.md
deleted file mode 100644
index 4f7b1b5..0000000
--- a/src/site/markdown/tutorial_state.md
+++ /dev/null
@@ -1,131 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - State Machine Configuration</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): State Machine Configuration
-
-In this chapter, we\'ll learn about the state models provided by Helix, and how to create your own custom state model.
-
-## State Models
-
-Helix comes with 3 default state models that are commonly used.  It is possible to have multiple state models in a cluster. 
-Every resource that is added should be configured to use a state model that governs its _ideal state_.
-
-### MASTER-SLAVE
-
-* 3 states: OFFLINE, SLAVE, MASTER
-* Maximum number of masters: 1
-* Slaves are based on the replication factor. The replication factor can be specified while adding the resource.
-
-
-### ONLINE-OFFLINE
-
-* Has 2 states: OFFLINE and ONLINE.  This simple state model is a good starting point for most applications.
-
-### LEADER-STANDBY
-
-* 1 Leader and multiple stand-bys.  The idea is that exactly one leader accomplishes a designated task; the stand-bys are ready to take over if the leader fails.
-
-## Constraints
-
-In addition to the state machine configuration, one can specify the constraints of states and transitions.
-
-For example, one can say:
-
-* MASTER:1
-<br/>Maximum number of replicas in MASTER state at any time is 1
-
-* OFFLINE-SLAVE:5 
-<br/>Maximum number of OFFLINE-SLAVE transitions that can happen concurrently in the system is 5 in this example.
-
-### Dynamic State Constraints
-
-We also support two dynamic upper bounds for the number of replicas in each state:
-
-* N: The number of replicas in the state is at most the number of live participants in the cluster
-* R: The number of replicas in the state is at most the specified replica count for the partition
-
-### State Priority
-
-Helix uses a greedy approach to satisfy the state constraints. For example, if the state machine configuration says it needs 1 MASTER and 2 SLAVES, but only 1 node is active, Helix must promote it to MASTER. This behavior is achieved by providing the state priority list as \[MASTER, SLAVE\].
-
-### State Transition Priority
-
-Helix tries to fire as many transitions as possible in parallel to reach the stable state without violating constraints. By default, Helix simply sorts the transitions alphabetically and fires as many as it can without violating the constraints. You can control this by overriding the priority order.
-
-## Special States
-
-### DROPPED
-
-The DROPPED state is used to signify a replica that was served by a given participant, but is no longer served. This allows Helix and its participants to effectively clean up. There are two requirements that every new state model should follow with respect to the DROPPED state:
-
-* The DROPPED state must be defined
-* There must be a path to DROPPED for every state in the model
-
-### ERROR
-
-The ERROR state is used whenever the participant serving a partition encounters an error and cannot continue to serve the partition. HelixAdmin has \"reset\" functionality to allow participants to recover from the ERROR state.
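-
-For example, a sketch of resetting a replica that is stuck in ERROR (the `admin` handle and the instance, resource, and partition names are illustrative):
-
-```
-// Ask Helix to reset the ERROR replica back to its initial state
-admin.resetPartition(clusterName, "localhost_12913", "MyResource",
-    Arrays.asList("MyResource_0"));
-```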
-
-## Annotated Example
-
-Below is a complete definition of a Master-Slave state model. Notice the fields marked REQUIRED; these are essential for any state model definition.
-
-```
-StateModelDefinition stateModel = new StateModelDefinition.Builder("MasterSlave")
-  // OFFLINE is the state that the system starts in (initial state is REQUIRED)
-  .initialState("OFFLINE")
-
-  // Lowest number here indicates highest priority, no value indicates lowest priority
-  .addState("MASTER", 1)
-  .addState("SLAVE", 2)
-  .addState("OFFLINE")
-
-  // Note the special inclusion of the DROPPED state (REQUIRED)
-  .addState(HelixDefinedState.DROPPED.toString())
-
-  // No more than one master allowed
-  .upperBound("MASTER", 1)
-
-  // R indicates an upper bound of number of replicas for each partition
-  .dynamicUpperBound("SLAVE", "R")
-
-  // Add some high-priority transitions
-  .addTransition("SLAVE", "MASTER", 1)
-  .addTransition("OFFLINE", "SLAVE", 2)
-
-  // Using the same priority value indicates that these transitions can fire in any order
-  .addTransition("MASTER", "SLAVE", 3)
-  .addTransition("SLAVE", "OFFLINE", 3)
-
-  // Not specifying a value defaults to lowest priority
-  // Notice the inclusion of the OFFLINE to DROPPED transition
-  // Since every state has a path to OFFLINE, they each now have a path to DROPPED (REQUIRED)
-  .addTransition("OFFLINE", HelixDefinedState.DROPPED.toString())
-
-  // Create the StateModelDefinition instance
-  .build();
-
-  // Use the isValid() function to make sure the StateModelDefinition will work without issues
-  Assert.assertTrue(stateModel.isValid());
-```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_throttling.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_throttling.md b/src/site/markdown/tutorial_throttling.md
deleted file mode 100644
index 2317cf1..0000000
--- a/src/site/markdown/tutorial_throttling.md
+++ /dev/null
@@ -1,38 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - Throttling</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): Throttling
-
-In this chapter, we\'ll learn how to control the parallel execution of cluster tasks.  Only a centralized cluster manager with global knowledge is capable of coordinating this decision.
-
-### Throttling
-
-Since all state changes in the system are triggered through transitions, Helix can control the number of transitions that can happen in parallel. Some of the transitions may be lightweight, but some might involve moving data, which is quite expensive from a network and IOPS perspective.
-
-Helix allows applications to set a threshold on transitions. The threshold can be set at multiple scopes (an illustrative configuration sketch follows the list):
-
-* MessageType, e.g. STATE_TRANSITION
-* TransitionType, e.g. SLAVE-MASTER
-* Resource, e.g. database
-* Node, i.e. per-node maximum transitions in parallel
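-
-As an illustration only (the constraint id, attribute names, and values below are assumptions; check the HelixAdmin and ConstraintItemBuilder APIs for your release), a message constraint limiting concurrent state transitions per instance might be configured like this:
-
-```
-// Allow at most 3 concurrent STATE_TRANSITION messages per instance (illustrative values)
-ConstraintItemBuilder builder = new ConstraintItemBuilder();
-builder.addConstraintAttribute("MESSAGE_TYPE", "STATE_TRANSITION")
-       .addConstraintAttribute("INSTANCE", ".*")
-       .addConstraintAttribute("CONSTRAINT_VALUE", "3");
-admin.setConstraint(clusterName, ConstraintType.MESSAGE_CONSTRAINT,
-    "MaxTransitionsPerInstance", builder.build());
-```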
-

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_user_def_rebalancer.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_user_def_rebalancer.md b/src/site/markdown/tutorial_user_def_rebalancer.md
deleted file mode 100644
index 44b202a..0000000
--- a/src/site/markdown/tutorial_user_def_rebalancer.md
+++ /dev/null
@@ -1,201 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - User-Defined Rebalancing</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): User-Defined Rebalancing
-
-Even though Helix can compute both the location and the state of replicas internally using a default fully-automatic rebalancer, specific applications may require rebalancing strategies that optimize for different requirements. Helix therefore allows applications to plug in arbitrary rebalancer algorithms that implement a provided interface. One of the main design goals of Helix is to provide maximum flexibility to any distributed application, so it allows applications to fully implement the rebalancer, which is the core constraint solver in the system, if the application developer so chooses.
-
-Whenever the state of the cluster changes, as is the case when participants join or leave the cluster, Helix automatically calls the rebalancer to compute a new mapping of all the replicas in the resource. When using a pluggable rebalancer, the only required step is to register it with Helix. Subsequently, no additional bootstrapping steps are necessary. Helix uses reflection to look up and load the class dynamically at runtime. As a result, it is also technically possible to change the rebalancing strategy used at any time.
-
-The Rebalancer interface is as follows:
-
-```
-ResourceAssignment computeResourceMapping(final Resource resource,
-      final IdealState currentIdealState, final CurrentStateOutput currentStateOutput,
-      final ClusterDataCache clusterData);
-```
-The first parameter is the resource to rebalance, the second is pre-existing ideal mappings, the third is a snapshot of the actual placements and state assignments, and the fourth is a full cache of all of the cluster data available to Helix. Internally, Helix implements the same interface for its own rebalancing routines, so a user-defined rebalancer will be cognizant of the same information about the cluster as an internal implementation. Helix strives to provide applications the ability to implement algorithms that may require a large portion of the entire state of the cluster to make the best placement and state assignment decisions possible.
-
-A ResourceMapping is a full representation of the location and the state of each replica of each partition of a given resource. This is a simple representation of the placement that the algorithm believes is the best possible. If the placement meets all defined constraints, this is what will become the actual state of the distributed system.
-
-### Specifying a Rebalancer
-For implementations that set up the cluster through existing code, the following HelixAdmin calls will update the Rebalancer class:
-
-```
-IdealState idealState = helixAdmin.getResourceIdealState(clusterName, resourceName);
-idealState.setRebalanceMode(RebalanceMode.USER_DEFINED);
-idealState.setRebalancerClassName(className);
-helixAdmin.setResourceIdealState(clusterName, resourceName, idealState);
-```
-There are two key fields to set to specify that a pluggable rebalancer should be used. First, the rebalance mode should be set to USER_DEFINED, and second, the rebalancer class name should be set to a fully-qualified class name (package plus class name) that implements Rebalancer and is available at runtime. Without the USER_DEFINED mode, the user-defined rebalancer class will not be used even if it is specified. Conversely, if the mode is USER_DEFINED, Helix will not attempt to rebalance the resources through its standard routines, regardless of whether or not a rebalancer class is registered.
-
-Alternatively, the rebalancer class name can be specified in a YAML file representing the cluster configuration. The requirements are the same, but the representation is more compact. Below are the first few lines of an example YAML file. To see a full YAML specification, see the [YAML tutorial](./tutorial_yaml.html).
-
-```
-clusterName: lock-manager-custom-rebalancer # unique name for the cluster
-resources:
-  - name: lock-group # unique resource name
-    rebalancer: # we will provide our own rebalancer
-      mode: USER_DEFINED
-      class: domain.project.helix.rebalancer.UserDefinedRebalancerClass
-...
-```
-
-### Example
-We demonstrate plugging in a simple user-defined rebalancer as part of a revisit of the [distributed lock manager](./recipes/user_def_rebalancer.html) example. It includes a functional Rebalancer implementation, as well as the entire YAML file used to define the cluster.
-
-Consider the case where partitions are locks in a lock manager and 6 locks are to be distributed evenly to a set of participants, and only one participant can hold each lock. We can define a rebalancing algorithm that simply takes the modulus of the lock number and the number of participants to evenly distribute the locks across participants. Helix allows capping the number of partitions a participant can accept, but since locks are lightweight, we do not need to define a restriction in this case. The following is a succinct implementation of this algorithm.
-
-```
-@Override
-public ResourceAssignment computeResourceMapping(Resource resource, IdealState currentIdealState,
-    CurrentStateOutput currentStateOutput, ClusterDataCache clusterData) {
-  // Initialize an empty mapping of locks to participants
-  ResourceAssignment assignment = new ResourceAssignment(resource.getResourceName());
-
-  // Get the list of live participants in the cluster
-  List<String> liveParticipants = new ArrayList<String>(clusterData.getLiveInstances().keySet());
-
-  // Get the state model (should be a simple lock/unlock model) and the highest-priority state
-  String stateModelName = currentIdealState.getStateModelDefRef();
-  StateModelDefinition stateModelDef = clusterData.getStateModelDef(stateModelName);
-  if (stateModelDef.getStatesPriorityList().size() < 1) {
-    LOG.error("Invalid state model definition. There should be at least one state.");
-    return assignment;
-  }
-  String lockState = stateModelDef.getStatesPriorityList().get(0);
-
-  // Count the number of participants allowed to lock each lock
-  String stateCount = stateModelDef.getNumInstancesPerState(lockState);
-  int lockHolders = 0;
-  try {
-    // a numeric value is a custom-specified number of participants allowed to lock the lock
-    lockHolders = Integer.parseInt(stateCount);
-  } catch (NumberFormatException e) {
-    LOG.error("Invalid state model definition. The lock state does not have a valid count");
-    return assignment;
-  }
-
-  // Fairly assign the lock state to the participants using a simple mod-based sequential
-  // assignment. For instance, if each lock can be held by 3 participants, lock 0 would be held
-  // by participants (0, 1, 2), lock 1 would be held by (1, 2, 3), and so on, wrapping around the
-  // number of participants as necessary.
-  // This assumes a simple lock-unlock model where the only state of interest is which nodes have
-  // acquired each lock.
-  int i = 0;
-  for (Partition partition : resource.getPartitions()) {
-    Map<String, String> replicaMap = new HashMap<String, String>();
-    for (int j = i; j < i + lockHolders; j++) {
-      int participantIndex = j % liveParticipants.size();
-      String participant = liveParticipants.get(participantIndex);
-      // enforce that a participant can only have one instance of a given lock
-      if (!replicaMap.containsKey(participant)) {
-        replicaMap.put(participant, lockState);
-      }
-    }
-    assignment.addReplicaMap(partition, replicaMap);
-    i++;
-  }
-  return assignment;
-}
-```
-
-Here is the ResourceMapping emitted by the user-defined rebalancer for a 3-participant system whenever there is a change to the set of participants.
-
-* Participant_A joins
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_A": "LOCKED"},
-  "lock_2": { "Participant_A": "LOCKED"},
-  "lock_3": { "Participant_A": "LOCKED"},
-  "lock_4": { "Participant_A": "LOCKED"},
-  "lock_5": { "Participant_A": "LOCKED"},
-}
-```
-
-A ResourceMapping maps, for each resource, each partition to the participants serving its replicas and the state of each replica. The state model is a simple LOCKED/RELEASED model, so participant A holds all lock partitions in the LOCKED state.
-
-* Participant_B joins
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_B": "LOCKED"},
-  "lock_2": { "Participant_A": "LOCKED"},
-  "lock_3": { "Participant_B": "LOCKED"},
-  "lock_4": { "Participant_A": "LOCKED"},
-  "lock_5": { "Participant_B": "LOCKED"},
-}
-```
-
-Now that there are two participants, the simple mod-based function assigns every other lock to the second participant. On any system change, the rebalancer is invoked so that the application can define how to redistribute its resources.
-
-* Participant_C joins (steady state)
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_B": "LOCKED"},
-  "lock_2": { "Participant_C": "LOCKED"},
-  "lock_3": { "Participant_A": "LOCKED"},
-  "lock_4": { "Participant_B": "LOCKED"},
-  "lock_5": { "Participant_C": "LOCKED"},
-}
-```
-
-This is the steady state of the system. Notice that four of the six locks now have a different owner. That is because of the naïve modulus-based assignment approach used by the user-defined rebalancer. However, the interface is flexible enough to allow you to employ consistent hashing or any other scheme if minimal movement is a system requirement.
-
-* Participant_B fails
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_C": "LOCKED"},
-  "lock_2": { "Participant_A": "LOCKED"},
-  "lock_3": { "Participant_C": "LOCKED"},
-  "lock_4": { "Participant_A": "LOCKED"},
-  "lock_5": { "Participant_C": "LOCKED"},
-}
-```
-
-On any node failure, as in the case of node addition, the rebalancer is invoked automatically so that it can generate a new mapping as a response to the change. Helix ensures that the Rebalancer has the opportunity to reassign locks as required by the application.
-
-* Participant_B (or the replacement for the original Participant_B) rejoins
-
-```
-{
-  "lock_0": { "Participant_A": "LOCKED"},
-  "lock_1": { "Participant_B": "LOCKED"},
-  "lock_2": { "Participant_C": "LOCKED"},
-  "lock_3": { "Participant_A": "LOCKED"},
-  "lock_4": { "Participant_B": "LOCKED"},
-  "lock_5": { "Participant_C": "LOCKED"},
-}
-```
-
-The rebalancer was invoked once again and the resulting ResourceMapping reflects the steady state.
-
-### Caveats
-- The rebalancer class must be available at runtime, or else Helix will not attempt to rebalance at all
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/markdown/tutorial_yaml.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/tutorial_yaml.md b/src/site/markdown/tutorial_yaml.md
deleted file mode 100644
index 0f8e0cc..0000000
--- a/src/site/markdown/tutorial_yaml.md
+++ /dev/null
@@ -1,102 +0,0 @@
-<!---
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<head>
-  <title>Tutorial - YAML Cluster Setup</title>
-</head>
-
-# [Helix Tutorial](./Tutorial.html): YAML Cluster Setup
-
-As an alternative to using Helix Admin to set up the cluster, its resources, constraints, and the state model, Helix supports bootstrapping a cluster configuration based on a YAML file. Below is an annotated example of such a file for a simple distributed lock manager where a lock can only be LOCKED or RELEASED, and each lock only allows a single participant to hold it in the LOCKED state.
-
-```
-clusterName: lock-manager-custom-rebalancer # unique name for the cluster (required)
-resources:
-  - name: lock-group # unique resource name (required)
-    rebalancer: # required
-      mode: USER_DEFINED # required - USER_DEFINED means we will provide our own rebalancer
-      class: org.apache.helix.userdefinedrebalancer.LockManagerRebalancer # required for USER_DEFINED
-    partitions:
-      count: 12 # number of partitions for the resource (default is 1)
-      replicas: 1 # number of replicas per partition (default is 1)
-    stateModel:
-      name: lock-unlock # model name (required)
-      states: [LOCKED, RELEASED, DROPPED] # the list of possible states (required if model not built-in)
-      transitions: # the list of possible transitions (required if model not built-in)
-        - name: Unlock
-          from: LOCKED
-          to: RELEASED
-        - name: Lock
-          from: RELEASED
-          to: LOCKED
-        - name: DropLock
-          from: LOCKED
-          to: DROPPED
-        - name: DropUnlock
-          from: RELEASED
-          to: DROPPED
-        - name: Undrop
-          from: DROPPED
-          to: RELEASED
-      initialState: RELEASED # (required if model not built-in)
-    constraints:
-      state:
-        counts: # maximum number of replicas of a partition that can be in each state (required if model not built-in)
-          - name: LOCKED
-            count: "1"
-          - name: RELEASED
-            count: "-1"
-          - name: DROPPED
-            count: "-1"
-        priorityList: [LOCKED, RELEASED, DROPPED] # states in order of priority (all priorities equal if not specified)
-      transition: # transitions priority to enforce order that transitions occur
-        priorityList: [Unlock, Lock, Undrop, DropUnlock, DropLock] # all priorities equal if not specified
-participants: # list of nodes that can serve replicas (optional if dynamic joining is active, required otherwise)
-  - name: localhost_12001
-    host: localhost
-    port: 12001
-  - name: localhost_12002
-    host: localhost
-    port: 12002
-  - name: localhost_12003
-    host: localhost
-    port: 12003
-```
-
-Using a file like the one above, the cluster can be set up either with the command line:
-
-```
-incubator-helix/helix-core/target/helix-core/pkg/bin/YAMLClusterSetup.sh localhost:2199 lock-manager-config.yaml
-```
-
-or with code:
-
-```
-YAMLClusterSetup setup = new YAMLClusterSetup(zkAddress);
-InputStream input =
-    Thread.currentThread().getContextClassLoader()
-        .getResourceAsStream("lock-manager-config.yaml");
-YAMLClusterSetup.YAMLClusterConfig config = setup.setupCluster(input);
-```
-
-Some notes:
-
-- A rebalancer class is only required for the USER_DEFINED mode. It is ignored otherwise.
-
-- Built-in state models, like OnlineOffline, LeaderStandby, and MasterSlave, or state models that have already been added, only require a name for stateModel. If partition and/or replica counts are not provided, a value of 1 is assumed.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/resources/images/PFS-Generic.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/PFS-Generic.png b/src/site/resources/images/PFS-Generic.png
deleted file mode 100644
index 7eea3a0..0000000
Binary files a/src/site/resources/images/PFS-Generic.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/resources/images/RSYNC_BASED_PFS.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/RSYNC_BASED_PFS.png b/src/site/resources/images/RSYNC_BASED_PFS.png
deleted file mode 100644
index 0cc55ae..0000000
Binary files a/src/site/resources/images/RSYNC_BASED_PFS.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-helix/blob/150ce693/src/site/site.xml
----------------------------------------------------------------------
diff --git a/src/site/site.xml b/src/site/site.xml
index 2f3ee77..2b4a64f 100644
--- a/src/site/site.xml
+++ b/src/site/site.xml
@@ -25,7 +25,8 @@
     <href>http://incubator.apache.org/</href>
   </bannerRight>
 
-  <publishDate position="right"/>
+  <publishDate position="none"/>
+  <version position="none"/>
 
   <skin>
     <groupId>org.apache.maven.skins</groupId>
@@ -56,27 +57,28 @@
       <item name="Apache Helix" href="http://helix.incubator.apache.org/"/>
     </breadcrumbs>
 
-    <menu name="Helix">
+    <menu name="Apache Helix">
       <item name="Introduction" href="./index.html"/>
       <item name="Core concepts" href="./Concepts.html"/>
       <item name="Architecture" href="./Architecture.html"/>
-      <item name="Quick Start" href="./Quickstart.html"/>
-      <item name="Tutorial" href="./Tutorial.html"/>
-      <item name="release ${currentRelease}" href="releasenotes/release-${currentRelease}.html"/>
-      <item name="Download" href="./download.html"/>
-      <item name="IRC" href="./IRC.html"/>
+      <item name="Publications" href="./Publications.html"/>
+    </menu>
+
+    <menu name="Helix 0.7.0-incubating">
+      <item name="Quick Start" href="./site-releases/0.7.0-incubating-site/Quickstart.html"/>
+      <item name="Tutorial" href="./site-releases/0.7.0-incubating-site/Tutorial.html"/>
+      <item name="Download" href="./site-releases/0.7.0-incubating-site/download.html"/>
     </menu>
 
-    <menu name="Recipes">
-      <item name="Distributed lock manager" href="./recipes/lock_manager.html"/>
-      <item name="Rabbit MQ consumer group" href="./recipes/rabbitmq_consumer_group.html"/>
-      <item name="Rsync replicated file store" href="./recipes/rsync_replicated_file_store.html"/>
-      <item name="Service Discovery" href="./recipes/service_discovery.html"/>
-      <item name="Distributed task DAG Execution" href="./recipes/task_dag_execution.html"/>
-      <item name="User-Defined Rebalancer Example" href="./recipes/user_def_rebalancer.html"/>
+    <menu name="Releases">
+      <item name="0.7.0-incubating" href="./site-releases/0.7.0-incubating-site/index.html"/>
+      <item name="0.6.2-incubating" href="./site-releases/0.6.2-incubating-site/index.html"/>
+      <item name="0.6.1-incubating" href="./site-releases/0.6.1-incubating-site/index.html"/>
+      <item name="trunk" href="./site-releases/trunk-site/index.html"/>
     </menu>
 
     <menu name="Get Involved">
+      <item name="IRC" href="./IRC.html"/>
       <item name="Mailing Lists" href="mail-lists.html"/>
       <item name="Issues" href="issue-tracking.html"/>
       <item name="Team" href="team-list.html"/>
@@ -109,14 +111,14 @@
   <custom>
     <fluidoSkin>
       <topBarEnabled>true</topBarEnabled>
-      <!-- twitter link work only with sidebar disabled -->
-      <sideBarEnabled>false</sideBarEnabled>
       <googleSearch></googleSearch>
       <twitter>
         <user>ApacheHelix</user>
         <showUser>true</showUser>
         <showFollowers>false</showFollowers>
       </twitter>
+      <!-- twitter link work only with sidebar disabled -->
+      <sideBarEnabled>true</sideBarEnabled>
     </fluidoSkin>
   </custom>