Posted to commits@distributedlog.apache.org by si...@apache.org on 2016/09/13 07:34:34 UTC

[16/23] incubator-distributedlog git commit: DL-3: Move distributedlog website to apache

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/fonts/bootstrap/glyphicons-halflings-regular.woff
----------------------------------------------------------------------
diff --git a/docs/fonts/bootstrap/glyphicons-halflings-regular.woff b/docs/fonts/bootstrap/glyphicons-halflings-regular.woff
new file mode 100755
index 0000000..9e61285
Binary files /dev/null and b/docs/fonts/bootstrap/glyphicons-halflings-regular.woff differ

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/fonts/bootstrap/glyphicons-halflings-regular.woff2
----------------------------------------------------------------------
diff --git a/docs/fonts/bootstrap/glyphicons-halflings-regular.woff2 b/docs/fonts/bootstrap/glyphicons-halflings-regular.woff2
new file mode 100755
index 0000000..64539b5
Binary files /dev/null and b/docs/fonts/bootstrap/glyphicons-halflings-regular.woff2 differ

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/globalreplicatedlog/main.rst
----------------------------------------------------------------------
diff --git a/docs/globalreplicatedlog/main.rst b/docs/globalreplicatedlog/main.rst
deleted file mode 100644
index 1e554bd..0000000
--- a/docs/globalreplicatedlog/main.rst
+++ /dev/null
@@ -1,99 +0,0 @@
-Global Replicated Log
-=====================
-
-A typical DistributedLog setup lives within a single datacenter. But a global setup is required
-to provide replicated logs for a distributed key/value store that needs strong consistency
-across multiple datacenters. `Global` here means across datacenters, as opposed to `Local`,
-which means within a datacenter.
-
-A global setup of DistributedLog is organized as a set of `regions`, where each region is the
-rough analog of a local setup. Regions are the unit of administrative deployment. The set of
-regions is also the set of locations across which data can be replicated. Regions can be added
-to or removed from a running system as new datacenters are brought into service and old ones
-are turned off, respectively. Regions are also the unit of physical isolation: there may be one
-or more regions in a datacenter if they have isolated power or network supplies.
-
-.. figure:: ../images/globalreplicatedlog.png
-   :align: center
-
-   Figure 1. Global Replicated Log
-
-Figure 1 illustrates the servers in a `Global Replicated Log` setup. There is no inter-datacenter
-communication between write proxies or log segment storage nodes. The only component whose hosts
-communicate across datacenters is the "Global" metadata store, which is a global setup of
-ZooKeeper. Write clients talk to the write proxies in their local region to bootstrap the
-ownership cache, and are redirected to the correct write proxies in other regions through direct
-TCP connections. Readers identify the regions of the log segment storage nodes according to the
-`region aware` placement policy, read from the local region most of the time, and speculatively
-try remote regions.
-
-Region Aware Data Placement Policy
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The region aware placement policy uses hierarchical allocation, wherein nodes are allocated so that
-data is spread uniformly across the available regions; within each region, it uses the `rack-aware`
-placement policy to spread the data uniformly across the available racks.
-
-The region aware placement policy is governed by a parameter, *minRegionsForDurability*, which
-ensures that the ack quorum covers at least that many distinct regions. This guarantees that the
-system can survive the failure of `(totalRegions - minRegionsForDurability)` regions without loss
-of availability. For example, if we have bookie nodes in *5* regions and *minRegionsForDurability*
-is *3*, then we can survive the failure of `(5 - 3) = 2` regions.
-
-The placement algorithm follows the following simple invariant:
-
-::
-
-    There is no combination of nodes that would satisfy the ack quorum with
-    less than "minRegionsForDurability" responses.
-
-
-This invariant ensures that satisfying the ack quorum is sufficient to guarantee that the entry
-has been made durable in *minRegionsForDurability* regions.
-
-The *minRegionsForDurability* requirement enforces constraints on the minimum ack quorum: we want to ensure
-that even when we run in degraded mode - *i.e. when only a subset of the regions are available* - nodes cannot
-be allocated in such a way that the ack quorum would be satisfied by fewer than *minRegionsForDurability*
-regions.
-
-For instance, consider the following scenario with three regions, each containing 20 bookie nodes:
-
-::
-
-    minRegionsForDurability = 2
-    ensemble size = write quorum = 15
-    ack quorum = 8
-
-
-Let's say that one of the regions is currently unavailable and we still want to ensure that writes can continue.
-The ensemble placement may then have to choose bookies from the two available regions. Given that *15* bookies have
-to be allocated, at least *8* bookies must come from one of the remaining regions - but with an ack quorum
-of *8*, we run the risk of satisfying the ack quorum with bookies from a single region. Therefore we must require
-that the ack quorum be greater than *8*.
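-
-This invariant is easy to check mechanically. Below is a minimal sketch (a hypothetical
-helper, not part of the DistributedLog or BookKeeper API) that validates an ack quorum
-size against a proposed per-region allocation of bookies:
-
-::
-
-    // Hypothetical helper - not part of the BookKeeper API.
-    // Returns true iff no ack quorum can be satisfied by fewer than
-    // minRegionsForDurability regions under this per-region allocation.
-    static boolean isDurablePlacement(int[] bookiesPerRegion,
-                                      int ackQuorumSize,
-                                      int minRegionsForDurability) {
-        int[] sorted = bookiesPerRegion.clone();
-        java.util.Arrays.sort(sorted); // ascending
-        // Sum the (minRegionsForDurability - 1) largest regions. If that sum
-        // can reach the ack quorum, a quorum could be satisfied by too few regions.
-        int maxFromTooFewRegions = 0;
-        for (int i = 0; i < minRegionsForDurability - 1 && i < sorted.length; i++) {
-            maxFromTooFewRegions += sorted[sorted.length - 1 - i];
-        }
-        return maxFromTooFewRegions < ackQuorumSize;
-    }
-
-For the scenario above, `isDurablePlacement(new int[]{7, 8}, 8, 2)` returns false, while
-`isDurablePlacement(new int[]{7, 8}, 9, 2)` returns true - matching the conclusion that
-the ack quorum must be greater than *8*.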
-
-Cross Region Speculative Reads
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As discussed before, read requests can be satisfied by any replica of the data. However, for high availability,
-speculative requests are sent to multiple copies to ensure that at least one of the requests returns within
-the time specified by the *SLA*. The reader consults the data placement policy to get the list of replicas that
-can satisfy the request, in order of preference. This order is decided as follows:
-
-* The first node in the list is always the bookie node that is closest to the client - if more than one such node exists, one is chosen at random.
-* The second node is usually the closest node in a different failure domain. In the case of a two-level hierarchy, that would be a node in a different rack.
-* The third node is chosen from a different region.
-
-The delay between successive speculative read requests ensures that the probability of sending the *nth*
-speculative read request decays exponentially with *n*. This keeps the number of requests that go to
-farther nodes at a minimum, while crossing failure domains in the first few speculative requests improves
-the fault tolerance of the reader. Transient node failures are transparently handled by this simple and
-generalizable speculative read policy. It can be thought of as the most granular form of failover, where
-each request essentially fails over to an alternate node if the primary node it attempted to access is
-unavailable. In practice we have found this to also better handle network congestion, where routes between
-specific pairs of nodes may become unavailable without necessarily making the nodes completely inaccessible.
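-
-To make the decay concrete, here is an illustrative sketch of a speculative read delay
-schedule (the parameter names are assumptions for illustration, not actual DistributedLog
-settings):
-
-::
-
-    // Illustrative sketch: each successive speculative request waits
-    // "multiplier" times longer, so the probability that the nth request
-    // is ever sent decays exponentially with n.
-    static long speculativeDelayMs(int attempt, long firstDelayMs,
-                                   double multiplier, long maxDelayMs) {
-        double delay = firstDelayMs * Math.pow(multiplier, attempt - 1);
-        return Math.min((long) delay, maxDelayMs);
-    }
-
-With `firstDelayMs = 10` and `multiplier = 2`, successive speculative requests are scheduled
-10ms, 20ms and 40ms after the previous attempt, so only the slowest responses ever trigger
-requests to the farthest nodes.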
-
-In addition to static decisions based on the location of the bookie nodes, we can also make dynamic decisions
-based on observed latency or failure rates from specific bookies. These statistics are tracked by the bookie
-client and are used to influence the order in which speculative read requests are scheduled. This again
-captures partial network outages that affect specific routes within the network.
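-
-A minimal sketch of such dynamic reordering (illustrative only; the actual bookie client
-implementation differs) could order candidates by an exponentially-decayed latency estimate:
-
-::
-
-    // Illustrative sketch - not the actual bookie client implementation.
-    // Order candidate bookies by an observed (e.g. EWMA) latency estimate;
-    // unknown bookies sort first so new nodes get measured.
-    static List<BookieSocketAddress> orderByObservedLatency(
-            List<BookieSocketAddress> candidates,
-            Map<BookieSocketAddress, Double> ewmaLatencyMs) {
-        List<BookieSocketAddress> ordered = new ArrayList<>(candidates);
-        ordered.sort(java.util.Comparator.comparingDouble(
-                b -> ewmaLatencyMs.getOrDefault(b, 0.0)));
-        return ordered;
-    }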

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/images/distributedlog_logo_l.png
----------------------------------------------------------------------
diff --git a/docs/images/distributedlog_logo_l.png b/docs/images/distributedlog_logo_l.png
new file mode 100644
index 0000000..2af136d
Binary files /dev/null and b/docs/images/distributedlog_logo_l.png differ

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/images/distributedlog_logo_navbar.png
----------------------------------------------------------------------
diff --git a/docs/images/distributedlog_logo_navbar.png b/docs/images/distributedlog_logo_navbar.png
new file mode 100644
index 0000000..6fdce90
Binary files /dev/null and b/docs/images/distributedlog_logo_navbar.png differ

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/images/distributedlog_logo_s.png
----------------------------------------------------------------------
diff --git a/docs/images/distributedlog_logo_s.png b/docs/images/distributedlog_logo_s.png
new file mode 100644
index 0000000..5e97432
Binary files /dev/null and b/docs/images/distributedlog_logo_s.png differ

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/images/favicon.ico
----------------------------------------------------------------------
diff --git a/docs/images/favicon.ico b/docs/images/favicon.ico
new file mode 100644
index 0000000..e1ee3ec
Binary files /dev/null and b/docs/images/favicon.ico differ

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/implementation/core.rst
----------------------------------------------------------------------
diff --git a/docs/implementation/core.rst b/docs/implementation/core.rst
deleted file mode 100644
index e69de29..0000000

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/implementation/main.rst
----------------------------------------------------------------------
diff --git a/docs/implementation/main.rst b/docs/implementation/main.rst
deleted file mode 100644
index c2b6b5f..0000000
--- a/docs/implementation/main.rst
+++ /dev/null
@@ -1,9 +0,0 @@
-Implementation
-==============
-
-.. toctree::
-   :maxdepth: 1
-
-   storage
-   core
-   writeproxy

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/implementation/storage.rst
----------------------------------------------------------------------
diff --git a/docs/implementation/storage.rst b/docs/implementation/storage.rst
deleted file mode 100644
index ed2bba1..0000000
--- a/docs/implementation/storage.rst
+++ /dev/null
@@ -1,313 +0,0 @@
-Storage
-=======
-
-This page describes some implementation details of the storage layer.
-
-Ensemble Placement Policy
--------------------------
-
-`EnsemblePlacementPolicy` encapsulates the algorithm that the bookkeeper client uses to select a number of bookies from
-the cluster as an ensemble for storing data. The algorithm is typically based on the data input as well as the network
-topology properties.
-
-By default, BookKeeper offers a `RackawareEnsemblePlacementPolicy` for placing the data across racks within a
-datacenter, and a `RegionAwareEnsemblePlacementPolicy` for placing the data across multiple datacenters.
-
-How does EnsemblePlacementPolicy work?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The interface of `EnsemblePlacementPolicy` is described below.
-
-::
-
-    public interface EnsemblePlacementPolicy {
-
-        /**
-         * Initialize the policy.
-         *
-         * @param conf client configuration
-         * @param optionalDnsResolver dns resolver
-         * @param hashedWheelTimer timer
-         * @param featureProvider feature provider
-         * @param statsLogger stats logger
-         * @param alertStatsLogger stats logger for alerts
-         */
-        public EnsemblePlacementPolicy initialize(ClientConfiguration conf,
-                                                  Optional<DNSToSwitchMapping> optionalDnsResolver,
-                                                  HashedWheelTimer hashedWheelTimer,
-                                                  FeatureProvider featureProvider,
-                                                  StatsLogger statsLogger,
-                                                  AlertStatsLogger alertStatsLogger);
-
-        /**
-         * Uninitialize the policy
-         */
-        public void uninitalize();
-
-        /**
-         * A consistent view of the cluster (what bookies are available as writable, what bookies are available as
-         * readonly) is updated when any changes happen in the cluster.
-         *
-         * @param writableBookies
-         *          All the bookies in the cluster available for write/read.
-         * @param readOnlyBookies
-         *          All the bookies in the cluster available for readonly.
-         * @return the dead bookies during this cluster change.
-         */
-        public Set<BookieSocketAddress> onClusterChanged(Set<BookieSocketAddress> writableBookies,
-                                                         Set<BookieSocketAddress> readOnlyBookies);
-
-        /**
-         * Choose <i>ensembleSize</i> bookies for ensemble. If the count is more than the number of available
-         * nodes, {@link BKNotEnoughBookiesException} is thrown.
-         *
-         * @param ensembleSize
-         *          Ensemble Size
-         * @param writeQuorumSize
-         *          Write Quorum Size
-         * @param ackQuorumSize
-         *          Ack Quorum Size
-         * @param excludeBookies
-         *          Bookies that should not be considered as targets.
-         * @return list of bookies chosen as targets.
-         * @throws BKNotEnoughBookiesException if not enough bookies available.
-         */
-        public ArrayList<BookieSocketAddress> newEnsemble(int ensembleSize, int writeQuorumSize, int ackQuorumSize,
-                                                          Set<BookieSocketAddress> excludeBookies) throws BKNotEnoughBookiesException;
-
-        /**
-         * Choose a new bookie to replace <i>bookieToReplace</i>. If no bookie is available in the cluster,
-         * {@link BKNotEnoughBookiesException} is thrown.
-         *
-         * @param bookieToReplace
-         *          bookie to replace
-         * @param excludeBookies
-         *          bookies that should not be considered as candidate.
-         * @return the bookie chosen as target.
-         * @throws BKNotEnoughBookiesException
-         */
-        public BookieSocketAddress replaceBookie(int ensembleSize, int writeQuorumSize, int ackQuorumSize,
-                                                 Collection<BookieSocketAddress> currentEnsemble, BookieSocketAddress bookieToReplace,
-                                                 Set<BookieSocketAddress> excludeBookies) throws BKNotEnoughBookiesException;
-
-        /**
-         * Reorder the read sequence of a given write quorum <i>writeSet</i>.
-         *
-         * @param ensemble
-         *          Ensemble to read entries.
-         * @param writeSet
-         *          Write quorum to read entries.
-         * @param bookieFailureHistory
-         *          Observed failures on the bookies
-         * @return read sequence of bookies
-         */
-        public List<Integer> reorderReadSequence(ArrayList<BookieSocketAddress> ensemble,
-                                                 List<Integer> writeSet, Map<BookieSocketAddress, Long> bookieFailureHistory);
-
-
-        /**
-         * Reorder the read last add confirmed sequence of a given write quorum <i>writeSet</i>.
-         *
-         * @param ensemble
-         *          Ensemble to read entries.
-         * @param writeSet
-         *          Write quorum to read entries.
-         * @param bookieFailureHistory
-         *          Observed failures on the bookies
-         * @return read sequence of bookies
-         */
-        public List<Integer> reorderReadLACSequence(ArrayList<BookieSocketAddress> ensemble,
-                                                List<Integer> writeSet, Map<BookieSocketAddress, Long> bookieFailureHistory);
-    }
-
-The methods in this interface cover three parts - 1) initialization and uninitialization; 2) how to choose bookies to
-place data; and 3) how to choose bookies for speculative reads.
-
-Initialization and uninitialization
-___________________________________
-
-The ensemble placement policy is constructed via JVM reflection when the bookkeeper client is constructed. After the
-`EnsemblePlacementPolicy` is constructed, the bookkeeper client calls `#initialize` to initialize the placement policy.
-
-The `#initialize` method takes a few resources from bookkeeper for instantiating itself. These resources include:
-
-1. `ClientConfiguration`: The client configuration that is used for constructing the bookkeeper client. The implementation of the placement policy could obtain its settings from this configuration.
-2. `DNSToSwitchMapping`: The DNS resolver for the ensemble policy to build the network topology of the bookie cluster. It is optional.
-3. `HashedWheelTimer`: A hashed wheel timer that can be used for timing-related work. For example, a stabilized network topology could use it to delay topology changes, reducing the impact of flapping bookie registrations caused by zookeeper session expirations.
-4. `FeatureProvider`: A feature provider that the policy could use for enabling or disabling its offered features. For example, a region-aware placement policy could offer features to disable placing data in a specific region at runtime.
-5. `StatsLogger`: A stats logger for exposing stats.
-6. `AlertStatsLogger`: An alert stats logger for exposing critical stats that need to be alerted on.
-
-There is a single instance of the ensemble placement policy per bookkeeper client. The instance is uninitialized
-when the bookkeeper client is closed, and the implementation of a placement policy is responsible for releasing
-all the resources allocated during `#initialize`.
-
-How to choose bookies to place
-______________________________
-
-The bookkeeper client discovers the list of bookies from zookeeper via `BookieWatcher` - whenever there are bookie
-changes, the ensemble placement policy is notified with the new list of bookies via
-`onClusterChanged(writableBookies, readOnlyBookies)`. The implementation of the ensemble placement policy reacts to
-those changes by rebuilding its network topology, so that subsequent operations like `newEnsemble` or `replaceBookie`
-operate on the new topology.
-
-newEnsemble(ensembleSize, writeQuorumSize, ackQuorumSize, excludeBookies)
-    Choose `ensembleSize` bookies for an ensemble. If the count exceeds the number of available nodes,
-    `BKNotEnoughBookiesException` is thrown.
-
-replaceBookie(ensembleSize, writeQuorumSize, ackQuorumSize, currentEnsemble, bookieToReplace, excludeBookies)
-    Choose a new bookie to replace `bookieToReplace`. If no bookie is available in the cluster,
-    `BKNotEnoughBookiesException` is thrown.
-
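-A typical flow, sketched below (simplified; error handling omitted), is to ask the policy
-for an ensemble and then for replacements as bookies fail:
-
-::
-
-    // Sketch: driving the placement policy (simplified, no error handling).
-    ArrayList<BookieSocketAddress> ensemble =
-        policy.newEnsemble(5, 3, 2, Collections.<BookieSocketAddress>emptySet());
-
-    // If a bookie in the ensemble fails mid-write, ask the policy for a
-    // replacement, excluding the failed bookie from the candidates.
-    BookieSocketAddress failed = ensemble.get(0);
-    BookieSocketAddress replacement = policy.replaceBookie(
-        5, 3, 2, ensemble, failed, Collections.singleton(failed));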
-
-Both `RackAware` and `RegionAware` placement policies are `TopologyAware` policies. They build a `NetworkTopology` in
-response to bookie changes, use it for ensemble placement, and ensure rack/region coverage for write quorums - a write
-quorum should be covered by at least two racks or regions.
-
-Network Topology
-^^^^^^^^^^^^^^^^
-
-The network topology presents a cluster of bookies in a hierarchical tree structure. For example, a bookie cluster
-may consist of many data centers (aka regions) filled with racks of machines. In this tree structure, leaves
-represent bookies and inner nodes represent switches/routers that manage traffic in/out of regions or racks.
-
-For example, say there are 3 bookies in region `A`: `bk1`, `bk2` and `bk3`, with network locations
-`/region-a/rack-1/bk1`, `/region-a/rack-1/bk2` and `/region-a/rack-2/bk3`. The network topology then looks like this:
-
-::
-
-              root
-               |
-           region-a
-             /  \
-        rack-1  rack-2
-         /  \       \
-       bk1  bk2     bk3
-
-As another example, say there are 4 bookies spanning two regions `A` and `B`: `bk1`, `bk2`, `bk3` and `bk4`, with
-network locations `/region-a/rack-1/bk1`, `/region-a/rack-1/bk2`, `/region-b/rack-2/bk3` and `/region-b/rack-2/bk4`.
-The network topology then looks like this:
-
-::
-
-                    root
-                    /  \
-             region-a  region-b
-                |         |
-              rack-1    rack-2
-               / \       / \
-             bk1  bk2  bk3  bk4
-
-The network location of each bookie is resolved by a `DNSResolver` (the interface is described below). The `DNSResolver`
-resolves a list of DNS-names or IP-addresses into a list of network locations. The network location that is returned
-must be a network path of the form `/region/rack`, where `/` is the root, and `region` is the region id representing
-the data center where `rack` is located. The network topology of the bookie cluster determines the number of
-components in the network path.
-
-::
-
-    /**
-     * An interface that must be implemented to allow pluggable
-     * DNS-name/IP-address to RackID resolvers.
-     *
-     */
-    @Beta
-    public interface DNSToSwitchMapping {
-        /**
-         * Resolves a list of DNS-names/IP-addresses and returns back a list of
-         * switch information (network paths). One-to-one correspondence must be
-         * maintained between the elements in the lists.
-         * Consider an element in the argument list - x.y.com. The switch information
-         * that is returned must be a network path of the form /foo/rack,
-         * where / is the root, and 'foo' is the switch where 'rack' is connected.
-         * Note the hostname/ip-address is not part of the returned path.
-         * The network topology of the cluster would determine the number of
-         * components in the network path.
-         * <p/>
-         *
-         * If a name cannot be resolved to a rack, the implementation
-         * should return {@link NetworkTopology#DEFAULT_RACK}. This
-         * is what the bundled implementations do, though it is not a formal requirement
-         *
-         * @param names the list of hosts to resolve (can be empty)
-         * @return list of resolved network paths.
-         * If <i>names</i> is empty, the returned list is also empty
-         */
-        public List<String> resolve(List<String> names);
-
-        /**
-         * Reload all of the cached mappings.
-         *
-         * If there is a cache, this method will clear it, so that future accesses
-         * will get a chance to see the new data.
-         */
-        public void reloadCachedMappings();
-    }
-
-By default, the network topology responds to bookie changes immediately. That means if a bookie's znode appears in or
-disappears from zookeeper, the network topology adds or removes the bookie immediately. This introduces instability
-when a bookie's zookeeper registration flaps. To address this, there is a `StabilizeNetworkTopology`, which delays
-removing bookies from the network topology when they disappear from zookeeper. It can be enabled by setting the
-following option.
-
-::
-
-    # enable stabilize network topology by setting it to a positive value.
-    bkc.networkTopologyStabilizePeriodSeconds=10
-
-
-RackAware and RegionAware
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The `RackAware` placement policy simply chooses bookies from different racks in the built network topology. It
-guarantees that a write quorum will cover at least two racks.
-
-The `RegionAware` placement policy is a hierarchical placement policy: it chooses an equal number of bookies from each
-region, and within each region it uses the `RackAware` placement policy to choose bookies from racks. For example, say
-there are 3 regions - `region-a`, `region-b` and `region-c` - and an application wants to allocate a 15-bookie
-ensemble. First, the policy figures out that there are 3 regions and that it should allocate 5 bookies from each
-region. Second, for each region, it uses the `RackAware` placement policy to choose 5 bookies.
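-
-The per-region split is simple arithmetic. A sketch (illustrative only; the actual policy
-also honors durability constraints and excluded bookies):
-
-::
-
-    // Illustrative sketch: split an ensemble evenly across regions,
-    // spreading any remainder one bookie at a time.
-    static Map<String, Integer> splitAcrossRegions(List<String> regions,
-                                                   int ensembleSize) {
-        Map<String, Integer> allocation = new LinkedHashMap<>();
-        int base = ensembleSize / regions.size();      // e.g. 15 / 3 = 5
-        int remainder = ensembleSize % regions.size(); // e.g. 15 % 3 = 0
-        for (String region : regions) {
-            allocation.put(region, base + (remainder-- > 0 ? 1 : 0));
-        }
-        return allocation;
-    }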
-
-How to choose bookies to do speculative reads?
-______________________________________________
-
-`reorderReadSequence` and `reorderReadLACSequence` are two methods exposed by the placement policy to help the client
-determine a better read sequence according to the network topology and the bookie failure history.
-
-In the `RackAware` placement policy, reads are tried in the following sequence:
-
-- writable bookies that have not experienced failures before
-- writable bookies that have experienced failures before
-- readonly bookies
-- bookies that have already disappeared from the network topology
-
-In the `RegionAware` placement policy, reads are tried in a similar sequence to the `RackAware` placement policy.
-There is a slight difference in trying writable bookies: after trying every 2 bookies from the local region, it tries
-a bookie from a remote region. This keeps read latency low even when there are network issues within the local region.
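-
-A sketch of that interleaving (illustrative only; the actual policy also accounts for
-failure history and bookie state):
-
-::
-
-    // Illustrative sketch: after every 2 local-region bookies, try one
-    // bookie from a remote region.
-    static List<BookieSocketAddress> interleave(List<BookieSocketAddress> local,
-                                                List<BookieSocketAddress> remote) {
-        List<BookieSocketAddress> order = new ArrayList<>();
-        int li = 0, ri = 0;
-        while (li < local.size() || ri < remote.size()) {
-            for (int k = 0; k < 2 && li < local.size(); k++) {
-                order.add(local.get(li++));
-            }
-            if (ri < remote.size()) {
-                order.add(remote.get(ri++));
-            }
-        }
-        return order;
-    }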
-
-How to enable different EnsemblePlacementPolicy?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Users can configure different ensemble placement policies by setting the following options in the distributedlog
-configuration files.
-
-::
-
-    # enable rack-aware ensemble placement policy
-    bkc.ensemblePlacementPolicy=org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy
-    # enable region-aware ensemble placement policy
-    bkc.ensemblePlacementPolicy=org.apache.bookkeeper.client.RegionAwareEnsemblePlacementPolicy
-
-Both `RackawareEnsemblePlacementPolicy` and `RegionAwareEnsemblePlacementPolicy` build the network topology of
-bookies via a `DNSResolver`. The default `DNSResolver` is a script-based DNS resolver: it reads the configuration
-parameters, executes any defined script, handles errors and resolves domain names to network locations. The script
-is configured via the following setting in the distributedlog configuration.
-
-::
-
-    bkc.networkTopologyScriptFileName=/path/to/dns/resolver/script
-
-Alternatively, the `DNSResolver` can be configured via the following setting and loaded via reflection. `DNSResolverForRacks`
-is a good example to check out when customizing a DNS resolver for your network environment.
-
-::
-
-    bkEnsemblePlacementDnsResolverClass=com.twitter.distributedlog.net.DNSResolverForRacks
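-
-For reference, a minimal custom resolver implementing the `DNSToSwitchMapping` interface
-shown earlier might look like the following sketch (the mapping is hard-coded for
-illustration; a real resolver would consult your inventory system):
-
-::
-
-    // Sketch: a static DNSToSwitchMapping implementation. The hard-coded
-    // network location is for illustration only.
-    public class StaticDNSResolver implements DNSToSwitchMapping {
-        @Override
-        public List<String> resolve(List<String> names) {
-            List<String> paths = new ArrayList<String>(names.size());
-            for (String name : names) {
-                // Map every host to a fixed region/rack; a real implementation
-                // would look the host up and fall back to the default rack.
-                paths.add("/region-a/rack-1");
-            }
-            return paths;
-        }
-
-        @Override
-        public void reloadCachedMappings() {
-            // No cache in this static example.
-        }
-    }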
-

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/implementation/writeproxy.rst
----------------------------------------------------------------------
diff --git a/docs/implementation/writeproxy.rst b/docs/implementation/writeproxy.rst
deleted file mode 100644
index e69de29..0000000

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 0000000..dba7bba
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,49 @@
+---
+layout: default
+title: "Apache DistributedLog Documentation"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Overview
+
+This documentation is for Apache DistributedLog version {{ site.distributedlog_version }}.
+
+Apache DistributedLog is a high-throughput, low-latency replicated log service, offering durability,
+replication and strong consistency as essentials for building reliable real-time applications. 
+
+## First Steps
+
+- **Concepts**: Start with the [basic concepts]({{ site.baseurl }}/basics/introduction) of DistributedLog.
+  This will help you to fully understand the other parts of the documentation, including the setup, integration,
+  and operation guides. It is highly recommended to read this first.
+
+- **Quickstarts**: [Run DistributedLog]({{ site.baseurl }}/start/quickstart) on your local machine or follow the tutorial to [write a simple program]({{ site.baseurl }}/tutorials/basic-1) to interact with _DistributedLog_.
+
+- **Setup**: The [docker]({{ site.baseurl }}/deployment/docker) and [cluster]({{ site.baseurl }}/deployment/cluster) setup guides show how to deploy the DistributedLog Stack.
+
+- **Programming Guide**: You can check out our guides about [basic concepts]({{ site.baseurl }}/basics/introduction) and the [Core Library API]({{ site.baseurl }}/user_guide/api/core) or [Proxy Client API]({{ site.baseurl }}/user_guide/api/proxy) to learn how to use DistributedLog to build your reliable real-time services.
+
+## Next Steps
+
+- **Design Documents**: Learn about the [architecture]({{ site.baseurl }}/user_guide/architecture/main), [design considerations]({{ site.baseurl }}/user_guide/design/main) and [implementation details]({{ site.baseurl }}/user_guide/implementation/main) of DistributedLog.
+
+- **Tutorials**: You can check out the [tutorials]({{ site.baseurl }}/tutorials/main) on how to build real applications.
+
+- **Operation Guide**: You can check out our guides about how to [operate]({{ site.baseurl }}/admin_guide/main) the DistributedLog Stack.

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/index.rst
----------------------------------------------------------------------
diff --git a/docs/index.rst b/docs/index.rst
deleted file mode 100644
index 72b9c69..0000000
--- a/docs/index.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-.. markdowninclude:: ../README.md
-
-Documentation
-=============
-
-.. toctree::
-   :maxdepth: 2
-
-   download
-   basics/main
-   api/main
-   configuration/main
-   considerations/main
-   architecture/main
-   design/main
-   globalreplicatedlog/main
-   implementation/main
-   operations/main
-   performance/main
-   references/main
-   tutorials/main
-   developer/main
-   faq

http://git-wip-us.apache.org/repos/asf/incubator-distributedlog/blob/b169db04/docs/js/bootstrap-sprockets.js
----------------------------------------------------------------------
diff --git a/docs/js/bootstrap-sprockets.js b/docs/js/bootstrap-sprockets.js
new file mode 100755
index 0000000..37468b3
--- /dev/null
+++ b/docs/js/bootstrap-sprockets.js
@@ -0,0 +1,12 @@
+//= require ./bootstrap/affix
+//= require ./bootstrap/alert
+//= require ./bootstrap/button
+//= require ./bootstrap/carousel
+//= require ./bootstrap/collapse
+//= require ./bootstrap/dropdown
+//= require ./bootstrap/modal
+//= require ./bootstrap/scrollspy
+//= require ./bootstrap/tab
+//= require ./bootstrap/transition
+//= require ./bootstrap/tooltip
+//= require ./bootstrap/popover