Posted to commits@bookkeeper.apache.org by si...@apache.org on 2017/08/11 10:11:37 UTC

[bookkeeper] branch branch-4.5 updated (3bbb6aa -> 6028fe2)

This is an automated email from the ASF dual-hosted git repository.

sijie pushed a change to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git.


    from 3bbb6aa  Issue #416: Some small post-release documentation fixes
     new ebd02a5  ISSUE #397: [CI] publish-website job failed when mvn:release bump version to 4.6.0-SNAPSHOT
     new 6f6a32f  ISSUE #338: add first draft Docker image including community suggestions
     new 6a55406  Fix zkCli issue
     new 77d2bdf  Flip apache baseurl from `/test/content` to `/`
     new d14a397  Fix typo: TSL should be TLS in server_conf
     new 3b59b7c  release 4.5.0: update website
     new 17721bc  406
     new 004f4b5  ISSUE #427: [WEBSITE] sidebar doesn't work on documentation index page
     new 0617810  ISSUE #356: Release notes 4.5.0
     new 6028fe2  ISSUE #432: Add "Google Analytics" to the website

The 10 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 bookkeeper-server/conf/bk_server.conf              |  22 +-
 docker/Dockerfile                                  |  58 +++
 docker/Makefile                                    | 196 ++++++++
 docker/README.md                                   | 174 +++++++
 docker/scripts/apply-config-from-env.py            |  85 ++++
 docker/scripts/entrypoint.sh                       |  72 +++
 .../scripts/healthcheck.sh                         |  23 +-
 site/Makefile                                      |   2 +-
 site/_config.apache.yml                            |   2 +-
 site/_config.yml                                   |   5 +-
 site/_data/config/bk_server.yaml                   |  22 +-
 site/_includes/google-analytics.html               |  26 ++
 site/_includes/navbar.html                         |   4 +-
 site/_layouts/default.html                         |   3 +
 site/docs/{latest => 4.5.0}/admin/autorecovery.md  |   0
 site/docs/{latest => 4.5.0}/admin/bookies.md       |   2 +-
 .../{latest => 4.5.0}/admin/geo-replication.md     |   0
 site/docs/{latest => 4.5.0}/admin/metrics.md       |   2 +-
 site/docs/{latest => 4.5.0}/admin/perf.md          |   0
 site/docs/{latest => 4.5.0}/admin/placement.md     |   0
 site/docs/{latest => 4.5.0}/admin/upgrade.md       |   0
 .../{latest => 4.5.0}/api/distributedlog-api.md    |   4 +-
 site/docs/{latest => 4.5.0}/api/ledger-adv-api.md  |   0
 site/docs/{latest => 4.5.0}/api/ledger-api.md      |   6 +-
 site/docs/{latest => 4.5.0}/api/overview.md        |   0
 site/docs/{latest => 4.5.0}/deployment/dcos.md     |   2 +-
 .../{latest => 4.5.0}/deployment/kubernetes.md     |   0
 site/docs/{latest => 4.5.0}/deployment/manual.md   |   2 +-
 .../docs/{latest => 4.5.0}/development/codebase.md |   0
 .../docs/{latest => 4.5.0}/development/protocol.md |   0
 site/docs/{latest => 4.5.0}/example.md             |   0
 .../{latest => 4.5.0}/getting-started/concepts.md  |   0
 .../getting-started/installation.md                |   0
 .../getting-started/run-locally.md                 |   0
 .../index.md => 4.5.0/overview/overview.md}        |  18 +-
 site/docs/4.5.0/overview/releaseNotes.md           | 509 +++++++++++++++++++++
 site/docs/4.5.0/overview/releaseNotesTemplate.md   |  17 +
 site/docs/{latest => 4.5.0}/reference/cli.md       |   0
 site/docs/{latest => 4.5.0}/reference/config.md    |   2 +-
 site/docs/{latest => 4.5.0}/reference/metrics.md   |   0
 site/docs/{latest => 4.5.0}/security/overview.md   |   0
 site/docs/{latest => 4.5.0}/security/sasl.md       |   0
 site/docs/{latest => 4.5.0}/security/tls.md        |   0
 site/docs/{latest => 4.5.0}/security/zookeeper.md  |   0
 site/docs/latest/admin/bookies.md                  |   2 +-
 site/docs/latest/admin/metrics.md                  |   2 +-
 site/docs/latest/deployment/manual.md              |   2 +-
 .../docs/latest/{index.md => overview/overview.md} |  16 +-
 site/docs/latest/{ => overview}/releaseNotes.md    |   2 +-
 .../latest/{ => overview}/releaseNotesTemplate.md  |   2 +-
 site/docs/latest/reference/config.md               |   2 +-
 site/releases.md                                   |  10 +
 site/scripts/javadoc-gen.sh                        |   2 +-
 site/scripts/release.sh                            |   2 +-
 54 files changed, 1222 insertions(+), 78 deletions(-)
 create mode 100644 docker/Dockerfile
 create mode 100644 docker/Makefile
 create mode 100644 docker/README.md
 create mode 100755 docker/scripts/apply-config-from-env.py
 create mode 100755 docker/scripts/entrypoint.sh
 copy bookkeeper-server/src/test/resources/networkmappingscript.sh => docker/scripts/healthcheck.sh (62%)
 create mode 100644 site/_includes/google-analytics.html
 copy site/docs/{latest => 4.5.0}/admin/autorecovery.md (100%)
 copy site/docs/{latest => 4.5.0}/admin/bookies.md (99%)
 copy site/docs/{latest => 4.5.0}/admin/geo-replication.md (100%)
 copy site/docs/{latest => 4.5.0}/admin/metrics.md (99%)
 copy site/docs/{latest => 4.5.0}/admin/perf.md (100%)
 copy site/docs/{latest => 4.5.0}/admin/placement.md (100%)
 copy site/docs/{latest => 4.5.0}/admin/upgrade.md (100%)
 copy site/docs/{latest => 4.5.0}/api/distributedlog-api.md (99%)
 copy site/docs/{latest => 4.5.0}/api/ledger-adv-api.md (100%)
 copy site/docs/{latest => 4.5.0}/api/ledger-api.md (99%)
 copy site/docs/{latest => 4.5.0}/api/overview.md (100%)
 copy site/docs/{latest => 4.5.0}/deployment/dcos.md (98%)
 copy site/docs/{latest => 4.5.0}/deployment/kubernetes.md (100%)
 copy site/docs/{latest => 4.5.0}/deployment/manual.md (99%)
 copy site/docs/{latest => 4.5.0}/development/codebase.md (100%)
 copy site/docs/{latest => 4.5.0}/development/protocol.md (100%)
 copy site/docs/{latest => 4.5.0}/example.md (100%)
 copy site/docs/{latest => 4.5.0}/getting-started/concepts.md (100%)
 copy site/docs/{latest => 4.5.0}/getting-started/installation.md (100%)
 copy site/docs/{latest => 4.5.0}/getting-started/run-locally.md (100%)
 copy site/docs/{latest/index.md => 4.5.0/overview/overview.md} (66%)
 create mode 100644 site/docs/4.5.0/overview/releaseNotes.md
 create mode 100644 site/docs/4.5.0/overview/releaseNotesTemplate.md
 copy site/docs/{latest => 4.5.0}/reference/cli.md (100%)
 copy site/docs/{latest => 4.5.0}/reference/config.md (90%)
 copy site/docs/{latest => 4.5.0}/reference/metrics.md (100%)
 copy site/docs/{latest => 4.5.0}/security/overview.md (100%)
 copy site/docs/{latest => 4.5.0}/security/sasl.md (100%)
 copy site/docs/{latest => 4.5.0}/security/tls.md (100%)
 copy site/docs/{latest => 4.5.0}/security/zookeeper.md (100%)
 rename site/docs/latest/{index.md => overview/overview.md} (68%)
 rename site/docs/latest/{ => overview}/releaseNotes.md (91%)
 rename site/docs/latest/{ => overview}/releaseNotesTemplate.md (82%)

-- 
To stop receiving notification emails like this one, please contact
['"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>'].

[bookkeeper] 06/10: release 4.5.0: update website

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit 3b59b7c76bc1027e2bf2c5299f8e4fcdcdd34367
Author: Sijie Guo <si...@apache.org>
AuthorDate: Thu Aug 10 12:56:48 2017 -0700

    release 4.5.0: update website
    
    Descriptions of the changes in this PR:
    
    under `site`, run `./scripts/release.sh`
    
    what the script does:
    
    - copy the `latest` to `4.5.0`.
    - finalize the version
    - update config yml
    
    Author: Sijie Guo <si...@apache.org>
    
    Reviewers: Matteo Merli <mm...@apache.org>
    
    This closes #429 from sijie/update_website_for_4.5_release
---
 site/_config.yml                                   |   5 +-
 site/docs/4.5.0/admin/autorecovery.md              | 128 ++++++
 site/docs/{latest => 4.5.0}/admin/bookies.md       |   2 +-
 site/docs/4.5.0/admin/geo-replication.md           |  22 +
 site/docs/{latest => 4.5.0}/admin/metrics.md       |   2 +-
 site/docs/4.5.0/admin/perf.md                      |   3 +
 site/docs/4.5.0/admin/placement.md                 |   3 +
 site/docs/4.5.0/admin/upgrade.md                   |  73 ++++
 site/docs/4.5.0/api/distributedlog-api.md          | 395 +++++++++++++++++
 site/docs/4.5.0/api/ledger-adv-api.md              |  82 ++++
 site/docs/4.5.0/api/ledger-api.md                  | 473 +++++++++++++++++++++
 site/docs/4.5.0/api/overview.md                    |  17 +
 site/docs/4.5.0/deployment/dcos.md                 | 142 +++++++
 site/docs/4.5.0/deployment/kubernetes.md           |   4 +
 site/docs/{latest => 4.5.0}/deployment/manual.md   |   2 +-
 site/docs/4.5.0/development/codebase.md            |   3 +
 site/docs/4.5.0/development/protocol.md            | 148 +++++++
 site/docs/4.5.0/example.md                         |   6 +
 site/docs/4.5.0/getting-started/concepts.md        | 202 +++++++++
 site/docs/4.5.0/getting-started/installation.md    |  74 ++++
 site/docs/4.5.0/getting-started/run-locally.md     |  16 +
 .../index.md => 4.5.0/overview/overview.md}        |  18 +-
 site/docs/4.5.0/overview/releaseNotes.md           |  17 +
 site/docs/4.5.0/overview/releaseNotesTemplate.md   |  17 +
 site/docs/4.5.0/reference/cli.md                   |  10 +
 site/docs/{latest => 4.5.0}/reference/config.md    |   2 +-
 site/docs/4.5.0/reference/metrics.md               |   3 +
 site/docs/4.5.0/security/overview.md               |  21 +
 site/docs/4.5.0/security/sasl.md                   | 202 +++++++++
 site/docs/4.5.0/security/tls.md                    | 210 +++++++++
 site/docs/4.5.0/security/zookeeper.md              |  41 ++
 site/docs/latest/admin/bookies.md                  |   2 +-
 site/docs/latest/admin/metrics.md                  |   2 +-
 site/docs/latest/deployment/manual.md              |   2 +-
 site/docs/latest/index.md                          |   2 +-
 site/docs/latest/reference/config.md               |   2 +-
 site/docs/latest/releaseNotes.md                   |   2 +-
 site/docs/latest/releaseNotesTemplate.md           |   2 +-
 site/releases.md                                   |   4 +
 39 files changed, 2339 insertions(+), 22 deletions(-)

diff --git a/site/_config.yml b/site/_config.yml
index 5b22804..5340755 100644
--- a/site/_config.yml
+++ b/site/_config.yml
@@ -10,6 +10,7 @@ twitter_url: https://twitter.com/asfbookkeeper
 livereload: true
 
 versions:
+- "4.5.0"
 # [next_version_placehodler]
 
 archived_versions:
@@ -24,8 +25,8 @@ archived_versions:
 - "4.2.0"
 - "4.1.0"
 - "4.0.0"
-latest_version: "4.5.0-SNAPSHOT"
-latest_release: "4.4.0"
+latest_version: "4.6.0-SNAPSHOT"
+latest_release: "4.5.0"
 stable_release: "4.4.0"
 distributedlog_version: "2.1.0-0.4.0"
 
diff --git a/site/docs/4.5.0/admin/autorecovery.md b/site/docs/4.5.0/admin/autorecovery.md
new file mode 100644
index 0000000..64c6beb
--- /dev/null
+++ b/site/docs/4.5.0/admin/autorecovery.md
@@ -0,0 +1,128 @@
+---
+title: Using AutoRecovery
+---
+
+When a {% pop bookie %} crashes, all {% pop ledgers %} on that bookie become under-replicated. In order to bring all ledgers in your BookKeeper cluster back to full replication, you'll need to *recover* the data from any offline bookies. There are two ways to recover bookies' data:
+
+1. Using [manual recovery](#manual-recovery)
+1. Automatically, using [*AutoRecovery*](#autorecovery)
+
+## Manual recovery
+
+You can manually recover failed bookies using the [`bookkeeper`](../../reference/cli) command-line tool. You need to specify:
+
+* that the `org.apache.bookkeeper.tools.BookKeeperTools` class needs to be run
+* an IP and port for your BookKeeper cluster's ZooKeeper ensemble
+* the IP and port for the failed bookie
+
+Here's an example:
+
+```bash
+# Arguments: the ZooKeeper ip:port, followed by the ip:port of the failed bookie
+$ bookkeeper-server/bin/bookkeeper org.apache.bookkeeper.tools.BookKeeperTools \
+  zk1.example.com:2181 \
+  192.168.1.10:3181
+```
+
+If you wish, you can also specify which bookie you'd like to rereplicate to. Here's an example:
+
+```bash
+# Arguments: the ZooKeeper ip:port, the ip:port of the failed bookie,
+# and the ip:port of the bookie to rereplicate to
+$ bookkeeper-server/bin/bookkeeper org.apache.bookkeeper.tools.BookKeeperTools \
+  zk1.example.com:2181 \
+  192.168.1.10:3181 \
+  192.168.1.11:3181
+```
+
+### The manual recovery process
+
+When you initiate a manual recovery process, the following happens:
+
+1. The client (the process running the `BookKeeperTools` class) reads the metadata of active ledgers from ZooKeeper.
+1. The ledgers that contain segments from the failed bookie in their ensemble are selected.
+1. A recovery process is initiated for each ledger in this list and the rereplication process is run for each ledger.
+1. Once all the ledgers are marked as fully replicated, bookie recovery is finished.
+
+## AutoRecovery
+
+AutoRecovery is a process that:
+
+* automatically detects when a {% pop bookie %} in your BookKeeper cluster has become unavailable and then
+* rereplicates all the {% pop ledgers %} that were stored on that bookie.
+
+AutoRecovery can be run in two ways:
+
+1. On dedicated nodes in your BookKeeper cluster
+1. On the same machines on which your bookies are running
+
+## Running AutoRecovery
+
+You can start up AutoRecovery using the [`autorecovery`](../../reference/cli#bookkeeper-autorecovery) command of the [`bookkeeper`](../../reference/cli) CLI tool.
+
+```bash
+$ bookkeeper-server/bin/bookkeeper autorecovery
+```
+
+> The most important thing to ensure when starting up AutoRecovery is that the ZooKeeper connection string specified by the [`zkServers`](../../reference/config#zkServers) parameter points to the right ZooKeeper cluster.
+
+If you start up AutoRecovery on a machine that is already running a bookie, then the AutoRecovery process will run alongside the bookie on a separate thread.
+
+You can also start up AutoRecovery on a fresh machine if you'd like to create a dedicated cluster of AutoRecovery nodes.
+
+## Configuration
+
+There are a handful of AutoRecovery-related configs in the [`bk_server.conf`](../../reference/config) configuration file. For a listing of those configs, see [AutoRecovery settings](../../reference/config#autorecovery-settings).
+
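+For example, here is a minimal sketch of enabling the AutoRecovery daemon alongside a bookie when configuring it programmatically (this assumes you are using `org.apache.bookkeeper.conf.ServerConfiguration`; the corresponding `bk_server.conf` key is `autoRecoveryDaemonEnabled`):
+
+```java
+ServerConfiguration serverConf = new ServerConfiguration();
+// Run the AutoRecovery daemon (auditor + replication worker) inside the bookie process
+serverConf.setAutoRecoveryDaemonEnabled(true);
+```
+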
+## Disable AutoRecovery
+
+You can disable AutoRecovery at any time, for example during maintenance. Disabling AutoRecovery ensures that bookies' data isn't unnecessarily rereplicated when the bookie is only taken down for a short period of time, for example when the bookie is being updated or the configuration is being changed.
+
+You can disable AutoRecovery using the [`bookkeeper`](../../reference/cli#bookkeeper-shell-autorecovery) CLI tool:
+
+```bash
+$ bookkeeper-server/bin/bookkeeper shell autorecovery -disable
+```
+
+Once disabled, you can reenable AutoRecovery using the [`enable`](../../reference/cli#bookkeeper-shell-autorecovery) shell command:
+
+```bash
+$ bookkeeper-server/bin/bookkeeper shell autorecovery -enable
+```
+
+## AutoRecovery architecture
+
+AutoRecovery has two components:
+
+1. The [**auditor**](#auditor) (see the [`Auditor`](../../api/javadoc/org/apache/bookkeeper/replication/Auditor.html) class) is a singleton node that watches bookies to see if they fail and creates rereplication tasks for the ledgers on failed bookies.
+1. The [**replication worker**](#replication-worker) (see the [`ReplicationWorker`](../../api/javadoc/org/apache/bookkeeper/replication/ReplicationWorker.html) class) runs on each bookie and executes rereplication tasks provided by the auditor.
+
+Both of these components run as threads in the [`AutoRecoveryMain`](../../api/javadoc/org/apache/bookkeeper/replication/AutoRecoveryMain) process, which runs on each bookie in the cluster. All recovery nodes participate in leader election---using ZooKeeper---to decide which node becomes the auditor. Nodes that fail to become the auditor watch the elected auditor and run an election process again if they see that the auditor node has failed.
+
+### Auditor
+
+The auditor watches all bookies in the cluster that are registered with ZooKeeper. Bookies register with ZooKeeper at startup. If the bookie crashes or is killed, the bookie's registration in ZooKeeper disappears and the auditor is notified of the change in the list of registered bookies.
+
+When the auditor sees that a bookie has disappeared, it immediately scans the complete {% pop ledger %} list to find ledgers that have data stored on the failed bookie. Once it has a list of ledgers for that bookie, the auditor will publish a rereplication task for each ledger under the `/underreplicated/` [znode](https://zookeeper.apache.org/doc/current/zookeeperOver.html) in ZooKeeper.
+
+### Replication Worker
+
+Each replication worker watches for tasks being published by the auditor on the `/underreplicated/` znode in ZooKeeper. When a new task appears, the replication worker will try to get a lock on it. If it cannot acquire the lock, it will try the next entry. The locks are implemented using ZooKeeper ephemeral znodes.
+
+The replication worker will scan through the rereplication task's ledger for segments of which its local bookie is not a member. When it finds segments matching this criterion, it will replicate the entries of that segment to the local bookie. If, after this process, the ledger is fully replicated, the ledger's entry under `/underreplicated/` is deleted and the lock is released. If there is a problem replicating, or there are still segments in the ledger which are underreplicated (due to the local bookie already being part of the ensemble for them), the lock is released.
+
+If the replication worker finds a segment which needs rereplication, but does not have a defined endpoint (i.e. the final segment of a ledger currently being written to), it will wait for a grace period before attempting rereplication. If the segment needing rereplication still does not have a defined endpoint, the ledger is fenced and rereplication then takes place.
+
+This avoids the situation in which a client is writing to a ledger and one of the bookies goes down, but the client has not written an entry to that bookie before rereplication takes place. The client could continue writing to the old segment, even though the ensemble for the segment had changed. This could lead to data loss. Fencing prevents this scenario from happening. In the normal case, the client will try to write to the failed bookie within the grace period, and will have started a new ledger segment before rereplication takes place.
+
+You can configure this grace period using the [`openLedgerRereplicationGracePeriod`](../../reference/config#openLedgerRereplicationGracePeriod) parameter.
+
+### The rereplication process
+
+The ledger rereplication process happens in these steps:
+
+1. The client goes through all ledger segments in the ledger, selecting those that contain the failed bookie.
+1. A recovery process is initiated for each ledger segment in this list.
+   1. The client selects a bookie to which all entries in the ledger segment will be replicated; in the case of AutoRecovery, this will always be the local bookie.
+   1. The client reads entries that belong to the ledger segment from other bookies in the ensemble and writes them to the selected bookie.
+   1. Once all entries have been replicated, the ZooKeeper metadata for the segment is updated to reflect the new ensemble.
+   1. The segment is marked as fully replicated in the recovery tool.
+1. Once all ledger segments are marked as fully replicated, the ledger is marked as fully replicated.
+  
diff --git a/site/docs/latest/admin/bookies.md b/site/docs/4.5.0/admin/bookies.md
similarity index 99%
copy from site/docs/latest/admin/bookies.md
copy to site/docs/4.5.0/admin/bookies.md
index d9c6959..f9b1dcf 100644
--- a/site/docs/latest/admin/bookies.md
+++ b/site/docs/4.5.0/admin/bookies.md
@@ -177,4 +177,4 @@ If the change was the result of an accidental configuration change, the change c
      192.168.1.10:3181
    ```
 
-   See the [AutoRecovery](../autorecovery) documentation for more info on the re-replication process.
\ No newline at end of file
+   See the [AutoRecovery](../autorecovery) documentation for more info on the re-replication process.
diff --git a/site/docs/4.5.0/admin/geo-replication.md b/site/docs/4.5.0/admin/geo-replication.md
new file mode 100644
index 0000000..38b9723
--- /dev/null
+++ b/site/docs/4.5.0/admin/geo-replication.md
@@ -0,0 +1,22 @@
+---
+title: Geo-replication
+subtitle: Replicate data across BookKeeper clusters
+---
+
+*Geo-replication* is the replication of data across BookKeeper clusters. In order to enable geo-replication for a group of BookKeeper clusters,
+
+## Global ZooKeeper
+
+Setting up a global ZooKeeper quorum is a lot like setting up a cluster-specific quorum. The crucial difference is that
+
+### Geo-replication across three clusters
+
+Let's say that you want to set up geo-replication across clusters in regions A, B, and C. First, the BookKeeper clusters in each region must have their own local (cluster-specific) ZooKeeper quorum.
+
+> BookKeeper clusters use global ZooKeeper only for metadata storage. Traffic from bookies to ZooKeeper should thus be fairly light in general.
+
+The crucial difference between using cluster-specific ZooKeeper and global ZooKeeper is that you need to point all {% pop bookies %} to the global ZooKeeper setup.
+
+## Region-aware placement policy
+
+## Autorecovery
diff --git a/site/docs/latest/admin/metrics.md b/site/docs/4.5.0/admin/metrics.md
similarity index 99%
copy from site/docs/latest/admin/metrics.md
copy to site/docs/4.5.0/admin/metrics.md
index e2595d6..635135f 100644
--- a/site/docs/latest/admin/metrics.md
+++ b/site/docs/4.5.0/admin/metrics.md
@@ -38,4 +38,4 @@ To enable stats:
 <!-- ## Enabling stats in the bookkeeper library
 
 TODO
--->
\ No newline at end of file
+-->
diff --git a/site/docs/4.5.0/admin/perf.md b/site/docs/4.5.0/admin/perf.md
new file mode 100644
index 0000000..8295632
--- /dev/null
+++ b/site/docs/4.5.0/admin/perf.md
@@ -0,0 +1,3 @@
+---
+title: Performance tuning
+---
diff --git a/site/docs/4.5.0/admin/placement.md b/site/docs/4.5.0/admin/placement.md
new file mode 100644
index 0000000..ded456e
--- /dev/null
+++ b/site/docs/4.5.0/admin/placement.md
@@ -0,0 +1,3 @@
+---
+title: Customized placement policies
+---
diff --git a/site/docs/4.5.0/admin/upgrade.md b/site/docs/4.5.0/admin/upgrade.md
new file mode 100644
index 0000000..456df99
--- /dev/null
+++ b/site/docs/4.5.0/admin/upgrade.md
@@ -0,0 +1,73 @@
+---
+title: Upgrade
+---
+
+> If you have questions about upgrades (or need help), please feel free to reach out to us by [mailing list]({{ site.baseurl }}community/mailing-lists) or [Slack Channel]({{ site.baseurl }}community/slack).
+
+## Overview
+
+Consider the guidelines below when preparing to upgrade.
+
+- Always back up all your configuration files before upgrading.
+- Read through the documentation and draft an upgrade plan that matches your specific requirements and environment before starting the upgrade process.
+    Put differently, don't start working through the guide on a live cluster. Read the guide entirely, make a plan, then execute the plan.
+- Pay careful attention to the order in which components are upgraded. In general, you need to upgrade bookies first and then upgrade your clients.
+- If AutoRecovery is running alongside the bookies, you need to pay attention to the upgrade sequence.
+- Read the release notes carefully for each release. They contain not only information about noteworthy features, but also changes to configurations
+    that may impact your upgrade.
+- Always upgrade one bookie or a small set of bookies first to canary the new version before upgrading all bookies in your cluster.
+
+## Canary
+
+It is wise to canary an upgraded version on one bookie or a small set of bookies before upgrading all bookies in your live cluster.
+
+You can follow the steps below to canary an upgraded version:
+
+1. Stop a Bookie.
+2. Upgrade the binary and configuration.
+3. Start the Bookie in `ReadOnly` mode. This can be used to verify whether the new version of the Bookie can handle the read workload well.
+4. Once the Bookie has been running in `ReadOnly` mode successfully for a while, restart the Bookie in `Write/Read` mode.
+5. After step 4, the Bookie will serve both write and read traffic.
+
+### Rollback Canaries
+
+If problems occur while canarying an upgraded version, you can simply take down the problematic Bookie node. The remaining bookies running the old version
+will repair the problematic bookie node via AutoRecovery. Nothing else needs to be done.
+
+## Upgrade Steps
+
+Once you have determined that a version is safe to upgrade on a few nodes in your cluster, you can perform the following steps to upgrade all bookies in your cluster.
+
+1. Determine whether AutoRecovery is running alongside the bookies. If it is, check whether the clients (either new clients with the new binary or old clients with new configurations)
+are allowed to talk to old bookies; if clients are not allowed to talk to old bookies, please [disable autorecovery](../../reference/cli/#autorecovery-1) during the upgrade.
+2. Decide on performing a rolling upgrade or a downtime upgrade.
+3. Upgrade all Bookies (see below).
+4. If autorecovery was disabled during upgrade, [enable autorecovery](../../reference/cli/#autorecovery-1).
+5. After all bookies are upgraded, build applications that use the BookKeeper client against the new BookKeeper libraries and deploy the new versions.
+
+### Upgrade Bookies
+
+In a rolling upgrade scenario, upgrade one Bookie at a time. In a downtime upgrade scenario, take the entire cluster down, upgrade each Bookie, then start the cluster.
+
+For each Bookie:
+
+1. Stop the bookie.
+2. Upgrade the software (either the new binary or the new configuration).
+3. Start the bookie.
+
+## Upgrade Guides
+
+The section above describes the general upgrade method for Apache BookKeeper. The sections below cover the details for individual versions.
+
+### 4.4.x to 4.5.x upgrade
+
+There are no protocol-related backward-compatibility changes in 4.5.0, so you can follow the general upgrade sequence to upgrade from 4.4.x to 4.5.x.
+However, there are a few things you might want to know.
+
+1. 4.5.x upgrades Netty from 3.x to 4.x. The memory usage pattern might change a bit, since Netty 4 uses more direct memory. Please pay attention to your memory usage
+    and adjust the JVM settings accordingly.
+2. `multi journals` is a feature that cannot be rolled back. If you configure a bookie to use multiple journals on 4.5.x, you cannot roll the bookie back to 4.4.x. You have
+    to take the bookie out and recover it if you want to roll back to 4.4.x.
+
+If you are planning to upgrade a non-secured cluster to a secured cluster enabling security features in 4.5.0, please read [BookKeeper Security](../../security/overview) for more details.
+
diff --git a/site/docs/4.5.0/api/distributedlog-api.md b/site/docs/4.5.0/api/distributedlog-api.md
new file mode 100644
index 0000000..7bb4029
--- /dev/null
+++ b/site/docs/4.5.0/api/distributedlog-api.md
@@ -0,0 +1,395 @@
+---
+title: DistributedLog
+subtitle: A higher-level API for managing BookKeeper entries
+---
+
+> DistributedLog began its life as a separate project under the Apache Foundation. It was merged into BookKeeper in 2017.
+
+The DistributedLog API is an easy-to-use interface for managing BookKeeper entries that enables you to use BookKeeper without needing to interact with [ledgers](../ledger-api) directly.
+
+DistributedLog (DL) maintains sequences of records in categories called *logs* (aka *log streams*). *Writers* append records to DL logs, while *readers* fetch and process those records.
+
+## Architecture
+
+The diagram below illustrates how the DistributedLog API works with BookKeeper:
+
+![DistributedLog API](../../../img/distributedlog.png)
+
+## Logs
+
+A *log* in DistributedLog is an ordered, immutable sequence of *log records*.
+
+The diagram below illustrates the anatomy of a log stream:
+
+![DistributedLog log](../../../img/logs.png)
+
+### Log records
+
+Each log record is a sequence of bytes. Applications are responsible for serializing and deserializing byte sequences stored in log records.
+
+Log records are written sequentially into a *log stream* and assigned a unique sequence number called a DLSN (<strong>D</strong>istributed<strong>L</strong>og <strong>S</strong>equence <strong>N</strong>umber).
+
+In addition to a DLSN, applications can assign their own sequence number when constructing log records. Application-defined sequence numbers are known as *TransactionIDs* (or *txid*). Either a DLSN or a TransactionID can be used for positioning readers to start reading from a specific log record.
+
+### Log segments
+
+Each log is broken down into *log segments* that contain subsets of records. Log segments are distributed and stored in BookKeeper. DistributedLog rolls the log segments based on the configured *rolling policy*, which can be either
+
+* a configurable period of time (such as every 2 hours), or
+* a configurable maximum size (such as every 128 MB).
+
+The data in logs is divided up into equally sized log segments and distributed evenly across {% pop bookies %}. This allows logs to scale beyond a size that would fit on a single server and spreads read traffic across the cluster.
+
+### Namespaces
+
+Log streams that belong to the same organization are typically categorized and managed under a *namespace*. DistributedLog namespaces essentially enable applications to locate log streams. Applications can perform the following actions under a namespace:
+
+* create streams
+* delete streams
+* truncate streams to a given sequence number (either a DLSN or a TransactionID)
+
+## Writers
+
+Through the DistributedLog API, writers write data into logs of their choice. All records are appended into logs in order. The sequencing is performed by the writer, which means that there is only one active writer for a log at any given time.
+
+DistributedLog uses a *fencing* mechanism in the log segment store to guarantee correctness when two writers attempt to write to the same log during a network partition.
+
+### Write Proxy
+
+Log writers are served and managed in a service tier called the *Write Proxy* (see the diagram [above](#architecture)). The Write Proxy is used for accepting writes from a large number of clients.
+
+## Readers
+
+DistributedLog readers read records from logs of their choice, starting with a provided position. The provided position can be either a DLSN or a TransactionID.
+
+Readers read records from logs in strict order. Different readers can read records from different positions in the same log.
+
+Unlike other pub-sub systems, DistributedLog doesn't record or manage readers' positions. This means that tracking is the responsibility of applications, as different applications may have different requirements for tracking and coordinating positions. This is hard to get right with a single approach. Distributed databases, for example, might store reader positions along with SSTables, so they would resume applying transactions from the positions stored in SSTables. Tracking reader positions is therefore left to the applications themselves.
+
+### Read Proxy
+
+Log records can be cached in a service tier called the *Read Proxy* to serve a large number of readers. See the diagram [above](#architecture). The Read Proxy is the analogue of the [Write Proxy](#write-proxy).
+
+## Guarantees
+
+The DistributedLog API for BookKeeper provides a number of guarantees for applications:
+
+* Records written by a [writer](#writers) to a [log](#logs) are appended in the order in which they are written. If a record **R1** is written by the same writer as a record **R2**, **R1** will have a smaller sequence number than **R2**.
+* [Readers](#readers) see [records](#log-records) in the same order in which they are [written](#writers) to the log.
+* All records are persisted on disk by BookKeeper before acknowledgements, which guarantees durability.
+* For a log with a replication factor of N, DistributedLog tolerates up to N-1 server failures without losing any records.
+
+## API
+
+Documentation for the DistributedLog API can be found [here](https://distributedlog.incubator.apache.org/docs/latest/user_guide/api/core).
+
+> At a later date, the DistributedLog API docs will be added here.
+
+<!--
+
+The DistributedLog core library is written in Java and interacts with namespaces and logs directly.
+
+### Installation
+
+The BookKeeper Java client library is available via [Maven Central](http://search.maven.org/) and can be installed using [Maven](#maven), [Gradle](#gradle), and other build tools.
+
+### Maven
+
+If you're using [Maven](https://maven.apache.org/), add this to your [`pom.xml`](https://maven.apache.org/guides/introduction/introduction-to-the-pom.html) build configuration file:
+
+```xml
+<-- in your <properties> block ->
+<bookkeeper.version>{{ site.distributedlog_version }}</bookkeeper.version>
+
+<-- in your <dependencies> block ->
+<dependency>
+  <groupId>org.apache.bookkeeper</groupId>
+  <artifactId>bookkeeper-server</artifactId>
+  <version>${bookkeeper.version}</version>
+</dependency>
+```
+
+### Gradle
+
+If you're using [Gradle](https://gradle.org/), add this to your [`build.gradle`](https://spring.io/guides/gs/gradle/) build configuration file:
+
+```groovy
+dependencies {
+    compile group: 'org.apache.bookkeeper', name: 'bookkeeper-server', version: '4.5.0'
+}
+
+// Alternatively:
+dependencies {
+    compile 'org.apache.bookkeeper:bookkeeper-server:4.5.0'
+}
+```
+
+### Namespace API
+
+A DL [namespace](#namespace) is a collection of [log streams](#log-streams). When using the DistributedLog API with BookKeeper, you need to provide your Java client with a namespace URI. That URI consists of three elements:
+
+1. The `distributedlog-bk` scheme
+1. A connection string for your BookKeeper cluster. You have three options for the connection string:
+   * An entire ZooKeeper connection string, for example `zk1:2181,zk2:2181,zk3:2181`
+   * A host and port for one node in your ZooKeeper cluster, for example `zk1:2181`. In general, it's better to provide a full ZooKeeper connection string.
+   * If your ZooKeeper cluster can be discovered via DNS, you can provide the DNS name, for example `my-zookeeper-cluster.com`.
+1. A path that points to the location where logs are stored. This could be a ZooKeeper [znode](https://zookeeper.apache.org/doc/current/zookeeperOver.html).
+  
+This is the general structure of a namespace URI:
+
+```shell
+distributedlog-bk://{connection-string}/{path}
+```
+
+Here are some example URIs:
+
+```shell
+distributedlog-bk://zk1:2181,zk2:2181,zk3:2181/my-namespace # Full ZooKeeper connection string
+distributedlog-bk://localhost:2181/my-namespace             # Single ZooKeeper node
+distributedlog-bk://my-zookeeper-cluster.com/my-namespace   # DNS name for ZooKeeper
+```
+
+#### Creating namespaces
+
+In order to create namespaces, you need to use the command-line tool.
+
+```shell
+$ 
+```
+
+#### Using namespaces
+
+Once you have a namespace URI, you can build a namespace instance, which will be used for operating streams. Use the `DistributedLogNamespaceBuilder` to build a `DistributedLogNamespace` object, passing in a `DistributedLogConfiguration`, a URI, and optionally a stats logger and a feature provider.
+
+```java
+DistributedLogConfiguration conf = new DistributedLogConfiguration();
+URI uri = URI.create("distributedlog-bk://localhost:2181/my-namespace");
+DistributedLogNamespaceBuilder builder = DistributedLogNamespaceBuilder.newBuilder();
+DistributedLogNamespace namespace = builder
+        .conf(conf)           // Configuration for the namespace
+        .uri(uri)             // URI for the namespace
+        .statsLogger(...)     // Stats logger for statistics
+        .featureProvider(...) // Feature provider for controlling features
+        .build();
+```
+
+### Log API
+
+#### Creating logs
+
+You can create a log by calling the `createLog` method on a `DistributedLogNamespace` object, passing in a name for the log. This creates the log under the namespace but does *not* return a handle for operating the log.
+
+```java
+DistributedLogNamespace namespace = /* Create namespace */;
+try {
+    namespace.createLog("test-log");
+} catch (IOException e) {
+    // Handle the log creation exception
+}
+```
+
+#### Opening logs
+
+A `DistributedLogManager` handle will be returned when opening a log using the `openLog` function, which takes the name of the log. This handle can be used for writing records to or reading records from the log.
+
+> If the log doesn't exist and `createStreamIfNotExists` is set to `true` in the configuration, the log will be created automatically when writing the first record.
+
+```java
+DistributedLogConfiguration conf = new DistributedLogConfiguration();
+conf.setCreateStreamIfNotExists(true);
+DistributedLogNamespace namespace = DistributedLogNamespaceBuilder.newBuilder()
+        .conf(conf)
+        // Other builder attributes
+        .build();
+DistributedLogManager logManager = namespace.openLog("test-log");
+```
+
+Sometimes, applications may open a log with a different configuration from the enclosing namespace. This can be done using the same `openLog` method:
+
+```java
+// Namespace configuration
+DistributedLogConfiguration namespaceConf = new DistributedLogConfiguration();
+namespaceConf.setRetentionPeriodHours(24);
+URI uri = URI.create("distributedlog-bk://localhost:2181/my-namespace");
+DistributedLogNamespace namespace = DistributedLogNamespaceBuilder.newBuilder()
+        .conf(namespaceConf)
+        .uri(uri)
+        // Other builder attributes
+        .build();
+// Log-specific configuration
+DistributedLogConfiguration logConf = new DistributedLogConfiguration();
+logConf.setRetentionPeriodHours(12);
+DistributedLogManager logManager = namespace.openLog(
+        "test-log",
+        Optional.of(logConf),
+        Optional.absent()
+);
+```
+
+#### Deleting logs
+
+The `DistributedLogNamespace` class provides a `deleteLog` function that can be used to delete logs. When you delete a log, the client library will attempt to acquire a lock on the log before deletion. If the log is being written to by an active writer, deletion will fail (as the other writer currently holds the lock).
+
+```java
+try {
+    namespace.deleteLog("test-log");
+} catch (IOException e) {
+    // Handle exception
+}
+```
+
+#### Checking for the existence of a log
+
+Applications can check whether a log exists by calling the `logExists` function.
+
+```java
+if (namespace.logExists("test-log")) {
+  // Perform some action when the log exists
+} else {
+  // Perform some action when the log doesn't exist
+}
+```
+
+#### Listing logs
+
+Applications can retrieve a list of all logs under a namespace using the `getLogs` function.
+
+```java
+Iterator<String> logs = namespace.getLogs();
+while (logs.hasNext()) {
+  String logName = logs.next();
+  // Do something with the log name, such as print
+}
+```
+
+### Writer API
+
+You can write to DistributedLog logs either [synchronously](#writing-to-logs-synchronously) using the `LogWriter` class or [asynchronously](#writing-to-logs-asynchronously) using the `AsyncLogWriter` class.
+
+#### Immediate flush
+
+By default, records are buffered rather than being written immediately. You can disable this behavior and make DL writers write ("flush") entries immediately by adding the following to your configuration object:
+
+```java
+conf.setImmediateFlushEnabled(true);
+conf.setOutputBufferSize(0);
+conf.setPeriodicFlushFrequencyMilliSeconds(0);
+```
+
+#### Immediate locking
+
+By default, DL writers can write to a log stream while other writers are also writing to that stream. You can override this behavior and prevent other writers from writing to the stream by adding this to your configuration:
+
+```java
+conf.setLockTimeout(DistributedLogConstants.LOCK_IMMEDIATE);
+```
+
+#### Writing to logs synchronously
+
+To write records to a log synchronously, you need to instantiate a `LogWriter` object using a `DistributedLogManager`. Here's an example:
+
+```java
+DistributedLogNamespace namespace = /* Some namespace object */;
+DistributedLogManager logManager = namespace.openLog("test-log");
+LogWriter writer = logManager.startLogSegmentNonPartitioned();
+```
+
+> The DistributedLog library enforces single-writer semantics by deploying a ZooKeeper locking mechanism. If there is already an active writer, subsequent calls to `startLogSegmentNonPartitioned` will fail with an `OwnershipAcquireFailedException`.
+
+Log records represent the data written to a log stream. Each log record is associated with an application-defined [TransactionID](#log-records). This ID must be non-decreasing, or else writing a record will be rejected with a `TransactionIdOutOfOrderException`. The application is allowed to bypass the TransactionID sanity checking by setting `maxIdSanityCheck` to `false` in the configuration. System time and atomic numbers are good candidates for TransactionIDs.
+
+```java
+long txid = 1L;
+byte[] data = "some byte array".getBytes();
+LogRecord record = new LogRecord(txid, data);
+```
+
+Your application can write either a single record, using the `write` method, or many records, using the `writeBulk` method.
+
+```java
+// Single record
+writer.write(record);
+
+// Bulk write
+List<LogRecord> records = Lists.newArrayList();
+records.add(record);
+writer.writeBulk(records);
+```
+
+The write calls return immediately after the records are added into the output buffer of the writer. This means that the data isn't guaranteed to be durable until the writer explicitly calls `setReadyToFlush` and `flushAndSync`. Those two calls will first transmit buffered data to the backend, wait for transmit acknowledgements (acks), and commit the written data to make it visible to readers.
+
+```java
+// Flush the records
+writer.setReadyToFlush();
+
+// Commit the records to make them visible to readers
+writer.flushAndSync();
+```
+
+Log streams in DistributedLog are endless streams *unless they are sealed*. Endless in this case means that writers can keep writing records to those streams, readers can keep reading from the end of those streams, and the process never stops. Your application can seal a log stream using the `markEndOfStream` method:
+
+```java
+writer.markEndOfStream();
+```
+
+#### Writing to logs asynchronously
+
+In order to write to DistributedLog logs asynchronously, you need to create an `AsyncLogWriter` instead of a `LogWriter`.
+
+```java
+DistributedLogNamespace namespace = /* Some namespace object */;
+DistributedLogManager logManager = namespace.openLog("test-async-log");
+AsyncLogWriter asyncWriter = logManager.startAsyncLogSegmentNonPartitioned();
+```
+
+All writes to an `AsyncLogWriter` are non-partitioned. The futures representing write results are only satisfied when the data is durably persisted in the stream. A [DLSN](#log-records) will be returned for each write, which is used to represent the position (aka offset) of the record in the log stream. All the records added in order are guaranteed to be persisted in order. Here's an example of an async writer that gathers a list of futures representing multiple async write results:
+
+```java
+List<Future<DLSN>> addFutures = Lists.newArrayList();
+for (long txid = 1L; txid <= 100L; txid++) {
+    byte[] data = /* some byte array */;
+    LogRecord record = new LogRecord(txid, data);
+    addFutures.add(asyncWriter.write(record));
+}
+List<DLSN> addResults = Await.result(Future.collect(addFutures));
+```
+
+The `AsyncLogWriter` also provides a method for truncating a stream to a given DLSN. This is useful for building replicated state machines that need explicit controls on when the data can be deleted.
+
+```java
+DLSN truncateDLSN = /* some DLSN */;
+Future<DLSN> truncateFuture = asyncWriter.truncate(truncateDLSN);
+
+// Wait for truncation result
+Await.result(truncateFuture);
+```
+
+##### Register a listener
+
+Instead of returning a future from write operations, you can also set up a listener that performs assigned actions upon success or failure of the write. Here's an example:
+
+```java
+asyncWriter.addEventListener(new FutureEventListener<DLSN>() {
+    @Override
+    public void onFailure(Throwable cause) {
+        // Execute if the attempt fails
+    }
+
+    @Override
+    public void onSuccess(DLSN value) {
+        // Execute if the attempt succeeds
+    }
+});
+```
+
+##### Close the writer
+
+You can close an async writer when you're finished with it like this:
+
+```java
+FutureUtils.result(asyncWriter.asyncClose());
+```
+
+<!--
+TODO: Reader API
+-->
diff --git a/site/docs/4.5.0/api/ledger-adv-api.md b/site/docs/4.5.0/api/ledger-adv-api.md
new file mode 100644
index 0000000..f46950d
--- /dev/null
+++ b/site/docs/4.5.0/api/ledger-adv-api.md
@@ -0,0 +1,82 @@
+---
+title: The Advanced Ledger API
+---
+
+In release `4.5.0`, Apache BookKeeper introduces a few advanced APIs for advanced usage.
+This section covers these advanced APIs.
+
+> Before learning the advanced APIs, please read the [Ledger API](../ledger-api) first.
+
+## LedgerHandleAdv
+
+[`LedgerHandleAdv`](../javadoc/org/apache/bookkeeper/client/LedgerHandleAdv) is an advanced extension of [`LedgerHandle`](../javadoc/org/apache/bookkeeper/client/LedgerHandle).
+It allows users to pass in an `entryId` when adding an entry.
+
+### Creating advanced ledgers
+
+Here's an example:
+
+```java
+byte[] passwd = "some-passwd".getBytes();
+LedgerHandleAdv handle = bkClient.createLedgerAdv(
+    3, 3, 2, // replica settings
+    DigestType.CRC32,
+    passwd);
+```
+
+You can also create advanced ledgers asynchronously.
+
+```java
+class LedgerCreationCallback implements AsyncCallback.CreateCallback {
+    public void createComplete(int returnCode, LedgerHandle handle, Object ctx) {
+        System.out.println("Ledger successfully created");
+    }
+}
+client.asyncCreateLedgerAdv(
+        3, // ensemble size
+        3, // write quorum size
+        2, // ack quorum size
+        BookKeeper.DigestType.CRC32,
+        password,
+        new LedgerCreationCallback(),
+        "some context"
+);
+```
+
+Besides the APIs above, BookKeeper allows users to provide a `ledger-id` when creating advanced ledgers.
+
+```java
+long ledgerId = ...; // the ledger id is generated externally.
+
+byte[] passwd = "some-passwd".getBytes();
+LedgerHandleAdv handle = bkClient.createLedgerAdv(
+    ledgerId, // ledger id generated externally
+    3, 3, 2, // replica settings
+    DigestType.CRC32,
+    passwd);
+```
+
+> Please note that it is the user's responsibility to provide a unique ledger id when using the API above.
+> If a ledger already exists when users try to create an advanced ledger with the same ledger id,
+> a [LedgerExistsException](../javadoc/org/apache/bookkeeper/client/BKException.BKLedgerExistException.html) is thrown by the bookkeeper client.
+
+### Add Entries
+
+The normal [add entries api](ledger-api/#adding-entries-to-ledgers) is disabled for advanced ledgers. Instead, when users want to add entries
+to advanced ledgers, an entry id must be passed in along with the entry data when adding an entry.
+
+```java
+long entryId = ...; // entry id generated externally
+
+ledger.addEntry(entryId, "Some entry data".getBytes());
+```
+
+A few notes when using this API:
+
+- The entry id has to be non-negative.
+- Clients may add entries out of order.
+- However, the entries are only acknowledged in a monotonic order starting from 0.
+
+### Read Entries
+
+The read entries api for advanced ledgers remains the same as for [normal ledgers](../ledger-api/#reading-entries-from-ledgers).
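+
+For illustration, here is a minimal read sketch; it mirrors the normal ledger read API, and `handle` is assumed to be the `LedgerHandleAdv` created above:
+
+```java
+Enumeration<LedgerEntry> entries = handle.readEntries(0, handle.getLastAddConfirmed());
+while (entries.hasMoreElements()) {
+    LedgerEntry entry = entries.nextElement();
+    // Process the entry payload
+    byte[] data = entry.getEntry();
+}
+```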
diff --git a/site/docs/4.5.0/api/ledger-api.md b/site/docs/4.5.0/api/ledger-api.md
new file mode 100644
index 0000000..4e1070d
--- /dev/null
+++ b/site/docs/4.5.0/api/ledger-api.md
@@ -0,0 +1,473 @@
+---
+title: The Ledger API
+---
+
+The ledger API is a lower-level API for BookKeeper that enables you to interact with {% pop ledgers %} directly.
+
+## The Java ledger API client
+
+To get started with the Java client for BookKeeper, install the `bookkeeper-server` library as a dependency in your Java application.
+
+> For a more in-depth tutorial that involves a real use case for BookKeeper, see the [Example application](../example-application) guide.
+
+## Installation
+
+The BookKeeper Java client library is available via [Maven Central](http://search.maven.org/) and can be installed using [Maven](#maven), [Gradle](#gradle), and other build tools.
+
+### Maven
+
+If you're using [Maven](https://maven.apache.org/), add this to your [`pom.xml`](https://maven.apache.org/guides/introduction/introduction-to-the-pom.html) build configuration file:
+
+```xml
+<!-- in your <properties> block -->
+<bookkeeper.version>4.5.0</bookkeeper.version>
+
+<!-- in your <dependencies> block -->
+<dependency>
+  <groupId>org.apache.bookkeeper</groupId>
+  <artifactId>bookkeeper-server</artifactId>
+  <version>${bookkeeper.version}</version>
+</dependency>
+```
+
+### Gradle
+
+If you're using [Gradle](https://gradle.org/), add this to your [`build.gradle`](https://spring.io/guides/gs/gradle/) build configuration file:
+
+```groovy
+dependencies {
+    compile group: 'org.apache.bookkeeper', name: 'bookkeeper-server', version: '4.5.0'
+}
+
+// Alternatively:
+dependencies {
+    compile 'org.apache.bookkeeper:bookkeeper-server:4.5.0'
+}
+```
+
+## Connection string
+
+When interacting with BookKeeper using the Java client, you need to provide your client with a connection string, for which you have three options:
+
+* Provide your entire ZooKeeper connection string, for example `zk1:2181,zk2:2181,zk3:2181`.
+* Provide a host and port for one node in your ZooKeeper cluster, for example `zk1:2181`. In general, it's better to provide a full connection string (in case the ZooKeeper node you attempt to connect to is down).
+* If your ZooKeeper cluster can be discovered via DNS, you can provide the DNS name, for example `my-zookeeper-cluster.com`.
+
+## Creating a new client
+
+In order to create a new [`BookKeeper`](../javadoc/org/apache/bookkeeper/client/BookKeeper) client object, you need to pass in a [connection string](#connection-string). Here is an example client object using a ZooKeeper connection string:
+
+```java
+try {
+    String connectionString = "127.0.0.1:2181"; // For a single-node, local ZooKeeper cluster
+    BookKeeper bkClient = new BookKeeper(connectionString);
+} catch (InterruptedException | IOException | KeeperException e) {
+    e.printStackTrace();
+}
+```
+
+> If you're running BookKeeper [locally](../../getting-started/run-locally), using the [`localbookie`](../../reference/cli#bookkeeper-localbookie) command, use `"127.0.0.1:2181"` for your connection string, as in the example above.
+
+There are, however, other ways that you can create a client object:
+
+* By passing in a [`ClientConfiguration`](../javadoc/org/apache/bookkeeper/conf/ClientConfiguration) object. Here's an example:
+
+  ```java
+  ClientConfiguration config = new ClientConfiguration();
+  config.setZkServers(zkConnectionString);
+  config.setAddEntryTimeout(2000);
+  BookKeeper bkClient = new BookKeeper(config);
+  ```
+
+* By specifying a `ClientConfiguration` and a [`ZooKeeper`](http://zookeeper.apache.org/doc/current/api/org/apache/zookeeper/ZooKeeper.html) client object:
+
+  ```java
+  ClientConfiguration config = new ClientConfiguration();
+  config.setAddEntryTimeout(5000);
+  ZooKeeper zkClient = new ZooKeeper(/* client args */);
+  BookKeeper bkClient = new BookKeeper(config, zkClient);
+  ```
+
+* Using the `forConfig` method:
+
+  ```java
+  BookKeeper bkClient = BookKeeper.forConfig(conf).build();
+  ```
+
+## Creating ledgers
+
+The easiest way to create a {% pop ledger %} using the Java client is via the `createLedger` method, which creates a new ledger synchronously and returns a [`LedgerHandle`](../javadoc/org/apache/bookkeeper/client/LedgerHandle). You must specify at least a [`DigestType`](../javadoc/org/apache/bookkeeper/client/BookKeeper.DigestType) and a password.
+
+Here's an example:
+
+```java
+byte[] password = "some-password".getBytes();
+LedgerHandle handle = bkClient.createLedger(BookKeeper.DigestType.MAC, password);
+```
+
+You can also create ledgers asynchronously.
+
+### Create ledgers asynchronously
+
+```java
+class LedgerCreationCallback implements AsyncCallback.CreateCallback {
+    public void createComplete(int returnCode, LedgerHandle handle, Object ctx) {
+        System.out.println("Ledger successfully created");
+    }
+}
+
+client.asyncCreateLedger(
+        3,
+        2,
+        BookKeeper.DigestType.MAC,
+        password,
+        new LedgerCreationCallback(),
+        "some context"
+);
+```
+
+## Adding entries to ledgers
+
+```java
+long entryId = ledger.addEntry("Some entry data".getBytes());
+```
+
+### Add entries asynchronously
+
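+As an illustrative sketch, entries can also be added asynchronously using `asyncAddEntry`, which takes an `AsyncCallback.AddCallback` that is invoked once the entry has been acknowledged (the `AddEntryCallback` class name below is just for illustration):
+
+```java
+class AddEntryCallback implements AsyncCallback.AddCallback {
+    public void addComplete(int returnCode, LedgerHandle handle, long entryId, Object ctx) {
+        System.out.println("Entry " + entryId + " added");
+    }
+}
+
+ledger.asyncAddEntry("Some entry data".getBytes(), new AddEntryCallback(), "some context");
+```
+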
+## Reading entries from ledgers
+
+```java
+Enumeration<LedgerEntry> entries = handle.readEntries(1, 99);
+```
+
+To read all possible entries from the ledger:
+
+```java
+Enumeration<LedgerEntry> entries =
+  handle.readEntries(0, handle.getLastAddConfirmed());
+
+while (entries.hasMoreElements()) {
+    LedgerEntry entry = entries.nextElement();
+    System.out.println("Successfully read entry " + entry.getEntryId());
+}
+```
+
+### Reading entries after the LastAddConfirmed range
+
+`readUnconfirmedEntries` allows reading beyond the LastAddConfirmed range.
+It lets the client read without checking the local value of LastAddConfirmed, so it is possible to read entries for which the writer has not yet received an acknowledgement.
+For entries within the range 0..LastAddConfirmed, BookKeeper guarantees that the writer has successfully received the acknowledgement.
+For entries outside that range, it is possible that the writer never received the acknowledgement, so the reader risks seeing entries before the writer does, which could lead to consistency issues in some cases.
+With this method you can read entries both before and after the LastAddConfirmed in a single call; the consistency expectations are as described above.
+
+```java
+Enumeration<LedgerEntry> entries =
+  handle.readUnconfirmedEntries(0, lastEntryIdExpectedToRead);
+
+while (entries.hasMoreElements()) {
+    LedgerEntry entry = entries.nextElement();
+    System.out.println("Successfully read entry " + entry.getEntryId());
+}
+```
+
+## Deleting ledgers
+
+{% pop Ledgers %} can also be deleted synchronously or asynchronously.
+
+```java
+long ledgerId = 1234;
+
+try {
+    bkClient.deleteLedger(ledgerId);
+} catch (Exception e) {
+  e.printStackTrace();
+}
+```
+
+### Delete entries asynchronously
+
+Exceptions thrown:
+
+*
+
+```java
+class DeleteEntryCallback implements AsyncCallback.DeleteCallback {
+    public void deleteComplete(int returnCode, Object ctx) {
+        System.out.println("Delete completed");
+    }
+}
+```
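+
+As a minimal sketch, the callback above can be passed to `asyncDeleteLedger`, which takes the ledger id, the callback, and an arbitrary context object:
+
+```java
+bkClient.asyncDeleteLedger(ledgerId, new DeleteEntryCallback(), "some context");
+```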
+
+## Simple example
+
+> For a more involved BookKeeper client example, see the [example application](#example-application) below.
+
+In the code sample below, a BookKeeper client:
+
+* creates a ledger
+* writes entries to the ledger
+* closes the ledger (meaning no further writes are possible)
+* re-opens the ledger for reading
+* reads all available entries
+
+```java
+// Create a client object for the local ensemble. This
+// operation throws multiple exceptions, so make sure to
+// use a try/catch block when instantiating client objects.
+BookKeeper bkc = new BookKeeper("localhost:2181");
+
+// A password for the new ledger
+byte[] ledgerPassword = /* some sequence of bytes, perhaps random */;
+
+// Create a new ledger and fetch its identifier
+LedgerHandle lh = bkc.createLedger(BookKeeper.DigestType.MAC, ledgerPassword);
+long ledgerId = lh.getId();
+
+// Create a buffer for four-byte entries
+ByteBuffer entry = ByteBuffer.allocate(4);
+
+int numberOfEntries = 100;
+
+// Add entries to the ledger, then close it
+for (int i = 0; i < numberOfEntries; i++){
+	entry.putInt(i);
+	entry.position(0);
+	lh.addEntry(entry.array());
+}
+lh.close();
+
+// Open the ledger for reading
+lh = bkc.openLedger(ledgerId, BookKeeper.DigestType.MAC, ledgerPassword);
+
+// Read all available entries
+Enumeration<LedgerEntry> entries = lh.readEntries(0, numberOfEntries - 1);
+
+while(entries.hasMoreElements()) {
+    ByteBuffer result = ByteBuffer.wrap(entries.nextElement().getEntry());
+    Integer retrEntry = result.getInt();
+
+    // Print the integer stored in each entry
+    System.out.println(String.format("Result: %s", retrEntry));
+}
+
+// Close the ledger and the client
+lh.close();
+bkc.close();
+```
+
+Running this should produce output like this:
+
+```shell
+Result: 0
+Result: 1
+Result: 2
+# etc
+```
+
+## Example application
+
+This tutorial walks you through building an example application that uses BookKeeper as the replicated log. The application uses the [BookKeeper Java client](../java-client) to interact with BookKeeper.
+
+> The code for this tutorial can be found in [this GitHub repo](https://github.com/ivankelly/bookkeeper-tutorial/). The final code for the `Dice` class can be found [here](https://github.com/ivankelly/bookkeeper-tutorial/blob/master/src/main/java/org/apache/bookkeeper/Dice.java).
+
+### Setup
+
+Before you start, you will need to have a BookKeeper cluster running locally on your machine. For installation instructions, see [Installation](../../getting-started/installation).
+
+To start up a cluster consisting of six {% pop bookies %} locally:
+
+```shell
+$ bookkeeper-server/bin/bookkeeper localbookie 6
+```
+
+You can specify a different number of bookies if you'd like.
+
+### Goal
+
+The goal of the dice application is to have
+
+* multiple instances of this application,
+* possibly running on different machines,
+* all of which display the exact same sequence of numbers.
+
+In other words, the log needs to be both durable and consistent, regardless of how many {% pop bookies %} are participating in the BookKeeper ensemble. If one of the bookies crashes or becomes unable to communicate with the other bookies in any way, it should *still* display the same sequence of numbers as the others. This tutorial will show you how to achieve this.
+
+To begin, download the base application, compile and run it.
+
+```shell
+$ git clone https://github.com/ivankelly/bookkeeper-tutorial.git
+$ cd bookkeeper-tutorial
+$ mvn package
+$ mvn exec:java -Dexec.mainClass=org.apache.bookkeeper.Dice
+```
+
+That should yield output that looks something like this:
+
+```
+[INFO] Scanning for projects...
+[INFO]                                                                         
+[INFO] ------------------------------------------------------------------------
+[INFO] Building tutorial 1.0-SNAPSHOT
+[INFO] ------------------------------------------------------------------------
+[INFO]
+[INFO] --- exec-maven-plugin:1.3.2:java (default-cli) @ tutorial ---
+[WARNING] Warning: killAfter is now deprecated. Do you need it ? Please comment on MEXEC-6.
+Value = 4
+Value = 5
+Value = 3
+```
+
+### The base application
+
+The application in this tutorial is a dice application. The `Dice` class below has a `playDice` function that generates a random number between 1 and 6 every second, prints the value of the dice roll, and runs indefinitely.
+
+```java
+public class Dice {
+    Random r = new Random();
+
+    void playDice() throws InterruptedException {
+        while (true) {
+            Thread.sleep(1000);
+            System.out.println("Value = " + (r.nextInt(6) + 1));
+        }
+    }
+}
+```
+
+When you run the `main` function of this class, a new `Dice` object will be instantiated and then run indefinitely:
+
+```java
+public class Dice {
+    // other methods
+
+    public static void main(String[] args) throws InterruptedException {
+        Dice d = new Dice();
+        d.playDice();
+    }
+}
+```
+
+### Leaders and followers (and a bit of background)
+
+To achieve this common view in multiple instances of the program, we need each instance to agree on what the next number in the sequence will be. For example, the instances must agree that 4 is the first number and 2 is the second number and 5 is the third number and so on. This is a difficult problem, especially in the case that any instance may go away at any time, and messages between the instances can be lost or reordered.
+
+Luckily, there are already algorithms to solve this. Paxos is an abstract algorithm for implementing this kind of agreement, while Zab and Raft are more practical protocols. This video gives a good overview of how these algorithms usually look. They all share a similar core.
+
+It would be possible to run Paxos to agree on each number in the sequence. However, running Paxos each time can be expensive. What Zab and Raft do is use a Paxos-like algorithm to elect a leader. The leader then decides what the sequence of events should be, putting them in a log, which the other instances can then follow to maintain the same state as the leader.
+
+BookKeeper provides the functionality for the second part of the protocol, allowing a leader to write events to a log and have multiple followers tailing the log. However, BookKeeper does not do leader election. You will need a ZooKeeper or Raft instance for that purpose.
+
+### Why not just use ZooKeeper?
+
+There are a number of reasons:
+
+1. ZooKeeper's log is only exposed through a tree-like interface. It can be hard to shoehorn your application into this.
+2. A ZooKeeper ensemble of multiple machines is limited to one log. You may want one log per resource, which will become expensive very quickly.
+3. Adding extra machines to a ZooKeeper ensemble increases neither capacity nor throughput.
+
+BookKeeper can be seen as a means of exposing ZooKeeper's replicated log to applications in a scalable fashion. ZooKeeper is still used by BookKeeper, however, to maintain consistency guarantees, though clients don't need to interact with ZooKeeper directly.
+
+### Electing a leader
+
+We'll use ZooKeeper to elect a leader. A ZooKeeper instance will have started locally when you started the localbookie application above. To verify that it's running, run the following command.
+
+```shell
+$ echo stat | nc localhost 2181
+Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT
+Clients:
+ /127.0.0.1:59343[1](queued=0,recved=40,sent=41)
+ /127.0.0.1:49354[1](queued=0,recved=11,sent=11)
+ /127.0.0.1:49361[0](queued=0,recved=1,sent=0)
+ /127.0.0.1:59344[1](queued=0,recved=38,sent=39)
+ /127.0.0.1:59345[1](queued=0,recved=38,sent=39)
+ /127.0.0.1:59346[1](queued=0,recved=38,sent=39)
+
+Latency min/avg/max: 0/0/23
+Received: 167
+Sent: 170
+Connections: 6
+Outstanding: 0
+Zxid: 0x11
+Mode: standalone
+Node count: 16
+```
+
+To interact with ZooKeeper, we'll use the Curator client rather than the stock ZooKeeper client. Getting things right with the ZooKeeper client can be tricky, and Curator removes a lot of the pointy corners for you. In fact, Curator even provides a leader election recipe, so we need to do very little work to get leader election in our application.
+
+```java
+public class Dice extends LeaderSelectorListenerAdapter implements Closeable {
+
+    final static String ZOOKEEPER_SERVER = "127.0.0.1:2181";
+    final static String ELECTION_PATH = "/dice-elect";
+
+    ...
+
+    Dice() throws InterruptedException {
+        curator = CuratorFrameworkFactory.newClient(ZOOKEEPER_SERVER,
+                2000, 10000, new ExponentialBackoffRetry(1000, 3));
+        curator.start();
+        curator.blockUntilConnected();
+
+        leaderSelector = new LeaderSelector(curator, ELECTION_PATH, this);
+        leaderSelector.autoRequeue();
+        leaderSelector.start();
+    }
+```
+
+In the constructor for `Dice`, we need to create the Curator client. We specify four things when creating the client: the location of the ZooKeeper service, the session timeout, the connection timeout, and the retry policy.
+
+The session timeout is a ZooKeeper concept. If the ZooKeeper server doesn't hear anything from the client for this amount of time, any leases which the client holds will be timed out. This is important in leader election. For leader election, the Curator client will take a lease on `ELECTION_PATH`. The first instance to take the lease will become leader and the rest will become followers. However, their claim on the lease will remain in the queue. If the first instance then goes away, due to [...]
+
+Finally, you'll have noticed that `Dice` now extends `LeaderSelectorListenerAdapter` and implements `Closeable`. `Closeable` is there to close the resources we have initialized in the constructor: the Curator client and the `leaderSelector`. `LeaderSelectorListenerAdapter` is a callback that the `leaderSelector` uses to notify the instance that it is now the leader. It is passed as the third argument to the `LeaderSelector` constructor.
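+
+A sketch of what the corresponding `close()` method might look like (the tutorial's final `Dice` class may differ slightly):
+
+```java
+    @Override
+    public void close() {
+        // Stop participating in the leader election, then shut down the Curator client.
+        leaderSelector.close();
+        curator.close();
+    }
+```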
+
+```java
+    @Override
+    public void takeLeadership(CuratorFramework client)
+            throws Exception {
+        synchronized (this) {
+            leader = true;
+            try {
+                while (true) {
+                    this.wait();
+                }
+            } catch (InterruptedException ie) {
+                Thread.currentThread().interrupt();
+                leader = false;
+            }
+        }
+    }
+```
+
+`takeLeadership()` is the callback called by `LeaderSelector` when the instance is leader. It should only return when the instance wants to give up leadership; in our case, we never do, so we wait on the current object until we're interrupted. To signal to the rest of the program that we are leader, we set a volatile boolean called `leader` to true. This is unset after we are interrupted.
+
+```java
+    void playDice() throws InterruptedException {
+        while (true) {
+            while (leader) {
+                Thread.sleep(1000);
+                System.out.println("Value = " + (r.nextInt(6) + 1)
+                                   + ", isLeader = " + leader);
+            }
+        }
+    }
+```
+
+Finally, we modify the `playDice` function to only generate random numbers when it is the leader.
+
+Run two instances of the program in two different terminals. You'll see that one becomes leader and prints numbers and the other just sits there.
+
+Now stop the leader using Control-Z. This will pause the process, but it won't kill it. You will be dropped back to the shell in that terminal. After a couple of seconds (the session timeout), you will see that the other instance has become the leader. ZooKeeper guarantees that only one instance is selected as leader at any time.
+
+Now go back to the shell that the original leader was on and wake up the process using `fg`. You'll see something like the following:
+
+```shell
+...
+...
+Value = 4, isLeader = true
+Value = 4, isLeader = true
+^Z
+[1]+  Stopped                 mvn exec:java -Dexec.mainClass=org.apache.bookkeeper.Dice
+$ fg
+mvn exec:java -Dexec.mainClass=org.apache.bookkeeper.Dice
+Value = 3, isLeader = true
+Value = 1, isLeader = false
+```
diff --git a/site/docs/4.5.0/api/overview.md b/site/docs/4.5.0/api/overview.md
new file mode 100644
index 0000000..3eb6492
--- /dev/null
+++ b/site/docs/4.5.0/api/overview.md
@@ -0,0 +1,17 @@
+---
+title: BookKeeper API
+---
+
+BookKeeper offers a few APIs that applications can use to interact with it:
+
+* The [ledger API](../ledger-api) is a lower-level API that enables you to interact with {% pop ledgers %} directly
+* The [Ledger Advanced API](../ledger-adv-api) is an advanced extension of the [ledger API](../ledger-api) that provides more flexibility to applications.
+* The [DistributedLog API](../distributedlog-api) is a higher-level API that provides convenient abstractions.
+
+## Trade-offs
+
+The `Ledger API` provides direct access to ledgers and thus enables you to use BookKeeper however you'd like.
+
+However, in most use cases, if you want a `log stream`-like abstraction, the ledger API requires you to manage things like tracking the list of ledgers,
+rolling ledgers, and data retention on your own. In such cases, we recommend using the [DistributedLog API](../distributedlog-api),
+whose semantics resemble continuous log streams from the standpoint of applications.
diff --git a/site/docs/4.5.0/deployment/dcos.md b/site/docs/4.5.0/deployment/dcos.md
new file mode 100644
index 0000000..5f0c732
--- /dev/null
+++ b/site/docs/4.5.0/deployment/dcos.md
@@ -0,0 +1,142 @@
+---
+title: Deploying BookKeeper on DC/OS
+subtitle: Get up and running easily on an Apache Mesos cluster
+logo: img/dcos-logo.png
+---
+
+[DC/OS](https://dcos.io/) (the <strong>D</strong>ata<strong>C</strong>enter <strong>O</strong>perating <strong>S</strong>ystem) is a distributed operating system used for deploying and managing applications and systems on [Apache Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).
+
+BookKeeper is available as a [DC/OS package](http://universe.dcos.io/#/package/bookkeeper/version/latest) from the [Mesosphere DC/OS Universe](http://universe.dcos.io/#/packages).
+
+## Prerequisites
+
+In order to run BookKeeper on DC/OS, you will need:
+
+* DC/OS version [1.8](https://dcos.io/docs/1.8/) or higher
+* A DC/OS cluster with at least three nodes
+* The [DC/OS CLI tool](https://dcos.io/docs/1.8/usage/cli/install/) installed
+
+Each node in your DC/OS-managed Mesos cluster must have at least:
+
+* 1 CPU
+* 1 GB of memory
+* 10 GB of total persistent disk storage
+
+## Installing BookKeeper
+
+```shell
+$ dcos package install bookkeeper --yes
+```
+
+This command will:
+
+* Install the `bookkeeper` subcommand for the `dcos` CLI tool
+* Start a single {% pop bookie %} on the Mesos cluster with the [default configuration](../../reference/config)
+
+The bookie that is automatically started up uses host-mode networking and by default exposes its service at `agent_ip:3181`.
+
+> If you run `dcos package install bookkeeper` without setting the `--yes` flag, the install will run in interactive mode. For more information on the `package install` command, see the [DC/OS docs](https://docs.mesosphere.com/latest/cli/command-reference/dcos-package/dcos-package-install/).
+
+### Services
+
+To watch BookKeeper start up, click on the **Services** tab in the DC/OS [user interface](https://docs.mesosphere.com/latest/gui/) and you should see the `bookkeeper` package listed:
+
+![DC/OS services](../../../img/dcos/services.png)
+
+### Tasks
+
+To see which tasks have started, click on the `bookkeeper` service and you'll see an interface that looks like this:
+
+![DC/OS tasks](../../../img/dcos/tasks.png)
+
+## Scaling BookKeeper
+
+Once the first {% pop bookie %} has started up, you can click on the **Scale** tab to scale up your BookKeeper ensemble by adding more bookies (or scale down the ensemble by removing bookies).
+
+![DC/OS scale](../../../img/dcos/scale.png)
+
+## ZooKeeper Exhibitor
+
+ZooKeeper contains the information for all bookies in the ensemble. When deployed on DC/OS, BookKeeper uses a ZooKeeper instance provided by DC/OS. You can access a visual UI for ZooKeeper using [Exhibitor](https://github.com/soabase/exhibitor/wiki), which is available at [http://master.dcos/exhibitor](http://master.dcos/exhibitor).
+
+![ZooKeeper Exhibitor](../../../img/dcos/exhibitor.png)
+
+You should see a listing of IP/host information for all bookies under the `messaging/bookkeeper/ledgers/available` node.
+
+## Client connections
+
+To connect to bookies running on DC/OS using clients running within your Mesos cluster, you need to specify the ZooKeeper connection string for DC/OS's ZooKeeper cluster:
+
+```
+master.mesos:2181
+```
+
+This is the *only* ZooKeeper host/port you need to include in your connection string. Here's an example using the [Java client](../../api/ledger-api#the-java-ledger-api-client):
+
+```java
+BookKeeper bkClient = new BookKeeper("master.mesos:2181");
+```
+
+If you're connecting using a client running outside your Mesos cluster, you need to supply the public-facing connection string for your DC/OS ZooKeeper cluster.
+
+## Configuring BookKeeper
+
+By default, the `bookkeeper` package will start up a BookKeeper ensemble consisting of one {% pop bookie %} with one CPU, 1 GB of memory, and a 70 MB persistent volume.
+
+You can supply a non-default configuration when installing the package using a JSON file. Here's an example command:
+
+```shell
+$ dcos package install bookkeeper \
+  --options=/path/to/config.json
+```
+
+You can then fetch the current configuration for BookKeeper at any time using the `package describe` command:
+
+```shell
+$ dcos package describe bookkeeper \
+  --config
+```
+
+### Available parameters
+
+> Not all [configurable parameters](../../reference/config) for BookKeeper are available for BookKeeper on DC/OS. Only the parameters shown in the table below are available.
+
+Param | Type | Description | Default
+:-----|:-----|:------------|:-------
+`name` | String | The name of the DC/OS service. | `bookkeeper`
+`cpus` | Integer | The number of CPU shares to allocate to each {% pop bookie %}. The minimum is 1. | `1`
+`instances` | Integer | The number of {% pop bookies %} to run. The minimum is 1. | `1`
+`mem` | Number | The memory, in MB, to allocate to each BookKeeper task | `1024.0` (1 GB)
+`volume_size` | Number | The persistent volume size, in MB | `70`
+`zk_client` | String | The connection string for the ZooKeeper client instance | `master.mesos:2181`
+`service_port` | Integer | The BookKeeper export service port, using `PORT0` in Marathon | `3181`
+
+### Example JSON configuration
+
+Here's an example JSON configuration object for BookKeeper on DC/OS:
+
+```json
+{
+  "instances": 5,
+  "cpus": 3,
+  "mem": 2048.0,
+  "volume_size": 250
+}
+```
+
+If that configuration were stored in a file called `bk-config.json`, you could apply that configuration when installing the BookKeeper package using this command:
+
+```shell
+$ dcos package install bookkeeper \
+  --options=./bk-config.json
+```
+
+## Uninstalling BookKeeper
+
+You can shut down and uninstall the `bookkeeper` package from DC/OS at any time using the `package uninstall` command:
+
+```shell
+$ dcos package uninstall bookkeeper
+Uninstalled package [bookkeeper] version [4.5.0]
+Thank you for using bookkeeper.
+```
diff --git a/site/docs/4.5.0/deployment/kubernetes.md b/site/docs/4.5.0/deployment/kubernetes.md
new file mode 100644
index 0000000..f651721
--- /dev/null
+++ b/site/docs/4.5.0/deployment/kubernetes.md
@@ -0,0 +1,4 @@
+---
+title: Deploying BookKeeper on Kubernetes
+logo: img/kubernetes-logo.png
+---
diff --git a/site/docs/latest/deployment/manual.md b/site/docs/4.5.0/deployment/manual.md
similarity index 99%
copy from site/docs/latest/deployment/manual.md
copy to site/docs/4.5.0/deployment/manual.md
index 654595f..daafd55 100644
--- a/site/docs/latest/deployment/manual.md
+++ b/site/docs/4.5.0/deployment/manual.md
@@ -53,4 +53,4 @@ Once cluster metadata formatting has been completed, your BookKeeper cluster is
 ## AutoRecovery
 
 [this guide](../../admin/autorecovery)
--->
\ No newline at end of file
+-->
diff --git a/site/docs/4.5.0/development/codebase.md b/site/docs/4.5.0/development/codebase.md
new file mode 100644
index 0000000..9a83073
--- /dev/null
+++ b/site/docs/4.5.0/development/codebase.md
@@ -0,0 +1,3 @@
+---
+title: The BookKeeper codebase
+---
diff --git a/site/docs/4.5.0/development/protocol.md b/site/docs/4.5.0/development/protocol.md
new file mode 100644
index 0000000..6d17aa0
--- /dev/null
+++ b/site/docs/4.5.0/development/protocol.md
@@ -0,0 +1,148 @@
+---
+title: The BookKeeper protocol
+---
+
+BookKeeper uses a special replication protocol for guaranteeing persistent storage of entries in an ensemble of bookies.
+
+> This document assumes that you have some knowledge of leader election and log replication and how these can be used in a distributed system. If not, we recommend reading the [example application](../../api/ledger-api#example-application) documentation first.
+
+## Ledgers
+
+{% pop Ledgers %} are the basic building block of BookKeeper and the level at which BookKeeper makes its persistent storage guarantees. A replicated log consists of an ordered list of ledgers. See [Ledgers to logs](#ledgers-to-logs) for info on building a replicated log from ledgers.
+
+Ledgers are composed of metadata and {% pop entries %}. The metadata is stored in ZooKeeper, which provides a *compare-and-swap* (CAS) operation. Entries are stored on storage nodes known as {% pop bookies %}.
+
+A ledger has a single writer and multiple readers (SWMR).
+
+### Ledger metadata
+
+A ledger's metadata contains the following:
+
+Parameter | Name | Meaning
+:---------|:-----|:-------
+Identifier | | A 64-bit integer, unique within the system
+Ensemble size | **E** | The number of nodes the ledger is stored on
+Write quorum size | **Q<sub>w</sub>** | The number of nodes each entry is written to. In effect, the max replication for the entry.
+Ack quorum size | **Q<sub>a</sub>** | The number of nodes an entry must be acknowledged on. In effect, the minimum replication for the entry.
+Current state | | The current status of the ledger. One of `OPEN`, `CLOSED`, or `IN_RECOVERY`.
+Last entry | | The last entry in the ledger, or `NULL` if the current state is not `CLOSED`.
+
+In addition, each ledger's metadata consists of one or more *fragments*. Each fragment is described by
+
+* the first entry ID of the fragment and
+* a list of bookies storing that fragment.
+
+When creating a ledger, the following invariant must hold:
+
+**E >= Q<sub>w</sub> >= Q<sub>a</sub>**
+
+Thus, the ensemble size (**E**) must be no smaller than the write quorum size (**Q<sub>w</sub>**), which must in turn be no smaller than the ack quorum size (**Q<sub>a</sub>**). If that condition does not hold, the ledger creation operation will fail.
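+
+For example, assuming a `BookKeeper` client instance `bkc` and a placeholder password, a ledger with **E** = 3, **Q<sub>w</sub>** = 3, and **Q<sub>a</sub>** = 2 could be created like this (a sketch using the Java client):
+
+```java
+// ensemble size = 3, write quorum size = 3, ack quorum size = 2;
+// E >= Qw >= Qa holds, so creation succeeds (given at least 3 available bookies)
+LedgerHandle lh = bkc.createLedger(3, 3, 2,
+    BookKeeper.DigestType.MAC, "some-password".getBytes());
+```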
+
+### Ensembles
+
+When a ledger is created, **E** bookies are chosen for the entries of that ledger. The bookies are the initial ensemble of the ledger. A ledger can have multiple ensembles, but an entry has only one ensemble. Changes in the ensemble involve a new fragment being added to the ledger.
+
+Take the following example. In this ledger, with an ensemble size of 3, there are two fragments and thus two ensembles, one starting at entry 0, the second at entry 12. The second ensemble differs from the first only by its first element. This could be because bookie1 has failed and therefore had to be replaced.
+
+First entry | Bookies
+:-----------|:-------
+0 | B1, B2, B3
+12 | B4, B2, B3
+
+### Write quorums
+
+Each entry in the log is written to **Q<sub>w</sub>** nodes. This is considered the write quorum for that entry. The write quorum is the subsequence of the ensemble of length **Q<sub>w</sub>**, starting at the bookie at index (entryid % **E**).
+
+For example, in a ledger with **E** = 4, **Q<sub>w</sub>** = 3, and **Q<sub>a</sub>** = 2, with an ensemble consisting of B1, B2, B3, and B4, the write quorums for the first 6 entries will be:
+
+Entry | Write quorum
+:-----|:------------
+0 | B1, B2, B3
+1 | B2, B3, B4
+2 | B3, B4, B1
+3 | B4, B1, B2
+4 | B1, B2, B3
+5 | B2, B3, B4
+
+There are only **E** distinct write quorums in any ensemble. If **Q<sub>w</sub>** = **E**, then there is only one, as no striping occurs.
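+
+The striping rule can be expressed as a small sketch (a hypothetical helper, not part of the BookKeeper API), reproducing the table above:
+
+```java
+// Returns the write quorum (as indices into the ensemble list) for an entry,
+// using the rule: Qw consecutive bookies starting at index (entryId % E).
+static int[] writeQuorum(long entryId, int ensembleSize, int writeQuorumSize) {
+    int[] quorum = new int[writeQuorumSize];
+    int start = (int) (entryId % ensembleSize);
+    for (int i = 0; i < writeQuorumSize; i++) {
+        quorum[i] = (start + i) % ensembleSize;
+    }
+    return quorum;
+}
+
+// writeQuorum(0, 4, 3) -> [0, 1, 2]  (B1, B2, B3)
+// writeQuorum(2, 4, 3) -> [2, 3, 0]  (B3, B4, B1)
+```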
+
+### Ack quorums
+
+The ack quorum for an entry is any subset of the write quorum of size **Q<sub>a</sub>**. If **Q<sub>a</sub>** bookies acknowledge an entry, it means it has been fully replicated.
+
+### Guarantees
+
+The system can tolerate **Q<sub>a</sub>** – 1 failures without data loss.
+
+Bookkeeper guarantees that:
+
+1. All updates to a ledger will be read in the same order as they were written.
+2. All clients will read the same sequence of updates from the ledger.
+
+## Writing to ledgers
+
+A ledger has a single writer that assigns entry ids, so ensuring that entry ids are sequential is trivial. A bookie acknowledges a write once it has been persisted to disk and is therefore durable. Once **Q<sub>a</sub>** bookies from the write quorum acknowledge the write, the write is acknowledged to the client, but only if all entries with lower entry ids in the ledger have already been acknowledged to the client.
+
+The entry written contains the ledger id, the entry id, the last add confirmed and the payload. The last add confirmed is the last entry which had been acknowledged to the client when this entry was written. Sending this with the entry speeds up recovery of the ledger in the case that the writer crashes.
+
+Another client can also read entries in the ledger up to the last add confirmed, as we guarantee that all entries up to that point have been replicated on **Q<sub>a</sub>** nodes, and therefore all future readers will also be able to read them. However, to read like this, the ledger should be opened with a non-fencing open. Otherwise, it would kill the writer.
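+
+A sketch of such a tailing reader using the Java client (assuming the reader knows the `ledgerId`, digest type, and `password`, and holds a `BookKeeper` instance `bkc`):
+
+```java
+// Open without recovery so the current writer is not fenced.
+LedgerHandle reader = bkc.openLedgerNoRecovery(ledgerId,
+    BookKeeper.DigestType.MAC, password);
+
+// readLastConfirmed() asks the bookies for the highest LastAddConfirmed;
+// entries up to that id are guaranteed to be replicated on Qa bookies.
+long lac = reader.readLastConfirmed();
+if (lac >= 0) {
+    Enumeration<LedgerEntry> entries = reader.readEntries(0, lac);
+    while (entries.hasMoreElements()) {
+        LedgerEntry entry = entries.nextElement();
+        // process entry.getEntry() ...
+    }
+}
+```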
+
+If a node fails to acknowledge a write, the writer will create a new ensemble by replacing the failed node in the current ensemble. It creates a new fragment with this ensemble, starting from the first message that has not been acknowledged to the client. Creating the new fragment involves making a CAS write to the metadata. If the CAS write fails, someone else has modified something in the ledger metadata. This concurrent modification could have been caused by recovery or {% pop rerepli [...]
+
+### Closing a ledger as a writer
+
+Closing a ledger is straightforward for a writer. The writer makes a CAS write to the metadata, changing the state to `CLOSED` and setting the last entry of the ledger to the last entry which we have acknowledged to the client.
+
+If the CAS write fails, it means someone else has modified the metadata. We reread the metadata, and retry closing as long as the state of the ledger is still `OPEN`. If the state is `IN_RECOVERY` we send an error to the client. If the state is `CLOSED` and the last entry is the same as the last entry we have acknowledged to the client, we complete the close operation successfully. If the last entry is different from what we have acknowledged to the client, we send an error to the client.
+
+### Closing a ledger as a reader
+
+A reader can also force a ledger to close. Forcing the ledger to close will prevent any writer from adding new entries to the ledger. This is called {% pop fencing %}. This can occur when a writer has crashed or become unavailable, and a new writer wants to take over writing to the log. The new writer must ensure that it has seen all updates from the previous writer, and prevent the previous writer from making any new updates before making any updates of its own.
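+
+With the Java client, fencing and recovery are triggered by a regular (recovery) open; a minimal sketch, assuming the same `bkc`, `ledgerId`, and `password` as above:
+
+```java
+// openLedger() fences the ledger and recovers it before returning,
+// so no previous writer can add further entries.
+LedgerHandle recovered = bkc.openLedger(ledgerId,
+    BookKeeper.DigestType.MAC, password);
+long lastEntry = recovered.getLastAddConfirmed();
+```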
+
+To recover a ledger, we first update the state in the metadata to IN_RECOVERY. We then send a fence message to all the bookies in the last fragment of the ledger. When a bookie receives a fence message for a ledger, the fenced state of the ledger is persisted to disk. Once we receive a response from at least (**Q<sub>w</sub>** - **Q<sub>a</sub>**)+1 bookies from each write quorum in the ensemble, the ledger is fenced.
+
+By ensuring we have received a response from at least (**Q<sub>w</sub>** - **Q<sub>a</sub>**) + 1 bookies in each write quorum, we ensure that, if the old writer is alive and tries to add a new entry, there will be no write quorum in which **Q<sub>a</sub>** bookies will accept the write. If the old writer tries to update the ensemble, it will fail on the CAS metadata write, and will then see that the ledger is in `IN_RECOVERY` state and that it therefore shouldn't try to write to it.
+
+The old writer will be able to write entries to individual bookies (we can't guarantee that the fence message reaches all bookies), but as it will not be able to reach the ack quorum, it will not be able to send a success response to its client. The client will get a `LedgerFenced` error instead.
+
+It is important to note that when you get a ledger fenced message for an entry, it doesn’t mean that the entry has not been written. It means that the entry may or may not have been written, and this can only be determined after the ledger is recovered. In effect, LedgerFenced should be treated like a timeout.
+
+Once the ledger is fenced, recovery can begin. Recovery means finding the last entry of the ledger and closing the ledger. To find the last entry of the ledger, the client asks all bookies for the highest last add confirmed value they have seen. It waits until it has received a response from at least (**Q<sub>w</sub>** - **Q<sub>a</sub>**) + 1 bookies from each write quorum, and takes the highest response as the entry id to start reading forward from. It then starts reading forward in the led [...]
+
+## Ledgers to logs
+
+In BookKeeper, {% pop ledgers %} can be used to build a replicated log for your system. All guarantees provided by BookKeeper are at the ledger level. Guarantees on the whole log can be built using the ledger guarantees and any consistent datastore with a compare-and-swap (CAS) primitive. BookKeeper uses ZooKeeper as the datastore but others could theoretically be used.
+
+A log in BookKeeper is built from some number of ledgers, with a fixed order. A ledger represents a single segment of the log. A ledger could be the whole period that one node was the leader, or there could be multiple ledgers for a single period of leadership. However, there can only ever be one leader that adds entries to a single ledger. Ledgers cannot be reopened for writing once they have been closed/recovered.
+
+> BookKeeper does *not* provide leader election. You must use a system like ZooKeeper for this.
+
+In many cases, leader election is really leader suggestion. Multiple nodes could think that they are leader at any one time. It is the job of the log to guarantee that only one can write changes to the system.
+
+### Opening a log
+
+Once a node thinks it is leader for a particular log, it must take the following steps:
+
+1. Read the list of ledgers for the log
+1. {% pop Fence %} the last two ledgers in the list. Two ledgers are fenced because the writer may be writing to the second-to-last ledger while adding the last ledger to the list.
+1. Create a new ledger
+1. Add the new ledger to the ledger list
+1. Write the new ledger list back to the datastore using a CAS operation
+
+The fencing in step 2 and the CAS operation in step 5 prevent two nodes from thinking that they have leadership at any one time.
+
+The CAS operation will fail if the list of ledgers has changed between reading it and writing back the new list. When the CAS operation fails, the leader must start at step 1 again. Even better, it should check with the system providing leader election that it is in fact still the leader. The protocol will work correctly without this step, though it will make very little progress if two nodes think they are leader and are duelling for the log.
+
+The node must not serve any writes until step 5 completes successfully.
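+
+A minimal sketch of step 5 against raw ZooKeeper (the znode path, serialization, and `appendLedgerId` helper are hypothetical; `zk` is assumed to be a connected `ZooKeeper` handle):
+
+```java
+// The version read in step 1 acts as the compare in compare-and-swap:
+// setData() fails with BadVersionException if the list changed underneath us.
+Stat stat = new Stat();
+byte[] oldList = zk.getData("/mylog/ledgers", false, stat);
+byte[] newList = appendLedgerId(oldList, newLedgerId); // hypothetical helper
+try {
+    zk.setData("/mylog/ledgers", newList, stat.getVersion());
+} catch (KeeperException.BadVersionException e) {
+    // Someone else changed the ledger list; go back to step 1.
+}
+```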
+
+### Rolling ledgers
+
+The leader may wish to close the current ledger and open a new one every so often. Ledgers can only be deleted as a whole. If you don't roll the log, you won't be able to clean up old entries in the log without a leader change. By closing the current ledger and adding a new one, the leader allows the log to be truncated whenever that data is no longer needed. The steps for rolling the log are similar to those for creating a new ledger.
+
+1. Create a new ledger
+1. Add the new ledger to the ledger list
+1. Write the new ledger list to the datastore using CAS
+1. Close the previous ledger
+
+By deferring the closing of the previous ledger until step 4, we can continue writing to the log while we perform metadata update operations to add the new ledger. This is safe as long as you fence the last 2 ledgers when acquiring leadership.
+
diff --git a/site/docs/4.5.0/example.md b/site/docs/4.5.0/example.md
new file mode 100644
index 0000000..7dbc697
--- /dev/null
+++ b/site/docs/4.5.0/example.md
@@ -0,0 +1,6 @@
+---
+title: Example doc
+subtitle: Just for experimentation purposes.
+---
+
+{% pop ledger %}
diff --git a/site/docs/4.5.0/getting-started/concepts.md b/site/docs/4.5.0/getting-started/concepts.md
new file mode 100644
index 0000000..7a3c928
--- /dev/null
+++ b/site/docs/4.5.0/getting-started/concepts.md
@@ -0,0 +1,202 @@
+---
+title: BookKeeper concepts and architecture
+subtitle: The core components and how they work
+prev: ../run-locally
+---
+
+BookKeeper is a service that provides persistent storage of streams of log [entries](#entries)---aka *records*---in sequences called [ledgers](#ledgers). BookKeeper replicates stored entries across multiple servers.
+
+## Basic terms
+
+In BookKeeper:
+
+* each unit of a log is an [*entry*](#entries) (aka record)
+* streams of log entries are called [*ledgers*](#ledgers)
+* individual servers storing ledgers of entries are called [*bookies*](#bookies)
+
+BookKeeper is designed to be reliable and resilient to a wide variety of failures. Bookies can crash, corrupt data, or discard data, but as long as there are enough bookies behaving correctly in the ensemble the service as a whole will behave correctly.
+
+## Entries
+
+> **Entries** contain the actual data written to ledgers, along with some important metadata.
+
+BookKeeper entries are sequences of bytes that are written to [ledgers](#ledgers). Each entry has the following fields:
+
+Field | Java type | Description
+:-----|:----------|:-----------
+Ledger number | `long` | The ID of the ledger to which the entry has been written
+Entry number | `long` | The unique ID of the entry
+Last confirmed (LC) | `long` | The ID of the last recorded entry
+Data | `byte[]` | The entry's data (written by the client application)
+Authentication code | `byte[]` | The message auth code, which includes *all* other fields in the entry
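+
+With the Java client, the ledger number, entry number, and data are exposed on each `LedgerEntry` returned by a read; a sketch, assuming `entries` is an `Enumeration<LedgerEntry>` returned by `readEntries`:
+
+```java
+while (entries.hasMoreElements()) {
+    LedgerEntry e = entries.nextElement();
+    // Print the ledger number, entry number, and data length of each entry
+    System.out.printf("ledger=%d entry=%d bytes=%d%n",
+        e.getLedgerId(), e.getEntryId(), e.getEntry().length);
+}
+```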
+
+## Ledgers
+
+> **Ledgers** are the basic unit of storage in BookKeeper.
+
+Ledgers are sequences of entries, while each entry is a sequence of bytes. Entries are written to a ledger:
+
+* sequentially, and
+* at most once.
+
+This means that ledgers have *append-only* semantics. Entries cannot be modified once they've been written to a ledger. Determining the proper write order is the responsibility of [client applications](#clients).
+
+## Clients and APIs
+
+> BookKeeper clients have two main roles: they create and delete ledgers, and they read entries from and write entries to ledgers.
+> 
+> BookKeeper provides both a lower-level and a higher-level API for ledger interaction.
+
+There are currently two APIs that can be used for interacting with BookKeeper:
+
+* The [ledger API](../../api/ledger-api) is a lower-level API that enables you to interact with {% pop ledgers %} directly.
+* The [DistributedLog API](../../api/distributedlog-api) is a higher-level API that enables you to use BookKeeper without directly interacting with ledgers.
+
+In general, you should choose the API based on how much granular control you need over ledger semantics. The two APIs can also both be used within a single application.
+
+## Bookies
+
+> **Bookies** are individual BookKeeper servers that handle ledgers (more specifically, fragments of ledgers). Bookies function as part of an ensemble.
+
+A bookie is an individual BookKeeper storage server. Individual bookies store fragments of ledgers, not entire ledgers (for the sake of performance). For any given ledger **L**, an *ensemble* is the group of bookies storing the entries in **L**.
+
+Whenever entries are written to a ledger, those entries are {% pop striped %} across the ensemble (written to a sub-group of bookies rather than to all bookies).
+
+### Motivation
+
+> BookKeeper was initially inspired by the NameNode server in HDFS but its uses now extend far beyond this.
+
+The initial motivation for BookKeeper comes from the [Hadoop](http://hadoop.apache.org/) ecosystem. In the [Hadoop Distributed File System](https://wiki.apache.org/hadoop/HDFS) (HDFS), a special node called the [NameNode](https://wiki.apache.org/hadoop/NameNode) logs all operations in a reliable fashion, which ensures that recovery is possible in case of crashes.
+
+The NameNode, however, served only as initial inspiration for BookKeeper. The applications for BookKeeper extend far beyond this and include essentially any application that requires an append-based storage system. BookKeeper provides a number of advantages for such applications:
+
+* Highly efficient writes
+* High fault tolerance via replication of messages within ensembles of bookies
+* High throughput for write operations via {% pop striping %} (across as many bookies as you wish)
+
+## Metadata storage
+
+BookKeeper requires a metadata storage service to store information related to [ledgers](#ledgers) and available bookies. BookKeeper currently uses [ZooKeeper](https://zookeeper.apache.org) for this and other tasks.
+
+## Data management in bookies
+
+Bookies manage data in a [log-structured](https://en.wikipedia.org/wiki/Log-structured_file_system) way, which is implemented using three types of files:
+
+* [journals](#journals)
+* [entry logs](#entry-logs)
+* [index files](#index-files)
+
+### Journals
+
+A journal file contains BookKeeper transaction logs. Before any update to a ledger takes place, the bookie ensures that a transaction describing the update is written to non-volatile storage. A new journal file is created once the bookie starts or the older journal file reaches the journal file size threshold.
+
+### Entry logs
+
+An entry log file manages the written entries received from BookKeeper clients. Entries from different ledgers are aggregated and written sequentially, while their offsets are kept as pointers in a [ledger cache](#ledger-cache) for fast lookup.
+
+A new entry log file is created once the bookie starts or the older entry log file reaches the entry log size threshold. Old entry log files are removed by the Garbage Collector Thread once they are not associated with any active ledger.
+
+### Index files
+
+An index file is created for each ledger, which comprises a header and several fixed-length index pages that record the offsets of data stored in entry log files.
+
+Since updating index files would introduce random disk I/O, index files are updated lazily by a sync thread running in the background. This keeps updates fast. Before index pages are persisted to disk, they are gathered in a ledger cache for lookup.
+
+### Ledger cache
+
+Ledger index pages are cached in a memory pool, which allows for more efficient management of disk head scheduling.
+
+### Adding entries
+
+When a client instructs a {% pop bookie %} to write an entry to a ledger, the entry will go through the following steps to be persisted on disk:
+
+1. The entry is appended to an [entry log](#entry-logs)
+1. The index of the entry is updated in the [ledger cache](#ledger-cache)
+1. A transaction corresponding to this entry update is appended to the [journal](#journals)
+1. A response is sent to the BookKeeper client
+
+> For performance reasons, the entry log buffers entries in memory and commits them in batches, while the ledger cache holds index pages in memory and flushes them lazily. This process is described in more detail in the [Data flush](#data-flush) section below.
+
+### Data flush
+
+Ledger index pages are flushed to index files in the following two cases:
+
+* The ledger cache memory limit is reached. There is no more space available to hold newer index pages. Dirty index pages will be evicted from the ledger cache and persisted to index files.
+* A background sync thread is responsible for periodically flushing index pages from the ledger cache to index files.
+
+Besides flushing index pages, the sync thread is responsible for rolling journal files if journal files use too much disk space. The data flush flow in the sync thread is as follows:
+
+* A `LastLogMark` is recorded in memory. The `LastLogMark` indicates that those entries before it have been persisted (to both index and entry log files) and contains two parts:
+  1. A `txnLogId` (the file ID of a journal)
+  1. A `txnLogPos` (offset in a journal)
+* Dirty index pages are flushed from the ledger cache to the index file, and entry log files are flushed to ensure that all buffered entries in entry log files are persisted to disk.
+
+    Ideally, a bookie only needs to flush index pages and entry log files that contain entries before `LastLogMark`. There is, however, no such information in the ledger and entry log mapping to journal files. Consequently, the thread flushes the ledger cache and entry log entirely here, and may flush entries after the `LastLogMark`. Flushing more is not a problem, though, just redundant.
+* The `LastLogMark` is persisted to disk, which means that, for entries added before `LastLogMark`, both the entry data and the index pages have been persisted to disk as well. It is now safe to remove journal files created earlier than `txnLogId`.
+
+If the bookie has crashed before persisting `LastLogMark` to disk, it still has journal files containing entries for which index pages may not have been persisted. Consequently, when this bookie restarts, it inspects journal files to restore those entries and data isn't lost.
+
+Using the above data flush mechanism, it is safe for the sync thread to skip data flushing when the bookie shuts down. However, the entry logger uses a buffered channel to write entries in batches, and there might be data buffered in that channel at shutdown. The bookie needs to ensure that the entry log flushes its buffered data during shutdown. Otherwise, entry log files become corrupted with partial entries.
+
+### Data compaction
+
+On bookies, entries of different ledgers are interleaved in entry log files. A bookie runs a garbage collector thread to delete un-associated entry log files to reclaim disk space. If a given entry log file contains entries from a ledger that has not been deleted, then the entry log file would never be removed and the occupied disk space never reclaimed. In order to avoid such a case, a bookie server compacts entry log files in a garbage collector thread to reclaim disk space.
+
+There are two kinds of compaction running with different frequency: minor compaction and major compaction. The differences between minor compaction and major compaction lies in their threshold value and compaction interval.
+
+* The garbage collection threshold is the size percentage of an entry log file occupied by those undeleted ledgers. The default minor compaction threshold is 0.2, while the major compaction threshold is 0.8.
+* The garbage collection interval is how frequently to run the compaction. The default minor compaction interval is 1 hour, while the major compaction interval is 1 day.
+
+> If either the threshold or interval is set to less than or equal to zero, compaction is disabled.
+
+The data compaction flow in the garbage collector thread is as follows:
+
+* The thread scans entry log files to get their entry log metadata, which records a list of ledgers comprising an entry log and their corresponding percentages.
+* With the normal garbage collection flow, once the bookie determines that a ledger has been deleted, the ledger will be removed from the entry log metadata and the size of the entry log reduced.
+* If the remaining size of an entry log file reaches a specified threshold, the entries of active ledgers in the entry log will be copied to a new entry log file.
+* Once all valid entries have been copied, the old entry log file is deleted.
+
+## ZooKeeper metadata
+
+BookKeeper requires a ZooKeeper installation for storing [ledger](#ledger) metadata. Whenever you construct a [`BookKeeper`](../../api/javadoc/org/apache/bookkeeper/client/BookKeeper) client object, you need to pass a list of ZooKeeper servers as a parameter to the constructor, like this:
+
+```java
+String zkConnectionString = "127.0.0.1:2181";
+BookKeeper bkClient = new BookKeeper(zkConnectionString);
+```
+
+> For more info on using the BookKeeper Java client, see [this guide](../../api/ledger-api#the-java-ledger-api-client).
+
+## Ledger manager
+
+A *ledger manager* handles ledgers' metadata (which is stored in ZooKeeper). BookKeeper offers two types of ledger managers: the [flat ledger manager](#flat-ledger-manager) and the [hierarchical ledger manager](#hierarchical-ledger-manager). Both ledger managers extend the [`AbstractZkLedgerManager`](../../api/javadoc/org/apache/bookkeeper/meta/AbstractZkLedgerManager) abstract class.
+
+> #### Use the flat ledger manager in most cases
+> The flat ledger manager is the default and is recommended for nearly all use cases. The hierarchical ledger manager is better suited only for managing very large numbers of BookKeeper ledgers (> 50,000).
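+
+Which ledger manager a bookie uses is selected via the `ledgerManagerFactoryClass` setting in `bk_server.conf`; for example, to switch to the hierarchical ledger manager (shown as an illustration; see the [configuration reference](../../reference/config) for details):
+
+```shell
+# Use the hierarchical ledger manager instead of the default flat ledger manager
+ledgerManagerFactoryClass=org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory
+```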
+
+### Flat ledger manager
+
+The *flat ledger manager*, implemented in the [`FlatLedgerManager`](../../api/javadoc/org/apache/bookkeeper/meta/FlatLedgerManager.html) class, stores all ledgers' metadata in child nodes of a single ZooKeeper path. The flat ledger manager creates [sequential nodes](https://zookeeper.apache.org/doc/trunk/zookeeperProgrammers.html#Sequence+Nodes+--+Unique+Naming) to ensure the uniqueness of the ledger ID and prefixes all nodes with `L`. Bookie servers manage their own active ledgers in a  [...]
+
+The flat ledger manager's garbage collection flow proceeds as follows:
+
+* All existing ledgers are fetched from ZooKeeper (`zkActiveLedgers`)
+* All ledgers currently active within the bookie are fetched (`bkActiveLedgers`)
+* The currently active ledgers are looped through to determine which ledgers don't currently exist in ZooKeeper. Those are then garbage collected.
+
+The *hierarchical ledger manager*, by contrast, stores ledgers' metadata in two-level [znodes](https://zookeeper.apache.org/doc/current/zookeeperOver.html#Nodes+and+ephemeral+nodes).
+
+### Hierarchical ledger manager
+
+The *hierarchical ledger manager*, implemented in the [`HierarchicalLedgerManager`](../../api/javadoc/org/apache/bookkeeper/meta/HierarchicalLedgerManager) class, first obtains a global unique ID from ZooKeeper using an [`EPHEMERAL_SEQUENTIAL`](https://zookeeper.apache.org/doc/current/api/org/apache/zookeeper/CreateMode.html#EPHEMERAL_SEQUENTIAL) znode. Since ZooKeeper's sequence counter has a format of `%10d` (10 digits with 0 padding, for example `<path>0000000001`), the hierarchical l [...]
+
+```shell
+{level1 (2 digits)}{level2 (4 digits)}{level3 (4 digits)}
+```
+
+These three parts are used to form the actual ledger node path to store ledger metadata:
+
+```shell
+{ledgers_root_path}/{level1}/{level2}/L{level3}
+```
+
+For example, ledger 0000000001 is split into three parts, 00, 0000, and 0001, and stored in znode `/{ledgers_root_path}/00/0000/L0001`. Each znode can hold as many as 10,000 ledgers, which avoids the problem of the child list being larger than the maximum ZooKeeper packet size (which is the [limitation](https://issues.apache.org/jira/browse/BOOKKEEPER-39) that initially prompted the creation of the hierarchical ledger manager).
diff --git a/site/docs/4.5.0/getting-started/installation.md b/site/docs/4.5.0/getting-started/installation.md
new file mode 100644
index 0000000..ed77bce
--- /dev/null
+++ b/site/docs/4.5.0/getting-started/installation.md
@@ -0,0 +1,74 @@
+---
+title: BookKeeper installation
+subtitle: Download or clone BookKeeper and build it locally
+next: ../run-locally
+---
+
+{% capture download_url %}http://apache.claz.org/bookkeeper/bookkeeper-{{ site.stable_release }}/bookkeeper-{{ site.stable_release }}-src.tar.gz{% endcapture %}
+
+You can install BookKeeper either by [downloading](#download) a [GZipped](http://www.gzip.org/) tarball package or [cloning](#clone) the BookKeeper repository.
+
+## Requirements
+
+* [Unix environment](http://www.opengroup.org/unix)
+* [Java Development Kit 1.6](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or later
+* [Maven 3.0](https://maven.apache.org/install.html) or later
+
+## Download
+
+You can download Apache BookKeeper releases from one of many [Apache mirrors](http://www.apache.org/dyn/closer.cgi/bookkeeper). Here's an example for the [apache.claz.org](http://apache.claz.org/bookkeeper) mirror:
+
+```shell
+$ curl -O {{ download_url }}
+$ tar xvf bookkeeper-{{ site.stable_release }}-src.tar.gz
+$ cd bookkeeper-{{ site.stable_release }}
+```
+
+## Clone
+
+To build BookKeeper from source, clone the repository, either from the [GitHub mirror]({{ site.github_repo }}) or from the [Apache repository](http://git.apache.org/bookkeeper.git/):
+
+```shell
+# From the GitHub mirror
+$ git clone {{ site.github_repo}}
+
+# From Apache directly
+$ git clone git://git.apache.org/bookkeeper.git/
+```
+
+## Build using Maven
+
+Once you have BookKeeper on your local machine, either by [downloading](#download) or [cloning](#clone) it, you can then build BookKeeper from source using Maven:
+
+```shell
+$ mvn package
+```
+
+> You can skip tests by adding the `-DskipTests` flag when running `mvn package`.
+
+### Useful Maven commands
+
+Some other useful Maven commands beyond `mvn package`:
+
+Command | Action
+:-------|:------
+`mvn clean` | Removes build artifacts
+`mvn compile` | Compiles JAR files from Java sources
+`mvn compile findbugs:findbugs` | Compile using the Maven [FindBugs](http://gleclaire.github.io/findbugs-maven-plugin) plugin
+`mvn install` | Install the BookKeeper JAR locally in your local Maven cache (usually in the `~/.m2` directory)
+`mvn deploy` | Deploy the BookKeeper JAR to the Maven repo (if you have the proper credentials)
+`mvn verify` | Performs a wide variety of verification and validation tasks
+`mvn apache-rat:check` | Run Maven using the [Apache Rat](http://creadur.apache.org/rat/apache-rat-plugin/) plugin
+`mvn compile javadoc:aggregate` | Build Javadocs locally
+`mvn package assembly:single` | Build a complete distribution using the Maven [Assembly](http://maven.apache.org/plugins/maven-assembly-plugin/) plugin
+
+## Package directory
+
+The BookKeeper project contains several subfolders that you should be aware of:
+
+Subfolder | Contains
+:---------|:--------
+[`bookkeeper-server`]({{ site.github_repo }}/tree/master/bookkeeper-server) | The BookKeeper server and client
+[`bookkeeper-benchmark`]({{ site.github_repo }}/tree/master/bookkeeper-benchmark) | A benchmarking suite for measuring BookKeeper performance
+[`bookkeeper-stats`]({{ site.github_repo }}/tree/master/bookkeeper-stats) | A BookKeeper stats library
+[`bookkeeper-stats-providers`]({{ site.github_repo }}/tree/master/bookkeeper-stats-providers) | BookKeeper stats providers
diff --git a/site/docs/4.5.0/getting-started/run-locally.md b/site/docs/4.5.0/getting-started/run-locally.md
new file mode 100644
index 0000000..ab33642
--- /dev/null
+++ b/site/docs/4.5.0/getting-started/run-locally.md
@@ -0,0 +1,16 @@
+---
+title: Run bookies locally
+prev: ../installation
+next: ../concepts
+toc_disable: true
+---
+
+{% pop Bookies %} are individual BookKeeper servers. You can run an ensemble of bookies locally on a single machine using the [`localbookie`](../../reference/cli#bookkeeper-localbookie) command of the `bookkeeper` CLI tool and specifying the number of bookies you'd like to include in the ensemble.
+
+This would start up an ensemble with 10 bookies:
+
+```shell
+$ bookkeeper-server/bin/bookkeeper localbookie 10
+```
+
+> When you start up an ensemble using `localbookie`, all bookies run in a single JVM process.
diff --git a/site/docs/latest/index.md b/site/docs/4.5.0/overview/overview.md
similarity index 66%
copy from site/docs/latest/index.md
copy to site/docs/4.5.0/overview/overview.md
index ad8d47a..f2ff5b8 100644
--- a/site/docs/latest/index.md
+++ b/site/docs/4.5.0/overview/overview.md
@@ -1,5 +1,5 @@
 ---
-title: Apache BookKeeper 4.5.0-SNAPSHOT Documentation 
+title: Apache BookKeeper 4.5.0 Documentation 
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -20,7 +20,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This documentation is for Apache BookKeeper version `{{ site.latest_version }}`.
+This documentation is for Apache BookKeeper version `4.5.0`.
 
 Apache BookKeeper is a scalable, fault tolerant and low latency storage service optimized for realtime workloads.
 It offers `durability`, `replication` and `strong consistency` as essentials for building reliable real-time applications.
@@ -34,23 +34,23 @@ It is suitable for being used in following scenerios:
 
 Learn more about Apache BookKeeper and what it can do for your organization:
 
-- [Apache BookKeeper {{ site.latest_version }} Release Notes](./releaseNotes)
+- [Apache BookKeeper 4.5.0 Release Notes](../releaseNotes)
 
 Or start using Apache BookKeeper today.
 
 ### Users 
 
-- **Concepts**: Start with [concepts](./getting-started/concepts). This will help you to fully understand
+- **Concepts**: Start with [concepts](../../getting-started/concepts). This will help you to fully understand
     the other parts of the documentation, including the setup, integration and operation guides.
-- **Getting Started**: Install [Apache BookKeeper](./getting-started/installation) and run bookies [locally](./getting-started/run-locally)
-- **API**: Read the [API](./api/overview) documentation to learn how to use Apache BookKeeper to build your applications.
-- **Deployment**: The [Deployment Guide](./deployment/manual) shows how to deploy Apache BookKeeper to production clusters.
+- **Getting Started**: Install [Apache BookKeeper](../../getting-started/installation) and run bookies [locally](../../getting-started/run-locally)
+- **API**: Read the [API](../../api/overview) documentation to learn how to use Apache BookKeeper to build your applications.
+- **Deployment**: The [Deployment Guide](../../deployment/manual) shows how to deploy Apache BookKeeper to production clusters.
 
 ### Administrators
 
-- **Operations**: The [Admin Guide](./admin) shows how to run Apache BookKeeper on production, what are the production
+- **Operations**: The [Admin Guide](../../admin/bookies) shows how to run Apache BookKeeper on production, what are the production
     considerations and best practices.
 
 ### Contributors
 
-- **Details**: Learn [design details](./development/protocol) to know more internals.
+- **Details**: Learn [design details](../../development/protocol) to know more internals.
diff --git a/site/docs/4.5.0/overview/releaseNotes.md b/site/docs/4.5.0/overview/releaseNotes.md
new file mode 100644
index 0000000..c7845ae
--- /dev/null
+++ b/site/docs/4.5.0/overview/releaseNotes.md
@@ -0,0 +1,17 @@
+---
+title: Apache BookKeeper 4.5.0 Release Notes
+---
+
+[provide a summary of this release]
+
+Apache BookKeeper users are encouraged to upgrade to 4.5.0. The technical details of this release are summarized
+below.
+
+## Highlights
+
+[List the highlights]
+
+## Details
+
+[list to issues list]
+
diff --git a/site/docs/4.5.0/overview/releaseNotesTemplate.md b/site/docs/4.5.0/overview/releaseNotesTemplate.md
new file mode 100644
index 0000000..c7845ae
--- /dev/null
+++ b/site/docs/4.5.0/overview/releaseNotesTemplate.md
@@ -0,0 +1,17 @@
+---
+title: Apache BookKeeper 4.5.0 Release Notes
+---
+
+[provide a summary of this release]
+
+Apache BookKeeper users are encouraged to upgrade to 4.5.0. The technical details of this release are summarized
+below.
+
+## Highlights
+
+[List the highlights]
+
+## Details
+
+[list to issues list]
+
diff --git a/site/docs/4.5.0/reference/cli.md b/site/docs/4.5.0/reference/cli.md
new file mode 100644
index 0000000..8beb36f
--- /dev/null
+++ b/site/docs/4.5.0/reference/cli.md
@@ -0,0 +1,10 @@
+---
+title: BookKeeper CLI tool reference
+subtitle: A reference guide to the command-line tools that you can use to administer BookKeeper
+---
+
+{% include cli.html id="bookkeeper" %}
+
+## The BookKeeper shell
+
+{% include shell.html %}
diff --git a/site/docs/latest/reference/config.md b/site/docs/4.5.0/reference/config.md
similarity index 90%
copy from site/docs/latest/reference/config.md
copy to site/docs/4.5.0/reference/config.md
index 6a420d0..8997b6b 100644
--- a/site/docs/latest/reference/config.md
+++ b/site/docs/4.5.0/reference/config.md
@@ -6,4 +6,4 @@ subtitle: A reference guide to all of BookKeeper's configurable parameters
 
 The table below lists parameters that you can set to configure {% pop bookies %}. All configuration takes place in the `bk_server.conf` file in the `bookkeeper-server/conf` directory of your [BookKeeper installation](../../getting-started/installing).
 
-{% include config.html id="bk_server" %}
\ No newline at end of file
+{% include config.html id="bk_server" %}
diff --git a/site/docs/4.5.0/reference/metrics.md b/site/docs/4.5.0/reference/metrics.md
new file mode 100644
index 0000000..8bd6fe0
--- /dev/null
+++ b/site/docs/4.5.0/reference/metrics.md
@@ -0,0 +1,3 @@
+---
+title: BookKeeper metrics reference
+---
diff --git a/site/docs/4.5.0/security/overview.md b/site/docs/4.5.0/security/overview.md
new file mode 100644
index 0000000..62da8ed
--- /dev/null
+++ b/site/docs/4.5.0/security/overview.md
@@ -0,0 +1,21 @@
+---
+title: BookKeeper Security
+next: ../tls
+---
+
+In the 4.5.0 release, the BookKeeper community added a number of features that can be used, together or separately, to secure a BookKeeper cluster.
+The following security measures are currently supported:
+
+1. Authentication of connections to bookies from clients, using either [TLS](../tls) or [SASL (Kerberos)](../sasl).
+2. Authentication of connections from clients, bookies, and autorecovery daemons to [ZooKeeper](../zookeeper), when using zookeeper-based ledger managers.
+3. Encryption of data transferred between bookies and clients, and between bookies and autorecovery daemons, using [TLS](../tls).
+
+It’s worth noting that security is optional: non-secured clusters are supported, as well as a mix of authenticated, unauthenticated, encrypted and non-encrypted clients.
+
+NOTE: currently `authorization` is not yet available in `4.5.0`. The Apache BookKeeper community is looking to add this feature in subsequent releases.
+
+## Next Steps
+
+- [Encryption and Authentication using TLS](../tls)
+- [Authentication using SASL](../sasl)
+- [ZooKeeper Authentication](../zookeeper)
diff --git a/site/docs/4.5.0/security/sasl.md b/site/docs/4.5.0/security/sasl.md
new file mode 100644
index 0000000..ffb972a
--- /dev/null
+++ b/site/docs/4.5.0/security/sasl.md
@@ -0,0 +1,202 @@
+---
+title: Authentication using SASL
+prev: ../tls
+next: ../zookeeper
+---
+
+Bookies support client authentication via SASL. Currently we only support GSSAPI (Kerberos). We will start
+with a general description of how to configure `SASL` for bookies, clients and autorecovery daemons, followed
+by mechanism-specific details and wrap up with some operational details.
+
+## SASL configuration for Bookies
+
+1. Select the mechanisms to enable in the bookies. `GSSAPI` is the only mechanism currently supported by BookKeeper.
+2. Add a `JAAS` config file for the selected mechanisms as described in the examples for setting up [GSSAPI (Kerberos)](#kerberos).
+3. Pass the `JAAS` config file location as JVM parameter to each Bookie. For example:
+
+    ```shell
+    -Djava.security.auth.login.config=/etc/bookkeeper/bookie_jaas.conf 
+    ```
+
+4. Enable the SASL auth plugin in bookies by setting `bookieAuthProviderFactoryClass` to `org.apache.bookkeeper.sasl.SASLBookieAuthProviderFactory`.
+
+    ```shell
+    bookieAuthProviderFactoryClass=org.apache.bookkeeper.sasl.SASLBookieAuthProviderFactory
+    ```
+
+5. If you are running `autorecovery` along with bookies, then you should also enable the SASL auth plugin for `autorecovery` by setting
+    `clientAuthProviderFactoryClass` to `org.apache.bookkeeper.sasl.SASLClientProviderFactory`.
+
+    ```shell
+    clientAuthProviderFactoryClass=org.apache.bookkeeper.sasl.SASLClientProviderFactory
+    ```
+
+6. Follow the steps in [GSSAPI (Kerberos)](#kerberos) to configure SASL.
+
+#### <a name="notes"></a> Important Notes
+
+1. `Bookie` is a section name in the JAAS file used by each bookie. This section tells the bookie which principal to use
+    and the location of the keytab where the principal is stored. It allows the bookie to log in using the keytab specified in this section.
+2. `Auditor` is a section name in the JAAS file used by the `autorecovery` daemon (it can be co-run with bookies). This section tells the
+    `autorecovery` daemon which principal to use and the location of the keytab where the principal is stored. It allows the `autorecovery` daemon to
+    log in using the keytab specified in this section.
+3. The `Client` section is used to authenticate a SASL connection with ZooKeeper. It also allows the bookies to set ACLs on ZooKeeper nodes,
+    which locks these nodes down so that only the bookies can modify them. It is necessary to have the same primary name across all bookies.
+    If you want to use a section name other than `Client`, set the system property `zookeeper.sasl.client` to the appropriate name
+    (e.g. `-Dzookeeper.sasl.client=ZKClient`).
+4. ZooKeeper uses `zookeeper` as the service name by default. If you want to change this, set the system property
+    `zookeeper.sasl.client.username` to the appropriate name (e.g. `-Dzookeeper.sasl.client.username=zk`).
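+
+For example, combining the two overrides above (the section name `ZKClient` and the service name `zk` are just the
+illustrative values used in these notes), the corresponding JVM flags would be:
+
+```shell
+-Dzookeeper.sasl.client=ZKClient
+-Dzookeeper.sasl.client.username=zk
+```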
+
+## SASL configuration for Clients
+
+To configure `SASL` authentication on the clients:
+
+1. Select a `SASL` mechanism for authentication and add a `JAAS` config file for the selected mechanism as described in the examples for
+    setting up [GSSAPI (Kerberos)](#kerberos).
+2. Pass the `JAAS` config file location as JVM parameter to each client JVM. For example:
+
+    ```shell
+    -Djava.security.auth.login.config=/etc/bookkeeper/bookkeeper_jaas.conf 
+    ```
+
+3. Configure the following properties in bookkeeper `ClientConfiguration`:
+
+    ```shell
+    clientAuthProviderFactoryClass=org.apache.bookkeeper.sasl.SASLClientProviderFactory
+    ```
+
+Follow the steps in [GSSAPI (Kerberos)](#kerberos) to configure SASL for the selected mechanism.
+
+## <a name="kerberos"></a> Authentication using SASL/Kerberos
+
+### Prerequisites
+
+#### Kerberos
+
+If your organization is already using a Kerberos server (for example, by using `Active Directory`), there is no need to
+install a new server just for BookKeeper. Otherwise you will need to install one; your Linux vendor likely has packages
+for `Kerberos` and a short guide on how to install and configure it ([Ubuntu](https://help.ubuntu.com/community/Kerberos),
+[Redhat](https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html)).
+Note that if you are using Oracle Java, you will need to download JCE policy files for your Java version and copy them to `$JAVA_HOME/jre/lib/security`.
+
+#### Kerberos Principals
+
+If you are using the organization’s Kerberos or Active Directory server, ask your Kerberos administrator for a principal
+for each Bookie in your cluster and for every operating system user that will access BookKeeper with Kerberos authentication
+(via clients and tools).
+
+If you have installed your own Kerberos, you will need to create these principals yourself using the following commands:
+
+```shell
+sudo /usr/sbin/kadmin.local -q 'addprinc -randkey bookkeeper/{hostname}@{REALM}'
+sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab bookkeeper/{hostname}@{REALM}"
+```
+
+##### All hosts must be reachable using hostnames
+
+It is a *Kerberos* requirement that all your hosts can be resolved with their FQDNs.
+
+### Configuring Bookies
+
+1. Add a suitably modified JAAS file similar to the one below to each Bookie’s config directory, let’s call it `bookie_jaas.conf`
+for this example (note that each bookie should have its own keytab):
+
+    ```
+    Bookie {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        storeKey=true
+        keyTab="/etc/security/keytabs/bookie.keytab"
+        principal="bookkeeper/bk1.hostname.com@EXAMPLE.COM";
+    };
+    // ZooKeeper client authentication
+    Client {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        storeKey=true
+        keyTab="/etc/security/keytabs/bookie.keytab"
+        principal="bookkeeper/bk1.hostname.com@EXAMPLE.COM";
+    };
+    // If you are running `autorecovery` along with bookies
+    Auditor {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        storeKey=true
+        keyTab="/etc/security/keytabs/bookie.keytab"
+        principal="bookkeeper/bk1.hostname.com@EXAMPLE.COM";
+    };
+    ```
+
+    The `Bookie` section in the JAAS file tells the bookie which principal to use and the location of the keytab where this principal is stored.
+    It allows the bookie to login using the keytab specified in this section. See [notes](#notes) for more details on Zookeeper’s SASL configuration.
+
+2. Pass the name of the JAAS file as a JVM parameter to each Bookie:
+
+    ```shell
+    -Djava.security.auth.login.config=/etc/bookkeeper/bookie_jaas.conf
+    ```
+
+    You may also wish to specify the path to the `krb5.conf` file
+    (see [JDK’s Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details):
+
+    ```shell
+    -Djava.security.krb5.conf=/etc/bookkeeper/krb5.conf
+    ```
+
+3. Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting the Bookies.
+
+4. Enable the SASL authentication plugin in the bookies by setting the following parameters.
+
+    ```shell
+    bookieAuthProviderFactoryClass=org.apache.bookkeeper.sasl.SASLBookieAuthProviderFactory
+    # if you run `autorecovery` along with bookies
+    clientAuthProviderFactoryClass=org.apache.bookkeeper.sasl.SASLClientProviderFactory
+    ```
+
+### Configuring Clients
+
+To configure SASL authentication on the clients:
+
+1. Clients will authenticate to the cluster with their own principal (usually with the same name as the user running the client),
+    so obtain or create these principals as needed. Then create a `JAAS` file for each principal. The `BookKeeper` section describes
+    how clients such as writers and readers can connect to the bookies. The following is an example configuration for a client using
+    a keytab (recommended for long-running processes):
+
+    ```
+    BookKeeper {
+        com.sun.security.auth.module.Krb5LoginModule required
+        useKeyTab=true
+        storeKey=true
+        keyTab="/etc/security/keytabs/bookkeeper.keytab"
+        principal="bookkeeper-client-1@EXAMPLE.COM";
+    };
+    ```
+
+
+2. Pass the name of the JAAS file as a JVM parameter to the client JVM:
+
+    ```shell
+    -Djava.security.auth.login.config=/etc/bookkeeper/bookkeeper_jaas.conf
+    ```
+
+    You may also wish to specify the path to the `krb5.conf` file (see
+    [JDK’s Kerberos Requirements](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html) for more details).
+
+    ```shell
+    -Djava.security.krb5.conf=/etc/bookkeeper/krb5.conf
+    ```
+
+
+3. Make sure the keytabs configured in the `bookkeeper_jaas.conf` are readable by the operating system user who is starting the BookKeeper client.
+
+4. Enable the SASL authentication plugin in the client by setting the following parameters.
+
+    ```shell
+    clientAuthProviderFactoryClass=org.apache.bookkeeper.sasl.SASLClientProviderFactory
+    ```
+
+## Enabling Logging for SASL
+
+To enable SASL debug output, you can set the `sun.security.krb5.debug` system property to `true`.
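+
+For example, you can pass it as a JVM flag when starting the bookie or client:
+
+```shell
+-Dsun.security.krb5.debug=true
+```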
+
diff --git a/site/docs/4.5.0/security/tls.md b/site/docs/4.5.0/security/tls.md
new file mode 100644
index 0000000..cd250ab
--- /dev/null
+++ b/site/docs/4.5.0/security/tls.md
@@ -0,0 +1,210 @@
+---
+title: Encryption and Authentication using TLS
+prev: ../overview
+next: ../sasl
+---
+
+Apache BookKeeper allows clients and autorecovery daemons to communicate over TLS, although this is not enabled by default.
+
+## Overview
+
+The bookies need their own key and certificate in order to use TLS. Clients can optionally provide a key and a certificate
+for mutual authentication.  Each bookie or client can also be configured with a truststore, which is used to
+determine which certificates (bookie or client identities) to trust (authenticate).
+
+The truststore can be configured in many ways. To understand the truststore, consider the following two examples:
+
+1. the truststore contains one or many certificates;
+2. it contains a certificate authority (CA).
+
+In (1), with a list of certificates, the bookie or client will trust any certificate listed in the truststore.
+In (2), with a CA, the bookie or client will trust any certificate that was signed by the CA in the truststore.
+
+(TBD: benefits)
+
+## <a name="bookie-keystore"></a> Generate TLS key and certificate
+
+The first step of deploying TLS is to generate the key and the certificate for each machine in the cluster.
+You can use Java’s `keytool` utility to accomplish this task. We will generate the key into a temporary keystore
+initially so that we can export and sign it later with the CA.
+
+```shell
+keytool -keystore bookie.keystore.jks -alias localhost -validity {validity} -genkey
+```
+
+You need to specify two parameters in the above command:
+
+1. `keystore`: the keystore file that stores the certificate. The *keystore* file contains the private key of
+    the certificate; hence, it needs to be kept safely.
+2. `validity`: the valid time of the certificate in days.
+
+<div class="alert alert-success">
+Ensure that the common name (CN) matches the fully qualified domain name (FQDN) of the server exactly.
+The client compares the CN with the DNS domain name to ensure that it is indeed connecting to the desired server, not a malicious one.
+</div>
+
+## Creating your own CA
+
+After the first step, each machine in the cluster has a public-private key pair, and a certificate to identify the machine.
+The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.
+
+Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster.
+A `certificate authority (CA)` is responsible for signing certificates. The CA works like a government that issues passports —
+the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps
+to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed
+certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have
+high assurance that they are connecting to the authentic machines.
+
+```shell
+openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
+```
+
+The generated CA is simply a *public-private* key pair and certificate, and it is intended to sign other certificates.
+
+The next step is to add the generated CA to the clients' truststore so that the clients can trust this CA:
+
+```shell
+keytool -keystore bookie.truststore.jks -alias CARoot -import -file ca-cert
+```
+
+NOTE: If you configure the bookies to require client authentication by setting `sslClientAuthentication` to `true` on the
+[bookie config](../../reference/config), then you must also provide a truststore for the bookies and it should have all the CA
+certificates that the client keys were signed by.
+
+```shell
+keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
+```
+
+In contrast to the keystore, which stores each machine’s own identity, the truststore of a client stores all the certificates
+that the client should trust. Importing a certificate into one’s truststore also means trusting all certificates that are signed
+by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that
+it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying TLS on a large BookKeeper cluster.
+You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA.
+That way all machines can authenticate all other machines.
+
+## Signing the certificate
+
+The next step is to sign all certificates in the keystore with the CA we generated. First, you need to export the certificate from the keystore:
+
+```shell
+keytool -keystore bookie.keystore.jks -alias localhost -certreq -file cert-file
+```
+
+Then sign it with the CA:
+
+```shell
+openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
+```
+
+Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:
+
+```shell
+keytool -keystore bookie.keystore.jks -alias CARoot -import -file ca-cert
+keytool -keystore bookie.keystore.jks -alias localhost -import -file cert-signed
+```
+
+The definitions of the parameters are the following:
+
+1. `keystore`: the location of the keystore
+2. `ca-cert`: the certificate of the CA
+3. `ca-key`: the private key of the CA
+4. `ca-password`: the passphrase of the CA
+5. `cert-file`: the exported, unsigned certificate of the bookie
+6. `cert-signed`: the signed certificate of the bookie
+
+(TBD: add a script to automatically generate truststores and keystores.)
+
+## Configuring Bookies
+
+Bookies support TLS for connections on the same service port. In order to enable TLS, you need to configure `tlsProvider` to be either
+`JDK` or `OpenSSL`. If `OpenSSL` is configured, it will use `netty-tcnative-boringssl-static`, which loads the corresponding native binding
+for the platform the bookies run on.
+
+> The current `OpenSSL` implementation doesn't depend on the OpenSSL library installed on the system. If you want to leverage the OpenSSL installed on
+the system, you can check [this example](http://netty.io/wiki/forked-tomcat-native.html) on how to replace the JARs on the classpath with
+netty bindings that use the installed OpenSSL.
+
+The following TLS configs are needed on the bookie side:
+
+```shell
+tlsProvider=OpenSSL
+# key store
+tlsKeyStoreType=JKS
+tlsKeyStore=/var/private/tls/bookie.keystore.jks
+tlsKeyStorePasswordPath=/var/private/tls/bookie.keystore.passwd
+# trust store
+tlsTrustStoreType=JKS
+tlsTrustStore=/var/private/tls/bookie.truststore.jks
+tlsTrustStorePasswordPath=/var/private/tls/bookie.truststore.passwd
+```
+
+NOTE: it is important to restrict access to the store files and corresponding password files via filesystem permissions.
+
+Optional settings that are worth considering:
+
+1. `tlsClientAuthentication=false`: Enable/Disable using TLS for authentication. When enabled, this config authenticates the other end
+    of the communication channel. It should be enabled on both bookies and clients for mutual TLS.
+2. `tlsEnabledCipherSuites`: A cipher suite is a named combination of authentication, encryption, MAC and key exchange
+    algorithms used to negotiate the security settings for a network connection using the TLS protocol. By default,
+    it is null. See [OpenSSL Ciphers](https://www.openssl.org/docs/man1.0.2/apps/ciphers.html) and
+    [JDK Ciphers](http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#ciphersuites).
+3. `tlsEnabledProtocols=TLSv1.2,TLSv1.1,TLSv1`: the list of TLS protocols that you are going to accept from clients.
+    By default, it is not set.
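+
+As a minimal sketch, assuming you want mutual TLS and are happy with the default cipher suites, these optional settings
+could be combined in the bookie configuration as follows (the protocol list simply repeats the example above):
+
+```shell
+# require clients to present a certificate (mutual TLS)
+tlsClientAuthentication=true
+# restrict the accepted TLS protocols
+tlsEnabledProtocols=TLSv1.2,TLSv1.1,TLSv1
+# tlsEnabledCipherSuites is left unset (null) to use the provider defaults
+```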
+
+To verify that the bookie's keystore and truststore are set up correctly, you can run the following command:
+
+```shell
+openssl s_client -debug -connect localhost:3181 -tls1
+```
+
+NOTE: TLSv1 should be listed under `tlsEnabledProtocols`.
+
+In the output of this command you should see the server's certificate:
+
+```shell
+-----BEGIN CERTIFICATE-----
+{variable sized random bytes}
+-----END CERTIFICATE-----
+```
+
+If the certificate does not show up, or if there are any other error messages, then your keystore is not set up correctly.
+
+## Configuring Clients
+
+TLS is supported only by the new BookKeeper client (BookKeeper versions 4.5.0 and higher); the older clients are not
+supported. The TLS configs are the same as for bookies.
+
+If client authentication is not required by the bookies, the following is a minimal configuration example:
+
+```shell
+tlsProvider=OpenSSL
+clientTrustStore=/var/private/tls/client.truststore.jks
+clientTrustStorePasswordPath=/var/private/tls/client.truststore.passwd
+```
+
+If client authentication is required, then a keystore must be created for each client, and the bookies' truststores must
+trust the certificate in the client's keystore. This may be done using commands that are similar to what we used for
+the [bookie keystore](#bookie-keystore).
+
+And the following must also be configured:
+
+```shell
+tlsClientAuthentication=true
+clientKeyStore=/var/private/tls/client.keystore.jks
+clientKeyStorePasswordPath=/var/private/tls/client.keystore.passwd
+```
+
+NOTE: it is important to restrict access to the store files and corresponding password files via filesystem permissions.
+
+(TBD: add example to use tls in bin/bookkeeper script?)
+
+## Enabling TLS Logging
+
+You can enable TLS debug logging at the JVM level by starting the bookies and/or clients with the `javax.net.debug` system property. For example:
+
+```shell
+-Djavax.net.debug=all
+```
+
+You can find more details in the Oracle documentation on
+[debugging SSL/TLS connections](http://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/ReadDebug.html).
diff --git a/site/docs/4.5.0/security/zookeeper.md b/site/docs/4.5.0/security/zookeeper.md
new file mode 100644
index 0000000..e16be69
--- /dev/null
+++ b/site/docs/4.5.0/security/zookeeper.md
@@ -0,0 +1,41 @@
+---
+title: ZooKeeper Authentication
+prev: ../sasl
+---
+
+## New Clusters
+
+To enable `ZooKeeper` authentication on Bookies or Clients, there are two necessary steps:
+
+1. Create a `JAAS` login file and set the appropriate system property to point to it as described in [GSSAPI (Kerberos)](../sasl#notes).
+2. Set the configuration property `zkEnableSecurity` in each bookie to `true`.
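+
+As a minimal sketch, the two steps on a bookie amount to one JVM flag plus one configuration setting
+(the JAAS file path below is just an illustration):
+
+```shell
+# JVM flag pointing at the JAAS login file
+-Djava.security.auth.login.config=/etc/bookkeeper/bookie_jaas.conf
+
+# in the bookie configuration
+zkEnableSecurity=true
+```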
+
+The metadata stored in `ZooKeeper` is such that only certain clients will be able to modify and read the corresponding znodes.
+The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of znodes can cause cluster
+disruption.
+
+## Migrating Clusters
+
+If you are running a version of BookKeeper that does not support security, or are simply running with security disabled, and you want to make the cluster secure,
+then you need to execute the following steps to enable ZooKeeper authentication with minimal disruption to your operations.
+
+1. Perform a rolling restart setting the `JAAS` login file, which enables bookies or clients to authenticate. At the end of the rolling restart,
+    bookies (or clients) are able to manipulate znodes with strict ACLs, but they will not create znodes with those ACLs.
+2. Perform a second rolling restart of bookies, this time setting the configuration parameter `zkEnableSecurity` to true, which enables the use
+    of secure ACLs when creating znodes.
+3. Currently we do not provide a tool to set ACLs on old znodes. We recommend setting them manually using ZooKeeper tools.
+
+It is also possible to turn off authentication in a secured cluster. To do it, follow these steps:
+
+1. Perform a rolling restart of bookies setting the `JAAS` login file, which enables bookies to authenticate, but setting `zkEnableSecurity` to `false`.
+    At the end of rolling restart, bookies stop creating znodes with secure ACLs, but are still able to authenticate and manipulate all znodes.
+2. You can use ZooKeeper tools to manually reset all ACLs under the znode set in `zkLedgersRootPath`, which defaults to `/ledgers`.
+3. Perform a second rolling restart of bookies, this time omitting the system property that sets the `JAAS` login file.
+
+## Migrating the ZooKeeper ensemble
+
+It is also necessary to enable authentication on the `ZooKeeper` ensemble. To do it, we need to perform a rolling restart of the ensemble and
+set a few properties. Please refer to the ZooKeeper documentation for more details.
+
+1. [Apache ZooKeeper Documentation](http://zookeeper.apache.org/doc/r3.4.6/zookeeperProgrammers.html#sc_ZooKeeperAccessControl)
+2. [Apache ZooKeeper Wiki](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL)
diff --git a/site/docs/latest/admin/bookies.md b/site/docs/latest/admin/bookies.md
index d9c6959..f9b1dcf 100644
--- a/site/docs/latest/admin/bookies.md
+++ b/site/docs/latest/admin/bookies.md
@@ -177,4 +177,4 @@ If the change was the result of an accidental configuration change, the change c
      192.168.1.10:3181
    ```
 
-   See the [AutoRecovery](../autorecovery) documentation for more info on the re-replication process.
\ No newline at end of file
+   See the [AutoRecovery](../autorecovery) documentation for more info on the re-replication process.
diff --git a/site/docs/latest/admin/metrics.md b/site/docs/latest/admin/metrics.md
index e2595d6..635135f 100644
--- a/site/docs/latest/admin/metrics.md
+++ b/site/docs/latest/admin/metrics.md
@@ -38,4 +38,4 @@ To enable stats:
 <!-- ## Enabling stats in the bookkeeper library
 
 TODO
--->
\ No newline at end of file
+-->
diff --git a/site/docs/latest/deployment/manual.md b/site/docs/latest/deployment/manual.md
index 654595f..daafd55 100644
--- a/site/docs/latest/deployment/manual.md
+++ b/site/docs/latest/deployment/manual.md
@@ -53,4 +53,4 @@ Once cluster metadata formatting has been completed, your BookKeeper cluster is
 ## AutoRecovery
 
 [this guide](../../admin/autorecovery)
--->
\ No newline at end of file
+-->
diff --git a/site/docs/latest/index.md b/site/docs/latest/index.md
index ad8d47a..39f4eb9 100644
--- a/site/docs/latest/index.md
+++ b/site/docs/latest/index.md
@@ -1,5 +1,5 @@
 ---
-title: Apache BookKeeper 4.5.0-SNAPSHOT Documentation 
+title: Apache BookKeeper 4.6.0-SNAPSHOT Documentation 
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/site/docs/latest/reference/config.md b/site/docs/latest/reference/config.md
index 6a420d0..8997b6b 100644
--- a/site/docs/latest/reference/config.md
+++ b/site/docs/latest/reference/config.md
@@ -6,4 +6,4 @@ subtitle: A reference guide to all of BookKeeper's configurable parameters
 
 The table below lists parameters that you can set to configure {% pop bookies %}. All configuration takes place in the `bk_server.conf` file in the `bookkeeper-server/conf` directory of your [BookKeeper installation](../../getting-started/installing).
 
-{% include config.html id="bk_server" %}
\ No newline at end of file
+{% include config.html id="bk_server" %}
diff --git a/site/docs/latest/releaseNotes.md b/site/docs/latest/releaseNotes.md
index 08bde01..668c5bc 100644
--- a/site/docs/latest/releaseNotes.md
+++ b/site/docs/latest/releaseNotes.md
@@ -1,5 +1,5 @@
 ---
-title: Apache BookKeeper 4.5.0-SNAPSHOT Release Notes
+title: Apache BookKeeper 4.6.0-SNAPSHOT Release Notes
 ---
 
 Apache BookKeeper {{ site.latest_version }} is still under developement.
diff --git a/site/docs/latest/releaseNotesTemplate.md b/site/docs/latest/releaseNotesTemplate.md
index 15822e6..7fcc5dc 100644
--- a/site/docs/latest/releaseNotesTemplate.md
+++ b/site/docs/latest/releaseNotesTemplate.md
@@ -1,5 +1,5 @@
 ---
-title: Apache BookKeeper 4.5.0-SNAPSHOT Release Notes
+title: Apache BookKeeper 4.6.0-SNAPSHOT Release Notes
 ---
 
 [provide a summary of this release]
diff --git a/site/releases.md b/site/releases.md
index c54419c..7680f19 100644
--- a/site/releases.md
+++ b/site/releases.md
@@ -27,6 +27,10 @@ Client Guide | API docs
 
 ## News
 
+### [date] Release 4.5.0 available
+
+[INSERT SUMMARY]
+
 ### 16 May, 2016: release 4.4.0 available
 
 This is the fourth release of BookKeeper as an Apache Top Level Project!

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.

[bookkeeper] 09/10: ISSUE #356: Release notes 4.5.0

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit 06178108f4995775f0c7744991c0882119e00d37
Author: Sijie Guo <si...@apache.org>
AuthorDate: Thu Aug 10 13:28:33 2017 -0700

    ISSUE #356: Release notes 4.5.0
    
    Descriptions of the changes in this PR:
    
    - summary for release 4.5.0
    - highlights for 4.5.0
    - full list of JIRA and Github issues.
    
    Author: Sijie Guo <si...@apache.org>
    
    Reviewers: Enrico Olivelli <eo...@gmail.com>, Jia Zhai <None>, Matteo Merli <mm...@apache.org>, Venkateswararao Jujjuri (JV) <None>
    
    This closes #402 from sijie/release_notes_4.5.0, closes #356
---
 site/docs/4.5.0/overview/releaseNotes.md | 500 ++++++++++++++++++++++++++++++-
 site/releases.md                         |  10 +-
 2 files changed, 504 insertions(+), 6 deletions(-)

diff --git a/site/docs/4.5.0/overview/releaseNotes.md b/site/docs/4.5.0/overview/releaseNotes.md
index c7845ae..7df07f8 100644
--- a/site/docs/4.5.0/overview/releaseNotes.md
+++ b/site/docs/4.5.0/overview/releaseNotes.md
@@ -2,16 +2,508 @@
 title: Apache BookKeeper 4.5.0 Release Notes
 ---
 
-[provide a summary of this release]
+This is the fifth release of BookKeeper as an Apache Top Level Project!
+
+The 4.5.0 release incorporates hundreds of new fixes, improvements, and features since the previous major release, 4.4.0,
+which was released over a year ago. It is a big milestone for the Apache BookKeeper community, converging development from three
+main branches (Salesforce, Twitter and Yahoo).
 
 Apache BookKeeper users are encouraged to upgrade to 4.5.0. The technical details of this release are summarized
 below.
 
 ## Highlights
 
-[List the highlights]
+The main features in 4.5.0 are centered around the following areas:
+
+- Dependencies Upgrade
+- Security
+- Public API
+- Performance
+- Operations
+
+### Dependencies Upgrade
+
+Here is a list of dependencies upgraded in 4.5.0:
+
+- Moved the development from Java 7 to Java 8.
+- Upgrade Protobuf to `2.6`.
+- Upgrade ZooKeeper from `3.4` to `3.5`.
+- Upgrade Netty to `4.1`.
+- Upgrade Guava to `20.0`.
+- Upgrade SLF4J to `1.7.25`.
+- Upgrade Codahale to `3.1.0`.
+
+### Security
+
+Prior to this release, Apache BookKeeper only supported simple `DIGEST-MD5` type authentication.
+
+With this release of Apache BookKeeper, a number of features are introduced that can be used, together or separately,
+to secure a BookKeeper cluster.
+
+The following security features are currently supported.
+
+- Authentication of connections to bookies from clients, using either `TLS` or `SASL` (Kerberos).
+- Authentication of connections from clients, bookies, and autorecovery daemons to `ZooKeeper`, when using zookeeper-based
+    ledger managers.
+- Encryption of data transferred between bookies and clients, and between bookies and autorecovery daemons, using `TLS`.
+
+It's worth noting that those security features are optional - non-secured clusters are supported, as well as a mix
+of authenticated, unauthenticated, encrypted and non-encrypted clients.
+
+For more details, have a look at [BookKeeper Security](../../security/overview).
+
+### Public API
+
+There are multiple new client features introduced in 4.5.0.
+
+#### LedgerHandleAdv
+
+The [Ledger API](../../api/ledger-api) is the low level API provided by BookKeeper for interacting with `ledgers` in a bookkeeper cluster.
+It is simple but not flexible on ledger id or entry id generation. Apache BookKeeper introduces `LedgerHandleAdv`
+as an extension of the existing `LedgerHandle` for advanced usage. The new `LedgerHandleAdv` allows applications to provide
+their own `ledger-id` and assign the `entry-id` when adding entries.
+
+See [Ledger Advanced API](../../api/ledger-adv-api) for more details.
+
+#### Long Poll
+
+`Long Poll` is a main feature that [DistributedLog](https://distributedlog.io) uses to achieve low-latency tailing.
+This big feature has been merged back in 4.5.0 and is available to BookKeeper users.
+
+This feature includes two main changes: one is the `LastAddConfirmed` piggyback, and the other is a new `long poll` read API.
+
+The first change piggybacks the latest `LastAddConfirmed` along with the read response, so your `LastAddConfirmed` will be automatically advanced
+as your read traffic continues. It significantly reduces the traffic from explicitly polling `LastAddConfirmed` and hence reduces the end-to-end latency.
+
+The second change provides a new `long poll` read API, allowing tailing reads without polling `LastAddConfirmed` every time after readers exhaust known entries.
+Although the `long poll` API brings great latency improvements on tailing reads, it is still a very low-level primitive.
+It is still recommended to use a high level API (e.g. the [DistributedLog API](../../api/distributedlog-api)) for tailing and streaming use cases.
+
+See [Streaming Reads](https://distributedlog.incubator.apache.org/docs/latest/user_guide/design/main.html#streaming-reads) for more details.
+
+#### Explicit LAC
+
+Prior to 4.5.0, the `LAC` is only advanced when subsequent entries are added. If there are no subsequent entries added,
+the last entry written will not be visible to readers until the ledger is closed. High-level clients (e.g. DistributedLog) or applications
+have to work around this by writing some sort of `control records` to advance the `LAC`.
+
+In 4.5.0, a new `explicit lac` feature is introduced to periodically advance the `LAC` if there are no subsequent entries added. This feature
+can be enabled by setting `explicitLacInterval` to a positive value.
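+
+As a minimal sketch, assuming the interval is set in the client configuration and expressed in milliseconds
+(see BOOKKEEPER-1007 in the list below); the value here is purely illustrative:
+
+```shell
+# advance the LAC roughly every second even if no new entries are added
+explicitLacInterval=1000
+```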
+
+### Performance
+
+There are a lot of performance-related bug fixes and improvements in 4.5.0. These changes include:
+
+- Upgraded netty from 3.x to 4.x to leverage buffer pooling and reduce memory copies.
+- Moved development from Java 7 to Java 8 to take advantage of Java 8 features.
+- A lot of improvements around scheduling and threading on `bookies`.
+- Delay ensemble change to improve tail latency.
+- Parallel ledger recovery to improve the recovery speed.
+- ...
+
+We outline the following four changes below. For a complete list of performance improvements, please check the `full list of changes` at the end.
+
+#### Netty 4 Upgrade
+
+The major performance improvement introduced in 4.5.0 is upgrading netty from 3.x to [4.x](http://netty.io/wiki/new-and-noteworthy-in-4.0.html).
+
+For more details, please read the [upgrade guide](../../admin/upgrade) for netty-related tips when upgrading bookkeeper from 4.4.0 to 4.5.0.
+
+#### Delay Ensemble Change
+
+`Ensemble Change` is a feature that Apache BookKeeper uses to achieve high availability. However, it is an expensive metadata operation.
+Especially when Apache BookKeeper is deployed in a multi-data-center environment, losing a data center will cause a churn of metadata
+operations due to ensemble changes. `Delay Ensemble Change` is introduced in 4.5.0 to overcome this problem. Enabling this feature means
+an `Ensemble Change` will only occur when clients can't receive enough valid responses to satisfy the `ack-quorum` constraint. This feature
+improves the tail latency.
+
+To enable this feature, please set `delayEnsembleChange` to `true` on your clients.
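+
+A minimal sketch of the corresponding client configuration change:
+
+```shell
+# only trigger an ensemble change when the ack-quorum constraint cannot be satisfied
+delayEnsembleChange=true
+```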
+
+#### Parallel Ledger Recovery
+
+BookKeeper clients recover entries one-by-one during ledger recovery. If a ledger has a very large volume of traffic, it will have
+a large number of entries to recover when client failures occur. BookKeeper introduces `parallel ledger recovery` in 4.5.0 to allow
+batch recovery and improve ledger recovery speed.
+
+To enable this feature, please set `enableParallelRecoveryRead` to `true` on your clients. You can also set `recoveryReadBatchSize`
+to control the batch size of recovery read.
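+
+A minimal sketch of the corresponding client configuration (the batch size below is only an illustration):
+
+```shell
+enableParallelRecoveryRead=true
+# number of entries to read per recovery batch (illustrative value)
+recoveryReadBatchSize=100
+```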
+
+#### Multiple Journals
+
+Prior to 4.5.0, bookies only allow configuring one journal device. If you want to have high write bandwidth, you can raid multiple
+disks into one device and mount that device as the journal directory. However, because there is only one journal thread, this approach doesn't
+actually improve the write bandwidth.
+
+BookKeeper introduces multiple journal directories support in 4.5.0. Users can configure multiple devices for journal directories.
+
+To enable this feature, please use `journalDirectories` rather than `journalDirectory`.
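+
+A minimal sketch of the bookie configuration, assuming the directories are given as a comma-separated list
+(the paths are only illustrations):
+
+```shell
+journalDirectories=/mnt/journal-disk-1/bk-journal,/mnt/journal-disk-2/bk-journal
+```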
+
+### Operations
+
+#### LongHierarchicalLedgerManager
+
+Apache BookKeeper supports a pluggable metadata store. By default, it uses Apache ZooKeeper as its metadata store. Among the zookeeper-based
+ledger manager implementations, `HierarchicalLedgerManager` is the most popular and widely adopted ledger manager. However, it has a major
+limitation: it assumes the `ledger-id` is a 32-bit integer, which limits the number of ledgers to `2^32`.
+
+`LongHierarchicalLedgerManager` is introduced to overcome this limitation.
+
+See [Ledger Manager](../../getting-started/concepts/#ledger-manager) for more details.
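+
+As a hypothetical sketch, selecting this ledger manager is a configuration change; the factory class name below is
+an assumption, so verify it against the [configuration reference](../../reference/config):
+
+```shell
+ledgerManagerFactoryClass=org.apache.bookkeeper.meta.LongHierarchicalLedgerManagerFactory
+```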
+
+#### Weight-based placement policy
+
+`Rack-Aware` and `Region-Aware` are the two placement policies available in the BookKeeper client. They place ensembles based
+on the user's configured network topology. However, they both assume that all nodes are equal. `weight-based` placement is introduced in 4.5.0 to
+improve the existing placement policies. `weight-based` placement was not built as a separate policy; it is built into the existing placement policies.
+If you are using `Rack-Aware` or `Region-Aware`, you can simply enable `weight-based` placement by setting `diskWeightBasedPlacementEnabled` to `true`.
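+
+The corresponding client configuration change is a single flag:
+
+```shell
+# weigh bookies by available disk space within the existing placement policy
+diskWeightBasedPlacementEnabled=true
+```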
+
+#### Customized Ledger Metadata
+
+A `Map<String, byte[]>` is introduced in the ledger metadata in 4.5.0. Clients are now allowed to pass in a key/value map when creating ledgers.
+This customized ledger metadata can later be used by a user-defined placement policy. This extends the flexibility of the bookkeeper API.
+
+#### Add Prometheus stats provider
+
+A new [Prometheus](https://prometheus.io/) [stats provider](https://github.com/apache/bookkeeper/tree/master/bookkeeper-stats-providers/prometheus-metrics-provider)
+is introduced in 4.5.0. It simplifies metric collection when running bookkeeper on [kubernetes](https://kubernetes.io/).
+
+#### Add more tools in BookieShell
+
+`BookieShell` is the tool provided by Apache BookKeeper to operate clusters. There are multiple important commands introduced in 4.5.0, for example, `decommissionbookie`,
+`expandstorage`, `lostbookierecoverydelay`, `triggeraudit`.
+
+For the complete list of commands in `BookieShell`, please read [BookKeeper CLI tool reference](../../reference/cli).
+
+## Full list of changes
+
+### JIRA
 
-## Details
+#### Sub-task
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-552'>BOOKKEEPER-552</a>] -         64 Bits Ledger ID Generation
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-553'>BOOKKEEPER-553</a>] -         New LedgerManager for 64 Bits Ledger ID Management in ZooKeeper
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-588'>BOOKKEEPER-588</a>] -         SSL support
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-873'>BOOKKEEPER-873</a>] -         Enhance CreatedLedger API to accept ledgerId as input
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-949'>BOOKKEEPER-949</a>] -         Allow entryLog creation even when bookie is in RO mode for compaction
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-965'>BOOKKEEPER-965</a>] -         Long Poll: Changes to the Write Path
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-997'>BOOKKEEPER-997</a>] -         Wire protocol change for supporting long poll
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1017'>BOOKKEEPER-1017</a>] -         Create documentation for ZooKeeper ACLs
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1086'>BOOKKEEPER-1086</a>] -         Ledger Recovery - Refactor PendingReadOp
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1087'>BOOKKEEPER-1087</a>] -         Ledger Recovery - Add a parallel reading request in PendingReadOp
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1088'>BOOKKEEPER-1088</a>] -         Ledger Recovery - Add a ReadEntryListener to callback on individual request
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1089'>BOOKKEEPER-1089</a>] -         Ledger Recovery - allow batch reads in ledger recovery
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1092'>BOOKKEEPER-1092</a>] -         Ledger Recovery - Add Test Case for Parallel Ledger Recovery
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1093'>BOOKKEEPER-1093</a>] -         Piggyback LAC on ReadResponse
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1094'>BOOKKEEPER-1094</a>] -         Long Poll - Server and Client Side Changes
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1095'>BOOKKEEPER-1095</a>] -         Long Poll - Client side changes
+</li>
+</ul>
+                            
+#### Bug
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-852'>BOOKKEEPER-852</a>] -         Release LedgerDescriptor and master-key objects when not used anymore
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-903'>BOOKKEEPER-903</a>] -         MetaFormat BookieShell Command is not deleting UnderReplicatedLedgers list from the ZooKeeper
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-907'>BOOKKEEPER-907</a>] -         for ReadLedgerEntriesCmd, EntryFormatter should be configurable and HexDumpEntryFormatter should be one of them
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-908'>BOOKKEEPER-908</a>] -         Case to handle BKLedgerExistException
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-924'>BOOKKEEPER-924</a>] -         addEntry() is susceptible to spurious wakeups
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-927'>BOOKKEEPER-927</a>] -         Extend BOOKKEEPER-886 to LedgerHandleAdv too (BOOKKEEPER-886: Allow to disable ledgers operation throttling)
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-933'>BOOKKEEPER-933</a>] -         ClientConfiguration always inherits System properties
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-938'>BOOKKEEPER-938</a>] -         LedgerOpenOp should use digestType from metadata
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-939'>BOOKKEEPER-939</a>] -         Fix typo in bk-merge-pr.py
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-940'>BOOKKEEPER-940</a>] -         Fix findbugs warnings after bumping to java 8
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-952'>BOOKKEEPER-952</a>] -         Fix RegionAwarePlacementPolicy
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-955'>BOOKKEEPER-955</a>] -         in BookKeeperAdmin listLedgers method currentRange variable is not getting updated to next iterator when it has run out of elements
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-956'>BOOKKEEPER-956</a>] -         HierarchicalLedgerManager doesn&#39;t work for ledgerid of length 9 and 10 because of order issue in HierarchicalLedgerRangeIterator
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-958'>BOOKKEEPER-958</a>] -         ZeroBuffer readOnlyBuffer returns ByteBuffer with 0 remaining bytes for length &gt; 64k
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-959'>BOOKKEEPER-959</a>] -         ClientAuthProvider and BookieAuthProvider Public API used Protobuf Shaded classes
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-976'>BOOKKEEPER-976</a>] -         Fix license headers with &quot;Copyright 2016 The Apache Software Foundation&quot;
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-980'>BOOKKEEPER-980</a>] -         BookKeeper Tools doesn&#39;t process the argument correctly
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-981'>BOOKKEEPER-981</a>] -         NullPointerException in RackawareEnsemblePlacementPolicy while running in Docker Container
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-984'>BOOKKEEPER-984</a>] -          BookieClientTest.testWriteGaps tested
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-986'>BOOKKEEPER-986</a>] -         Handle Memtable flush failure
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-987'>BOOKKEEPER-987</a>] -         BookKeeper build is broken due to the shade plugin for commit ecbb053e6e
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-988'>BOOKKEEPER-988</a>] -         Missing license headers
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-989'>BOOKKEEPER-989</a>] -         Enable travis CI for bookkeeper git
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-999'>BOOKKEEPER-999</a>] -         BookKeeper client can leak threads
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1013'>BOOKKEEPER-1013</a>] -         Fix findbugs errors on latest master
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1018'>BOOKKEEPER-1018</a>] -         Allow client to select older V2 protocol (no protobuf)
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1020'>BOOKKEEPER-1020</a>] -         Fix Explicit LAC tests on master
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1021'>BOOKKEEPER-1021</a>] -         Improve the merge script to handle github reviews api
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1031'>BOOKKEEPER-1031</a>] -         ReplicationWorker.rereplicate fails to call close() on ReadOnlyLedgerHandle
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1044'>BOOKKEEPER-1044</a>] -         Entrylogger is not readding rolled logs back to the logChannelsToFlush list when exception happens while trying to flush rolled logs
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1047'>BOOKKEEPER-1047</a>] -         Add missing error code in ZK setData return path
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1058'>BOOKKEEPER-1058</a>] -         Ignore already deleted ledger on replication audit
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1061'>BOOKKEEPER-1061</a>] -         BookieWatcher should not do ZK blocking operations from ZK async callback thread
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1065'>BOOKKEEPER-1065</a>] -         OrderedSafeExecutor should only have 1 thread per bucket
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1071'>BOOKKEEPER-1071</a>] -         BookieRecoveryTest is failing due to a Netty4 IllegalReferenceCountException
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1072'>BOOKKEEPER-1072</a>] -         CompactionTest is flaky when disks are almost full
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1073'>BOOKKEEPER-1073</a>] -         Several stats provider related changes.
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1074'>BOOKKEEPER-1074</a>] -         Remove JMX Bean 
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1075'>BOOKKEEPER-1075</a>] -         BK LedgerMetadata: more memory-efficient parsing of configs
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1076'>BOOKKEEPER-1076</a>] -         BookieShell should be able to read the &#39;FENCE&#39; entry in the log
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1077'>BOOKKEEPER-1077</a>] -         BookKeeper: Local Bookie Journal and ledger paths
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1079'>BOOKKEEPER-1079</a>] -         shell lastMark throws NPE
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1098'>BOOKKEEPER-1098</a>] -         ZkUnderreplicationManager can build up an unbounded number of watchers
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1101'>BOOKKEEPER-1101</a>] -         BookKeeper website menus not working under https
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1102'>BOOKKEEPER-1102</a>] -         org.apache.bookkeeper.client.BookKeeperDiskSpaceWeightedLedgerPlacementTest.testDiskSpaceWeightedBookieSelectionWithBookiesBeingAdded is unreliable
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1103'>BOOKKEEPER-1103</a>] -         LedgerMetadataCreateTest bug in ledger id generation causes intermittent hang
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1104'>BOOKKEEPER-1104</a>] -         BookieInitializationTest.testWithDiskFullAndAbilityToCreateNewIndexFile testcase is unreliable
+</li>
+</ul>
+                            
+#### Improvement
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-612'>BOOKKEEPER-612</a>] -         RegionAwarePlacement Policy
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-748'>BOOKKEEPER-748</a>] -         Move fence requests out of read threads
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-757'>BOOKKEEPER-757</a>] -         Ledger Recovery Improvement
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-759'>BOOKKEEPER-759</a>] -         bookkeeper: delay ensemble change if it doesn&#39;t break ack quorum requirement
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-772'>BOOKKEEPER-772</a>] -         Reorder read sequnce 
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-874'>BOOKKEEPER-874</a>] -         Explict LAC from Writer to Bookies
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-881'>BOOKKEEPER-881</a>] -         upgrade surefire plugin to 2.19
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-887'>BOOKKEEPER-887</a>] -         Allow to use multiple bookie journals
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-922'>BOOKKEEPER-922</a>] -         Create a generic (K,V) map to store ledger metadata
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-935'>BOOKKEEPER-935</a>] -         Publish sources and javadocs to Maven Central
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-937'>BOOKKEEPER-937</a>] -         Upgrade protobuf to 2.6
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-944'>BOOKKEEPER-944</a>] -         Multiple issues and improvements to BK Compaction.
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-945'>BOOKKEEPER-945</a>] -         Add counters to track the activity of auditor and replication workers
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-946'>BOOKKEEPER-946</a>] -         Provide an option to delay auto recovery of lost bookies
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-961'>BOOKKEEPER-961</a>] -         Assing read/write request for same ledger to a single thread
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-962'>BOOKKEEPER-962</a>] -         Add more journal timing stats
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-963'>BOOKKEEPER-963</a>] -         Allow to use multiple journals in bookie
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-964'>BOOKKEEPER-964</a>] -         Add concurrent maps and sets for primitive types
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-966'>BOOKKEEPER-966</a>] -         change the bookieServer cmdline to make conf-file and option co-exist
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-968'>BOOKKEEPER-968</a>] -         Entry log flushes happen on log rotation and cause long spikes in IO utilization
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-970'>BOOKKEEPER-970</a>] -         Bump zookeeper version to 3.5
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-971'>BOOKKEEPER-971</a>] -         update bk codahale stats provider version
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-998'>BOOKKEEPER-998</a>] -         Increased the max entry size to 5MB
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1001'>BOOKKEEPER-1001</a>] -         Make LocalBookiesRegistry.isLocalBookie() public
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1002'>BOOKKEEPER-1002</a>] -         BookieRecoveryTest can run out of file descriptors
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1003'>BOOKKEEPER-1003</a>] -         Fix TestDiskChecker so it can be used on /dev/shm
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1004'>BOOKKEEPER-1004</a>] -         Allow bookie garbage collection to be triggered manually from tests
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1007'>BOOKKEEPER-1007</a>] -         Explicit LAC: make the interval configurable in milliseconds instead of seconds
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1008'>BOOKKEEPER-1008</a>] -         Move to netty4
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1010'>BOOKKEEPER-1010</a>] -         Bump up Guava version to 20.0
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1022'>BOOKKEEPER-1022</a>] -         Make BookKeeperAdmin implement AutoCloseable
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1039'>BOOKKEEPER-1039</a>] -         bk-merge-pr.py ask to run findbugs and rat before merge
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1046'>BOOKKEEPER-1046</a>] -         Avoid long to Long conversion in OrderedSafeExecutor task submit
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1048'>BOOKKEEPER-1048</a>] -         Use ByteBuf in LedgerStorageInterface
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1050'>BOOKKEEPER-1050</a>] -         Cache journalFormatVersionToWrite when starting Journal
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1051'>BOOKKEEPER-1051</a>] -         Fast shutdown for GarbageCollectorThread
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1052'>BOOKKEEPER-1052</a>] -         Print autorecovery enabled or not in bookie shell
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1053'>BOOKKEEPER-1053</a>] -         Upgrade RAT maven version to 0.12 and ignore Eclipse project files
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1055'>BOOKKEEPER-1055</a>] -         Optimize handling of masterKey in case it is empty
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1056'>BOOKKEEPER-1056</a>] -         Removed PacketHeader serialization/deserialization allocation
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1063'>BOOKKEEPER-1063</a>] -         Use executure.execute() instead of submit() to avoid creation of unused FutureTask
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1066'>BOOKKEEPER-1066</a>] -         Introduce GrowableArrayBlockingQueue
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1068'>BOOKKEEPER-1068</a>] -         Expose ByteBuf in LedgerEntry to avoid data copy
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1069'>BOOKKEEPER-1069</a>] -         If client uses V2 proto, set the connection to always decode V2 messages
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1083'>BOOKKEEPER-1083</a>] -         Improvements on OrderedSafeExecutor
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1084'>BOOKKEEPER-1084</a>] -         Make variables finale if necessary
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1085'>BOOKKEEPER-1085</a>] -         Introduce the AlertStatsLogger
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1090'>BOOKKEEPER-1090</a>] -         Use LOG.isDebugEnabled() to avoid unexpected allocations
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1096'>BOOKKEEPER-1096</a>] -         When ledger is deleted, along with leaf node all the eligible branch nodes also should be deleted in ZooKeeper.
+</li>
+</ul>
+                
+#### New Feature
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-390'>BOOKKEEPER-390</a>] -         Provide support for ZooKeeper authentication
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-391'>BOOKKEEPER-391</a>] -         Support Kerberos authentication of bookkeeper
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-575'>BOOKKEEPER-575</a>] -         Bookie SSL support
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-670'>BOOKKEEPER-670</a>] -         Longpoll Read &amp; Piggyback Support
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-912'>BOOKKEEPER-912</a>] -         Allow EnsemblePlacementPolicy to choose bookies using ledger custom data (multitenancy support)
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-928'>BOOKKEEPER-928</a>] -         Add custom client supplied metadata field to LedgerMetadata
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-930'>BOOKKEEPER-930</a>] -         Option to disable Bookie networking
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-941'>BOOKKEEPER-941</a>] -         Introduce Feature Switches For controlling client and server behavior
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-948'>BOOKKEEPER-948</a>] -         Provide an option to add more ledger/index directories to a bookie
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-950'>BOOKKEEPER-950</a>] -         Ledger placement policy to accommodate different storage capacity of bookies
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-969'>BOOKKEEPER-969</a>] -         Security Support
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-983'>BOOKKEEPER-983</a>] -         BookieShell Command for LedgerDelete
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-991'>BOOKKEEPER-991</a>] -         bk shell - Get a list of all on disk files
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-992'>BOOKKEEPER-992</a>] -         ReadLog Command Enhancement
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1019'>BOOKKEEPER-1019</a>] -         Support for reading entries after LAC (causal consistency driven by out-of-band communications)
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1034'>BOOKKEEPER-1034</a>] -         When all disks are full, start Bookie in RO mode if RO mode is enabled 
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1067'>BOOKKEEPER-1067</a>] -         Add Prometheus stats provider
+</li>
+</ul>
+                                            
+#### Story
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-932'>BOOKKEEPER-932</a>] -         Move to JDK 8
+</li>
+</ul>
+                
+#### Task
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-931'>BOOKKEEPER-931</a>] -         Update the committers list on website
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-996'>BOOKKEEPER-996</a>] -         Apache Rat Check Failures
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1012'>BOOKKEEPER-1012</a>] -         Shade and relocate Guava
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1027'>BOOKKEEPER-1027</a>] -         Cleanup main README and main website page
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1038'>BOOKKEEPER-1038</a>] -         Fix findbugs warnings and upgrade to 3.0.4
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1043'>BOOKKEEPER-1043</a>] -         Upgrade Apache Parent Pom Reference to latest version
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1054'>BOOKKEEPER-1054</a>] -         Add gitignore file
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1059'>BOOKKEEPER-1059</a>] -         Upgrade to SLF4J-1.7.25
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1060'>BOOKKEEPER-1060</a>] -         Add utility to use SafeRunnable from Java8 Lambda
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1070'>BOOKKEEPER-1070</a>] -         bk-merge-pr.py use apache-rat:check goal instead of rat:rat
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1091'>BOOKKEEPER-1091</a>] -         Remove Hedwig from BookKeeper website page
+</li>
+</ul>
+            
+#### Test
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-967'>BOOKKEEPER-967</a>] -         Create new testsuite for testing RackAwareEnsemblePlacementPolicy using ScriptBasedMapping.
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1045'>BOOKKEEPER-1045</a>] -         Execute tests in different JVM processes
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1064'>BOOKKEEPER-1064</a>] -         ConcurrentModificationException in AuditorLedgerCheckerTest
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1078'>BOOKKEEPER-1078</a>] -         Local BookKeeper enhancements for testability
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-1097'>BOOKKEEPER-1097</a>] -         GC test when no WritableDirs
+</li>
+</ul>
+        
+#### Wish
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/BOOKKEEPER-943'>BOOKKEEPER-943</a>] -         Reduce log level of AbstractZkLedgerManager for register/unregister ReadOnlyLedgerHandle
+</li>
+</ul>
 
-[list to issues list]
+### Github
 
+- [https://github.com/apache/bookkeeper/milestone/1](https://github.com/apache/bookkeeper/milestone/1)
diff --git a/site/releases.md b/site/releases.md
index 7680f19..9ad9eb0 100644
--- a/site/releases.md
+++ b/site/releases.md
@@ -27,9 +27,15 @@ Client Guide | API docs
 
 ## News
 
-### [date] Release 4.5.0 available
+### 10 August, 2017: Release 4.5.0 available
 
-[INSERT SUMMARY]
+This is the fifth release of BookKeeper as an Apache Top Level Project!
+
+The 4.5.0 release incorporates hundreds of new fixes, improvements, and features since the previous major release, 4.4.0,
+which was released over a year ago. It is a big milestone for the Apache BookKeeper community, converging three
+main development branches (Salesforce, Twitter and Yahoo).
+
+See [BookKeeper 4.5.0 Release Notes](../docs/4.5.0/overview/releaseNotes) for details.
 
 ### 16 May, 2016: release 4.4.0 available
 

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.

[bookkeeper] 03/10: Fix zkCli issue

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit 6a554062bdbe6cc8d4e67205c5d8bdcec80fdfae
Author: zhaijack <zh...@gmail.com>
AuthorDate: Mon Aug 7 18:43:55 2017 +0800

    Fix zkCli issue
    
    Descriptions of the changes in this PR:
    /opt/zk/bin/zkCli.sh was missed when the zkCli calls were replaced with /opt/bookkeeper/bin/bookkeeper org.apache.zookeeper.ZooKeeperMain, which caused:
    
    /opt/bookkeeper/entrypoint.sh: line 61: /opt/zk/bin/zkCli.sh: No such file or directory
    
    Author: zhaijack <zh...@gmail.com>
    
    Reviewers: Sijie Guo <None>
    
    This closes #404 from zhaijack/docker_fix
---
 docker/scripts/entrypoint.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docker/scripts/entrypoint.sh b/docker/scripts/entrypoint.sh
index 7610361..310970a 100755
--- a/docker/scripts/entrypoint.sh
+++ b/docker/scripts/entrypoint.sh
@@ -59,7 +59,7 @@ echo "wait for zookeeper"
 until /opt/bookkeeper/bin/bookkeeper org.apache.zookeeper.ZooKeeperMain -server ${BK_zkServers} ls /; do sleep 5; done
 
 echo "create the zk root dir for bookkeeper"
-/opt/zk/bin/zkCli.sh -server ${BK_zkServers} create ${BK_CLUSTER_ROOT_PATH}
+/opt/bookkeeper/bin/bookkeeper org.apache.zookeeper.ZooKeeperMain -server ${BK_zkServers} create ${BK_CLUSTER_ROOT_PATH}
 
 echo "format zk metadata"
 echo "please ignore the failure, if it has already been formatted, "

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.

[bookkeeper] 04/10: Flip apache baseurl from `/test/content` to `/`

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit 77d2bdf600abca51b976671e9d066efcd45615b8
Author: Sijie Guo <si...@apache.org>
AuthorDate: Mon Aug 7 13:52:24 2017 -0700

    Flip apache baseurl from `/test/content` to `/`
    
    Descriptions of the changes in this PR:
    
    INFRA is cutting the bookkeeper website over from CMS to git. We need to flip '/test/content' to '/'.
    
    Author: Sijie Guo <si...@apache.org>
    
    Reviewers: Matteo Merli <None>
    
    This closes #411 from sijie/sijie/move_baseurl_to_root
---
 site/_config.apache.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/site/_config.apache.yml b/site/_config.apache.yml
index 7708fd0..b146843 100644
--- a/site/_config.apache.yml
+++ b/site/_config.apache.yml
@@ -1,2 +1,2 @@
-baseurl: /test/content/
+baseurl: /
 destination: generated_site/content

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.

[bookkeeper] 05/10: Fix typo: TSL should be TLS in server_conf

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit d14a397b2d37c751f6506bc14a027a9e1e0cc2a0
Author: zhaijack <zh...@gmail.com>
AuthorDate: Wed Aug 9 11:42:09 2017 +0800

    Fix typo: TSL should be TLS in server_conf
    
    TSL should be TLS in server_conf
    
    Author: zhaijack <zh...@gmail.com>
    
    Reviewers: Sijie Guo <None>
    
    This closes #421 from zhaijack/fix_conf_typo
---
 bookkeeper-server/conf/bk_server.conf | 22 +++++++++++-----------
 site/_data/config/bk_server.yaml      | 22 +++++++++++-----------
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/bookkeeper-server/conf/bk_server.conf b/bookkeeper-server/conf/bk_server.conf
index e2fbce1..d3667b2 100755
--- a/bookkeeper-server/conf/bk_server.conf
+++ b/bookkeeper-server/conf/bk_server.conf
@@ -154,35 +154,35 @@ journalDirectory=/tmp/bk-txn
 # isForceGCAllowWhenNoSpace=false
 
 #############################################################################
-## TSL settings
+## TLS settings
 #############################################################################
 
-# TSL Provider (JDK or OpenSSL).
-# tslProvider=OpenSSL
+# TLS Provider (JDK or OpenSSL).
+# tlsProvider=OpenSSL
 
 # The path to the class that provides security.
-# tslProviderFactoryClass=org.apache.bookkeeper.security.SSLContextFactory
+# tlsProviderFactoryClass=org.apache.bookkeeper.security.SSLContextFactory
 
 # Type of security used by server.
-# tslClientAuthentication=true
+# tlsClientAuthentication=true
 
 # Bookie Keystore type.
-# tslKeyStoreType=JKS
+# tlsKeyStoreType=JKS
 
 # Bookie Keystore location (path).
-# tslKeyStore=null
+# tlsKeyStore=null
 
 # Bookie Keystore password path, if the keystore is protected by a password.
-# tslKeyStorePasswordPath=null
+# tlsKeyStorePasswordPath=null
 
 # Bookie Truststore type.
-# tslTrustStoreType=null
+# tlsTrustStoreType=null
 
 # Bookie Truststore location (path).
-# tslTrustStore=null
+# tlsTrustStore=null
 
 # Bookie Truststore password path, if the trust store is protected by a password.
-# tslTrustStorePasswordPath=null
+# tlsTrustStorePasswordPath=null
 
 #############################################################################
 ## Long poll request parameter settings
diff --git a/site/_data/config/bk_server.yaml b/site/_data/config/bk_server.yaml
index 9de5bac..af74b5a 100644
--- a/site/_data/config/bk_server.yaml
+++ b/site/_data/config/bk_server.yaml
@@ -88,36 +88,36 @@ groups:
     description: Whether force compaction is allowed when the disk is full or almost full. Forcing GC may get some space back, but may also fill up disk space more quickly. This is because new log files are created before GC, while old garbage log files are deleted after GC.
     default: 'false'
 
-- name: TSL settings
+- name: TLS settings
   params:
   - param: tslProvider
-    description: TSL Provider (JDK or OpenSSL)
+    description: TLS Provider (JDK or OpenSSL)
     default: OpenSSL
-  - param: tslProviderFactoryClass
+  - param: tlsProviderFactoryClass
     description: The path to the class that provides security.
     default: org.apache.bookkeeper.security.SSLContextFactory
-  - param: tslClientAuthentication
+  - param: tlsClientAuthentication
     description: Type of security used by server.
     default: 'true'
-  - param: tslKeyStoreType
+  - param: tlsKeyStoreType
     description: Bookie Keystore type.
     default: JKS
-  - param: tslKeyStore
+  - param: tlsKeyStore
     description: Bookie Keystore location (path).
     default: null
-  - param: tslKeyStore
+  - param: tlsKeyStore
     description: Bookie Keystore location (path).
     default: null
-  - param: tslKeyStorePasswordPath
+  - param: tlsKeyStorePasswordPath
     description: Bookie Keystore password path, if the keystore is protected by a password.
     default: null
-  - param: tslTrustStoreType
+  - param: tlsTrustStoreType
     description: Bookie Truststore type.
     default: null
-  - param: tslTrustStore
+  - param: tlsTrustStore
     description: Bookie Truststore location (path).
     default: null
-  - param: tslTrustStorePasswordPath
+  - param: tlsTrustStorePasswordPath
     description: Bookie Truststore password path, if the truststore is protected by a password.
     default: null
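
For anyone enabling TLS after this rename, a minimal sketch of the corrected keys as they could be uncommented in bk_server.conf (the keystore/truststore paths and types below are placeholders, not shipped defaults):

    tlsProvider=OpenSSL
    tlsProviderFactoryClass=org.apache.bookkeeper.security.SSLContextFactory
    tlsClientAuthentication=true
    tlsKeyStoreType=JKS
    tlsKeyStore=/path/to/bookie.keystore.jks
    tlsKeyStorePasswordPath=/path/to/keystore.password
    tlsTrustStoreType=JKS
    tlsTrustStore=/path/to/bookie.truststore.jks
    tlsTrustStorePasswordPath=/path/to/truststore.password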
 

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.

[bookkeeper] 02/10: ISSUE #338: add first draft Docker image including community suggestions

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit 6f6a32f8e9820e12bb4deab7b18e8982abf53db8
Author: zhaijack <zh...@gmail.com>
AuthorDate: Sun Aug 6 21:08:11 2017 +0800

    ISSUE #338: add first draft Docker image including community suggestions
    
    This is the first part of #335. And it is based on #197
    Main changes:
     327: Docker image: Drop versions and Alpine support.
     260: Docker image: provide a way to pass any desired configuration property via ENV vars.
    
    ---
    Be sure to do all of the following to help us incorporate your contribution
    quickly and easily:
    
    - [X] Make sure the PR title is formatted like:
        `<Issue # or BOOKKEEPER-#>: Description of pull request`
        `e.g. Issue 123: Description ...`
        `e.g. BOOKKEEPER-1234: Description ...`
    - [ ] Make sure tests pass via `mvn clean apache-rat:check install findbugs:check`.
    - [X] Replace `<Issue # or BOOKKEEPER-#>` in the title with the actual Issue/JIRA number.
    
    ---
    
    Author: zhaijack <zh...@gmail.com>
    
    Reviewers: Matteo Merli <None>, Sijie Guo <None>
    
    This closes #342 from zhaijack/issue_338, closes #338
---
 docker/Dockerfile                       |  58 ++++++++++
 docker/Makefile                         | 194 ++++++++++++++++++++++++++++++++
 docker/README.md                        | 174 ++++++++++++++++++++++++++++
 docker/scripts/apply-config-from-env.py |  85 ++++++++++++++
 docker/scripts/entrypoint.sh            |  72 ++++++++++++
 docker/scripts/healthcheck.sh           |  28 +++++
 6 files changed, 611 insertions(+)

diff --git a/docker/Dockerfile b/docker/Dockerfile
new file mode 100644
index 0000000..45d422e
--- /dev/null
+++ b/docker/Dockerfile
@@ -0,0 +1,58 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+FROM centos:7
+MAINTAINER Apache BookKeeper <de...@bookkeeper.apache.org>
+
+ARG BK_VERSION=4.4.0
+ARG DISTRO_NAME=bookkeeper-server-${BK_VERSION}-bin
+ARG GPG_KEY=B3D56514
+
+ENV BOOKIE_PORT=3181
+EXPOSE $BOOKIE_PORT
+ENV BK_USER=bookkeeper
+
+# Download Apache Bookkeeper, untar and clean up
+RUN set -x \
+    && adduser "${BK_USER}" \
+    && yum install -y java-1.8.0-openjdk-headless wget bash python md5sum sha1sum \
+    && mkdir -pv /opt \
+    && cd /opt \
+    && wget -q "https://archive.apache.org/dist/bookkeeper/bookkeeper-${BK_VERSION}/${DISTRO_NAME}.tar.gz" \
+    && wget -q "https://archive.apache.org/dist/bookkeeper/bookkeeper-${BK_VERSION}/${DISTRO_NAME}.tar.gz.asc" \
+    && wget -q "https://archive.apache.org/dist/bookkeeper/bookkeeper-${BK_VERSION}/${DISTRO_NAME}.tar.gz.md5" \
+    && wget -q "https://archive.apache.org/dist/bookkeeper/bookkeeper-${BK_VERSION}/${DISTRO_NAME}.tar.gz.sha1" \
+    && md5sum -c ${DISTRO_NAME}.tar.gz.md5 \
+    && sha1sum -c ${DISTRO_NAME}.tar.gz.sha1 \
+    && gpg --keyserver ha.pool.sks-keyservers.net --recv-key "$GPG_KEY" \
+    && gpg --batch --verify "$DISTRO_NAME.tar.gz.asc" "$DISTRO_NAME.tar.gz" \
+    && tar -xzf "$DISTRO_NAME.tar.gz" \
+    && mv bookkeeper-server-${BK_VERSION}/ /opt/bookkeeper/ \
+    && rm -rf "$DISTRO_NAME.tar.gz" "$DISTRO_NAME.tar.gz.asc" "$DISTRO_NAME.tar.gz.md5" "$DISTRO_NAME.tar.gz.sha1" \
+    && yum remove -y wget \
+    && yum clean all
+
+WORKDIR /opt/bookkeeper
+
+COPY scripts/apply-config-from-env.py scripts/entrypoint.sh scripts/healthcheck.sh /opt/bookkeeper/
+
+ENTRYPOINT [ "/bin/bash", "/opt/bookkeeper/entrypoint.sh" ]
+CMD ["/opt/bookkeeper/bin/bookkeeper", "bookie"]
+
+HEALTHCHECK --interval=10s --timeout=60s CMD /bin/bash /opt/bookkeeper/healthcheck.sh
diff --git a/docker/Makefile b/docker/Makefile
new file mode 100644
index 0000000..27c8ce6
--- /dev/null
+++ b/docker/Makefile
@@ -0,0 +1,194 @@
+#!/bin/bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+
+VERSION ?= centos
+IMAGE ?= bookkeeper/bookie:$(VERSION)
+BOOKIE ?= 1
+DOCKER_NETWORK ?= bk_network
+
+BUILD_DIR ?= $(VERSION)
+
+CONTAINER_NAME = bookkeeper-$(BOOKIE)
+DOCKER_HOSTNAME = $(shell hostname)
+BK_LOCAL_DATA_DIR = /tmp/test_bk
+BK_LOCAL_CONTAINER_DATA_DIR = $(BK_LOCAL_DATA_DIR)/$(CONTAINER_NAME)
+BK_DIR = /data
+BK_zkLedgersRootPath = /ledgers
+
+ZK_CONTAINER_NAME=test_zookeeper
+ZK_LOCAL_DATA_DIR=$(BK_LOCAL_DATA_DIR)/zookkeeper
+
+
+CONTAINER_IP=$(shell docker inspect --format '{{ .NetworkSettings.IPAddress }}' $(CONTAINER_NAME))
+
+# -------------------------------- #
+
+.PHONY: all build run create start stop shell exec root-shell root-exec info ip clean-files clean
+
+# -------------------------------- #
+
+all:
+	make info
+
+# -------------------------------- #
+
+# Build the bookkeeper image.
+#   make build
+build:
+	cd $(BUILD_DIR) ; \
+	time docker build \
+	    -t $(IMAGE) .
+
+# -------------------------------- #
+
+# Create and run a bookkeeper container with data persisted on local filesystem. It needs the zookkeeper container.
+# In order to launch several bookies, the command need the bookie number
+#   make run-bk BOOKIE=4
+
+run-bk:
+	mkdir -p $(BK_LOCAL_DATA_DIR) \
+			$(BK_LOCAL_CONTAINER_DATA_DIR) \
+			$(BK_LOCAL_CONTAINER_DATA_DIR)/journal \
+			$(BK_LOCAL_CONTAINER_DATA_DIR)/ledger \
+			$(BK_LOCAL_CONTAINER_DATA_DIR)/index
+	
+	-docker rm -f $(CONTAINER_NAME)
+	docker run -it\
+		--network $(DOCKER_NETWORK) \
+	    --volume $(BK_LOCAL_CONTAINER_DATA_DIR)/journal:$(BK_DIR)/journal \
+	    --volume $(BK_LOCAL_CONTAINER_DATA_DIR)/ledger:$(BK_DIR)/ledger \
+	    --volume $(BK_LOCAL_CONTAINER_DATA_DIR)/index:$(BK_DIR)/index \
+	    --name "$(CONTAINER_NAME)" \
+	    --hostname "$(CONTAINER_NAME)" \
+	    --env BK_zkServers=$(ZK_CONTAINER_NAME):2181 \
+	    --env BK_zkLedgersRootPath=$(BK_zkLedgersRootPath) \
+	    $(IMAGE)
+
+# -------------------------------- #
+
+# Create run and destroy a container that will format zookkeeper metadata
+#   make run-format
+
+run-format:
+	docker run -it --rm \
+		--network $(DOCKER_NETWORK) \
+		--env BK_zkServers=$(ZK_CONTAINER_NAME):2181 \
+		$(IMAGE) \
+		bookkeeper shell metaformat $(FORMAT_OPTS)
+
+# -------------------------------- #
+
+# Create and run the zookkeeper container needed by the ensemble
+#   make run-zk
+
+run-zk:
+	-docker network create $(DOCKER_NETWORK)
+	mkdir -pv $(BK_LOCAL_DATA_DIR) $(ZK_LOCAL_DATA_DIR) $(ZK_LOCAL_DATA_DIR)/data $(ZK_LOCAL_DATA_DIR)/datalog
+	-docker rm -f $(ZK_CONTAINER_NAME)
+	docker run -it --rm \
+		--network $(DOCKER_NETWORK) \
+		--name "$(ZK_CONTAINER_NAME)" \
+		--hostname "$(ZK_CONTAINER_NAME)" \
+		-v $(ZK_LOCAL_DATA_DIR)/data:/data \
+		-v $(ZK_LOCAL_DATA_DIR)/datalog:/datalog \
+		-p 2181:2181 \
+		zookeeper
+
+# -------------------------------- #
+
+# Create and run a container running the bookkeeper tutorial application (a simple dice rolling application).
+# It's possible to run several dice applications in order to simulate a real life concurrent scenario.
+#   make run-dice
+run-dice:
+	docker run -it --rm \
+		--network $(DOCKER_NETWORK) \
+		--env ZOOKEEPER_SERVERS=$(ZK_CONTAINER_NAME):2181 \
+		caiok/bookkeeper-tutorial
+
+# -------------------------------- #
+
+# This is an example of a full bookkeeper ensemble of 3 bookies, a zookkeeper server and 2 client dice applications.
+# On MacOS please run these command manually in several terminals
+#   make run-demo
+run-demo:
+	$(eval WAIT_CMD := read -p 'Press Enter to close...')
+	$(TERMINAL_EMULATOR) -e "bash -l -c \"make run-zk ; $(WAIT_CMD)"\"
+	sleep 3
+	$(TERMINAL_EMULATOR) -e "bash -l -c \"make run-bk BOOKIE=1 TRY_METAFORMAT=true; $(WAIT_CMD)\""
+	$(TERMINAL_EMULATOR) -e "bash -l -c \"make run-bk BOOKIE=2 TRY_METAFORMAT=true; $(WAIT_CMD)\""
+	$(TERMINAL_EMULATOR) -e "bash -l -c \"make run-bk BOOKIE=3 TRY_METAFORMAT=true; $(WAIT_CMD)\""
+	sleep 6
+	$(TERMINAL_EMULATOR) -e "bash -l -c \"make run-dice ; $(WAIT_CMD)\""
+	sleep 2
+	$(TERMINAL_EMULATOR) -e "bash -l -c \"make run-dice ; $(WAIT_CMD)\""
+
+	@echo
+	@echo "If you want to restart from scratch the application, remove all its data:"
+	@echo "  sudo rm -rf $(BK_LOCAL_DATA_DIR)"
+	@echo
+
+# -------------------------------- #
+# Other undocumented utilities     #
+# -------------------------------- #
+
+start:
+	docker start "$(CONTAINER_NAME)"
+
+# -------------------------------- #
+
+stop:
+	docker stop "$(CONTAINER_NAME)"
+
+# -------------------------------- #
+
+shell exec:
+	docker exec -it \
+	    "$(CONTAINER_NAME)" \
+	    /bin/bash -il
+
+# -------------------------------- #
+
+root-shell root-exec:
+	docker exec -it "$(CONTAINER_NAME)" /bin/bash -il
+
+# -------------------------------- #
+
+info ip:
+	@echo 
+	@echo "Image: $(IMAGE)"
+	@echo "Container name: $(CONTAINER_NAME)"
+	@echo
+	-@echo "Actual Image: $(shell docker inspect --format '{{ .RepoTags }} (created {{.Created }})' $(IMAGE))"
+	-@echo "Actual Container: $(shell docker inspect --format '{{ .Name }} (created {{.Created }})' $(CONTAINER_NAME))"
+	-@echo "Actual Container IP: $(shell docker inspect --format '{{ .NetworkSettings.IPAddress }}' $(CONTAINER_NAME))"
+	@echo
+
+# -------------------------------- #
+
+clean-files:
+
+clean:
+	-docker stop $(CONTAINER_NAME)
+	-docker rm $(CONTAINER_NAME)
+	-docker rmi $(IMAGE)
+	make clean-files
diff --git a/docker/README.md b/docker/README.md
new file mode 100644
index 0000000..7b7cb36
--- /dev/null
+++ b/docker/README.md
@@ -0,0 +1,174 @@
+
+# What is Apache Bookkeeper?
+
+Apache BookKeeper is a software project of the Apache Software Foundation, providing a replicated log service which can be used to build replicated state machines. A log contains a sequence of events which can be applied to a state machine. BookKeeper guarantees that each replica state machine will see all the same entries, in the same order.
+
+> [Apache Bookkeeper](http://bookkeeper.apache.org/)
+
+
+# How to use this image
+
+BookKeeper needs [ZooKeeper](https://zookeeper.apache.org/) in order to preserve its state and publish its bookies (BookKeeper servers). A client only needs to connect to a ZooKeeper server in the ensemble in order to obtain the list of BookKeeper servers.
+
+## TL;DR
+
+If you just want to see things working, you can play with the Makefile hosted in this project and check its targets for a fairly complex setup example:
+```
+git clone https://github.com/apache/bookkeeper
+cd bookkeeper/docker
+make run-demo
+```
+If you don't have access to an X environment, e.g. on a default MacOS setup, you have to run the steps of the last command manually in 6 separate terminals:
+```
+make run-zk
+make run-bk BOOKIE=1
+make run-bk BOOKIE=2
+make run-bk BOOKIE=3
+make run-dice
+make run-dice
+```
+This performs all of the steps described below and starts up a working ensemble with two dice applications.
+
+## Step by step
+
+The simplest way to let BookKeeper servers publish themselves under a name that can be resolved consistently across container runs is to create a [docker network](https://docs.docker.com/engine/reference/commandline/network_create/):
+```
+docker network create "my-bookkeeper-network"
+```
+Then we can start a ZooKeeper server (from the [Zookeeper official image](https://hub.docker.com/_/zookeeper/)) in standalone mode on that network:
+```
+docker run -d \
+    --network "my-bookkeeper-network" \
+    --name "my-zookeeper" \
+    --hostname "my-zookeeper" \
+    zookeeper
+```
+And initialize the metadata store that bookies will use to store information:
+```
+docker run -it --rm \
+    --network "my-bookkeeper-network" \
+    --env ZK_URL=my-zookeeper:2181 \
+    bookkeeper \
+    bookkeeper shell metaformat
+```
+Now we can start our Bookkeeper ensemble (e.g. with three bookies):
+```
+docker run -it\
+    --network "my-bookkeeper-network" \
+    --env ZK_URL=my-zookeeper:2181 \
+    --name "bookie1" \
+    --hostname "bookie1" \
+    bookkeeper
+```
+And so on for "bookie2" and "bookie3". We now have a fully functional ensemble, ready to accept clients.
+
+In order to play with our freshly created ensemble, you can use the simple application taken from [Bookkeeper Tutorial](http://bookkeeper.apache.org/docs/master/bookkeeperTutorial.html) and packaged in a [docker image](https://github.com/caiok/bookkeeper-tutorial) for convenience.
+
+This application checks whether it can become the leader; if so, it starts rolling a dice and logging the rolls to BookKeeper, otherwise it follows the leader's rolls. If the leader stops, a follower will try to become the new leader, and so on.
+
+Start a dice application (you can run it several times to view the behavior in a concurrent environment):
+```
+docker run -it --rm \
+    --network "my-bookkeeper-network" \
+    --env ZK_URL=my-zookkeeper:2181 \
+    caiok/bookkeeper-tutorial
+```
+
+## Configuration
+
+The BookKeeper configuration is located in `/opt/bookkeeper/conf` in the docker container. It is a copy of [these files](https://github.com/apache/bookkeeper/tree/master/bookkeeper-server/conf) from the bookkeeper repo.
+
+There are 2 ways to set the bookkeeper configuration:
+
+1. Apply environment variables (e.g. `docker -e kk=vv`) to the configuration files. Environment variable names are in the format "BK_originalName", in which "originalName" is the key in the config files.
+
+2. If you prefer to manage your own local volumes, use the `docker --volume` option to bind-mount your local configuration directory to `/opt/bookkeeper/conf`.
+
+Example showing how to use your own configuration files:
+```
+$ docker run --name bookie1 -d \
+    -v $(local_configure_dir):/opt/bookkeeper/conf/ \   < == use 2nd approach, mount dir contains config_files
+    -e BK_bookiePort=3181 \                             < == use 1st approach, set bookiePort
+    -e BK_zkServers=zk-server1:2181,zk-server2:2181 \   < == use 1st approach, set zookeeper servers
+    -e BK_journalPreAllocSizeMB=32 \                    < == use 1st approach, set journalPreAllocSizeMB in [bk_server.conf](https://github.com/apache/bookkeeper/blob/master/bookkeeper-server/conf/bk_server.conf)
+    bookkeeper
+```
+
+### Override rules for bookkeeper configuration
+If you set the same configuration key in more than one way, e.g. via a `BK_` environment variable and via a conf file under /opt/bookkeeper/conf/, the override rule is as follows:
+
+Environment variables named after the keys contained in [these files](https://github.com/apache/bookkeeper/tree/master/bookkeeper-server/conf), e.g. `BK_zkServers`,
+
+    Override
+
+the values in the files under /opt/bookkeeper/conf/.
+
+Taking the example above: if inside the docker instance you have bind-mounted your config file as /opt/bookkeeper/conf/bk_server.conf and it contains the key-value pair `zkServers=zk-server3:2181`, then the value that finally takes effect is `zkServers=zk-server1:2181,zk-server2:2181`,
+
+because
+
+`-e BK_zkServers=zk-server1:2181,zk-server2:2181` overrides the key-value pair `zkServers=zk-server3:2181` contained in /opt/bookkeeper/conf/bk_server.conf.
+
+
+### Environment variable names most commonly used for your configuration
+
+#### `BK_bookiePort`
+
+This variable allows you to specify the port on which Bookkeeper should listen for incoming connections.
+
+This will override `bookiePort` in [bk_server.conf](https://github.com/apache/bookkeeper/blob/master/bookkeeper-server/conf/bk_server.conf).
+
+Default value is "3181".
+
+#### `BK_zkServers`
+
+This variable allows you to specify a list of machines of the Zookeeper ensemble. Each entry has the form of `host:port`. Entries are separated with a comma.
+
+This will override `zkServers` in [bk_server.conf](https://github.com/apache/bookkeeper/blob/master/bookkeeper-server/conf/bk_server.conf).
+
+Default value is "127.0.0.1:2181"
+
+#### `BK_zkLedgersRootPath`
+
+This variable allows you to specify the root directory BookKeeper will use on ZooKeeper to store ledger metadata.
+
+This will override `zkLedgersRootPath` in [bk_server.conf](https://github.com/apache/bookkeeper/blob/master/bookkeeper-server/conf/bk_server.conf).
+
+Default value is "/bookkeeper/ledgers"
+
+#### `BK_CLUSTER_ROOT_PATH`
+
+This variable allows you to specify the root directory bookkeeper will use on Zookeeper.
+
+Default value is empty (" "), so the ledgers dir in ZooKeeper will be at "/ledgers" by default. You can set it to whatever you want, e.g. "/bookkeeper".
+
+#### `BK_DATA_DIR`
+This variable allows you to specify where to store data in the docker instance.
+
+This can be overridden by the env vars "BK_journalDirectory", "BK_ledgerDirectories", "BK_indexDirectories", and also by `journalDirectory`, `ledgerDirectories`, `indexDirectories` in [bk_server.conf](https://github.com/apache/bookkeeper/blob/master/bookkeeper-server/conf/bk_server.conf).
+
+Default value is "/data/bookkeeper", which contains volumes `/data/bookkeeper/journal`, `/data/bookkeeper/ledger` and `/data/bookkeeper/index` to hold Bookkeeper data in docker.
+
+
+### Configuration files under /opt/bookkeeper/conf
+These files are originally untarred from the bookkeeper binary distribution, such as [bookkeeper-server-4.4.0-bin.tar.tgz](https://archive.apache.org/dist/bookkeeper/bookkeeper-4.4.0/bookkeeper-4.4.0-src.tar.gz), and they come from [these files](https://github.com/apache/bookkeeper/tree/master/bookkeeper-server/conf) in the bookkeeper repo.
+
+The files you will usually configure are bk_server.conf, bkenv.sh, log4j.properties, and log4j.shell.properties. Please read and understand them before changing the configuration.
+
+
+### Caveats
+
+Be careful where you put the transaction log (journal). A dedicated transaction log device is key to consistently good performance. Putting the log on a busy device will adversely affect performance.
+
+Here are some useful commands that can be used in place of the default command when you want to delete the cookies and run auto recovery:
+```
+/bookkeeper/bookkeeper-server/bin/bookkeeper shell bookieformat -nonInteractive -force -deleteCookie
+/bookkeeper/bookkeeper-server/bin/bookkeeper autorecovery
+```
+Use them to replace the default [CMD] when you want to do something other than start a bookie.
+
+# License
+
+View [license information](https://github.com/apache/bookkeeper/blob/master/LICENSE) for the software contained in this image.
diff --git a/docker/scripts/apply-config-from-env.py b/docker/scripts/apply-config-from-env.py
new file mode 100755
index 0000000..78e6945
--- /dev/null
+++ b/docker/scripts/apply-config-from-env.py
@@ -0,0 +1,85 @@
+#!/usr/bin/env python
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+##
+## Edit properties config files under config_dir and replace values
+## based on the ENV variables
+## export my-key=new-value
+##
+## ./apply-config-from-env config_dir
+##
+
+import os, sys
+
+if len(sys.argv) != 2:
+    print 'Usage: %s ' + 'config_dir' % (sys.argv[0])
+    sys.exit(1)
+
+def mylistdir(dir):
+    return [os.path.join(dir, filename) for filename in os.listdir(dir)]
+
+# Always apply env config to all the files under conf
+conf_dir = sys.argv[1]
+conf_files = mylistdir(conf_dir)
+print 'conf files: '
+print conf_files
+
+bk_env_prefix = 'BK_'
+
+for conf_filename in conf_files:
+    lines = []  # List of config file lines
+    keys = {}   # Map a key to its line number in the file
+
+    # Load conf file
+    for line in open(conf_filename):
+        lines.append(line)
+        line = line.strip()
+        #if not line or line.startswith('#'):
+        if not line or '=' not in line:
+            continue
+
+        if line.startswith('#'):
+            line = line.replace('#', '')
+
+        # Remove spaces around key,
+        line = line.replace(' ', '')
+        k,v = line.split('=', 1)
+
+        # Only replace first appearance
+        if k not in keys:
+            keys[k] = len(lines) - 1
+        else:
+           lines.pop()
+
+    # Update values from Env
+    for k in sorted(os.environ.keys()):
+        v = os.environ[k]
+        if k.startswith(bk_env_prefix):
+            search_key = k[len(bk_env_prefix):]
+            if search_key in keys:
+                print '[%s] Applying config %s = %s' % (conf_filename, search_key, v)
+                idx = keys[search_key]
+                lines[idx] = '%s=%s\n' % (search_key, v)
+
+    # Store back the updated config in the same file
+    f = open(conf_filename, 'w')
+    for line in lines:
+        f.write(line)
+    f.close()
diff --git a/docker/scripts/entrypoint.sh b/docker/scripts/entrypoint.sh
new file mode 100755
index 0000000..7610361
--- /dev/null
+++ b/docker/scripts/entrypoint.sh
@@ -0,0 +1,72 @@
+#!/bin/bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+export PATH=$PATH:/opt/bookkeeper/bin
+export JAVA_HOME=/usr
+
+# env var used often
+PORT0=${PORT0:-${BOOKIE_PORT}}
+PORT0=${PORT0:-3181}
+BK_DATA_DIR=${BK_DATA_DIR:-"/data/bookkeeper"}
+BK_CLUSTER_ROOT_PATH=${BK_CLUSTER_ROOT_PATH:-" "}
+
+# env vars to replace values in config files
+export BK_bookiePort=${BK_bookiePort:-${PORT0}}
+export BK_zkServers=${BK_zkServers}
+export BK_zkLedgersRootPath=${BK_zkLedgersRootPath:-"${BK_CLUSTER_ROOT_PATH}/ledgers"}
+export BK_journalDirectory=${BK_journalDirectory:-${BK_DATA_DIR}/journal}
+export BK_ledgerDirectories=${BK_ledgerDirectories:-${BK_DATA_DIR}/ledgers}
+export BK_indexDirectories=${BK_indexDirectories:-${BK_DATA_DIR}/index}
+
+echo "BK_bookiePort bookie service port is $BK_bookiePort"
+echo "BK_zkServers is $BK_zkServers"
+echo "BK_DATA_DIR is $BK_DATA_DIR"
+echo "BK_CLUSTER_ROOT_PATH is $BK_CLUSTER_ROOT_PATH"
+
+
+mkdir -p "${BK_journalDirectory}" "${BK_ledgerDirectories}" "${BK_indexDirectories}"
+# -------------- #
+# Allow the container to be started with `--user`
+if [ "$1" = 'bookkeeper' -a "$(id -u)" = '0' ]; then
+    chown -R "$BK_USER:$BK_USER" "/opt/bookkeeper/" "${BK_journalDirectory}" "${BK_ledgerDirectories}" "${BK_indexDirectories}"
+    sudo -s -E -u "$BK_USER" /bin/bash "$0" "$@"
+    exit
+fi
+# -------------- #
+
+python apply-config-from-env.py /opt/bookkeeper/conf
+
+echo "wait for zookeeper"
+until /opt/bookkeeper/bin/bookkeeper org.apache.zookeeper.ZooKeeperMain -server ${BK_zkServers} ls /; do sleep 5; done
+
+echo "create the zk root dir for bookkeeper"
+/opt/zk/bin/zkCli.sh -server ${BK_zkServers} create ${BK_CLUSTER_ROOT_PATH}
+
+echo "format zk metadata"
+echo "please ignore the failure, if it has already been formatted, "
+export BOOKIE_CONF=/opt/bookkeeper/conf/bk_server.conf
+export SERVICE_PORT=$PORT0
+/opt/bookkeeper/bin/bookkeeper shell metaformat -n || true
+
+echo "run command by exec"
+exec "$@"
+
diff --git a/docker/scripts/healthcheck.sh b/docker/scripts/healthcheck.sh
new file mode 100755
index 0000000..87ce09d
--- /dev/null
+++ b/docker/scripts/healthcheck.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+#
+#/**
+# * Copyright 2007 The Apache Software Foundation
+# *
+# * Licensed to the Apache Software Foundation (ASF) under one
+# * or more contributor license agreements.  See the NOTICE file
+# * distributed with this work for additional information
+# * regarding copyright ownership.  The ASF licenses this file
+# * to you under the Apache License, Version 2.0 (the
+# * "License"); you may not use this file except in compliance
+# * with the License.  You may obtain a copy of the License at
+# *
+# *     http://www.apache.org/licenses/LICENSE-2.0
+# *
+# * Unless required by applicable law or agreed to in writing, software
+# * distributed under the License is distributed on an "AS IS" BASIS,
+# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# * See the License for the specific language governing permissions and
+# * limitations under the License.
+# */
+
+#!/bin/bash
+
+set -x -e -u
+
+# Sanity check that creates a ledger, writes a few entries, reads them and deletes the ledger.
+bookkeeper shell bookiesanity
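
Pulling the configuration notes in the new README together, a minimal sketch of starting one bookie with the documented `BK_*` environment variables and a persistent data volume (the image tag, network name, and host paths below are illustrative, not values shipped with the image):

    docker run -d \
        --network "my-bookkeeper-network" \
        --name "bookie1" --hostname "bookie1" \
        --env BK_zkServers=my-zookeeper:2181 \
        --env BK_zkLedgersRootPath=/ledgers \
        --env BK_bookiePort=3181 \
        --volume /tmp/bk-bookie1/data:/data/bookkeeper \
        bookkeeper/bookie:centos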

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.

[bookkeeper] 08/10: ISSUE #427: [WEBSITE] sidebar doesn't work on documentation index page

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit 004f4b5f6b3f3f29af8802d5c8cdac1604d27222
Author: Sijie Guo <si...@apache.org>
AuthorDate: Thu Aug 10 09:41:20 2017 +0800

    ISSUE #427: [WEBSITE] sidebar doesn't work on documentation index page
    
    Descriptions of the changes in this PR:
    
    The sidebar uses `../../` for relative paths. In order to make this work, we need to move any pages under `docs` one level down and not use `index.md`.
    
    Author: Sijie Guo <si...@apache.org>
    
    Reviewers: Jia Zhai <None>, Luc Perkins <None>, Matteo Merli <None>
    
    This closes #428 from sijie/issue_427, closes #427
---
 site/_includes/navbar.html                              |  4 ++--
 site/docs/latest/{index.md => overview/overview.md}     | 14 +++++++-------
 site/docs/latest/{ => overview}/releaseNotes.md         |  0
 site/docs/latest/{ => overview}/releaseNotesTemplate.md |  0
 site/scripts/release.sh                                 |  2 +-
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/site/_includes/navbar.html b/site/_includes/navbar.html
index ceb9acb..bd8f408 100644
--- a/site/_includes/navbar.html
+++ b/site/_includes/navbar.html
@@ -31,7 +31,7 @@
       <div class="navbar-item has-dropdown is-hoverable">
         <a class="navbar-link">Documentation</a>
         <div class="navbar-dropdown is-boxed">
-          <a class="navbar-item" href="{{ site.baseurl }}docs/latest/index.html">
+          <a class="navbar-item" href="{{ site.baseurl }}docs/latest/overview/overview">
             Version {{ site.latest_version}}
             <span class="tag is-warning">Development</span>
           </a>
@@ -43,7 +43,7 @@
           </a>
           <hr class="dropdown-divider">
           {% for version in site.versions %}
-          <a class="navbar-item" href="{{ site.baseurl }}docs/{{version}}/index.html">
+          <a class="navbar-item" href="{{ site.baseurl }}docs/{{version}}/overview/overview">
             Release {{version}}
             {% if version == site.stable_release %}<span class="tag is-success">Stable</span>{% endif %}
           </a>
diff --git a/site/docs/latest/index.md b/site/docs/latest/overview/overview.md
similarity index 70%
rename from site/docs/latest/index.md
rename to site/docs/latest/overview/overview.md
index 39f4eb9..1c93193 100644
--- a/site/docs/latest/index.md
+++ b/site/docs/latest/overview/overview.md
@@ -34,23 +34,23 @@ It is suitable for being used in following scenerios:
 
 Learn more about Apache BookKeeper and what it can do for your organization:
 
-- [Apache BookKeeper {{ site.latest_version }} Release Notes](./releaseNotes)
+- [Apache BookKeeper {{ site.latest_version }} Release Notes](../releaseNotes)
 
 Or start using Apache BookKeeper today.
 
 ### Users 
 
-- **Concepts**: Start with [concepts](./getting-started/concepts). This will help you to fully understand
+- **Concepts**: Start with [concepts](../../getting-started/concepts). This will help you to fully understand
     the other parts of the documentation, including the setup, integration and operation guides.
-- **Getting Started**: Install [Apache BookKeeper](./getting-started/installation) and run bookies [locally](./getting-started/run-locally)
-- **API**: Read the [API](./api/overview) documentation to learn how to use Apache BookKeeper to build your applications.
-- **Deployment**: The [Deployment Guide](./deployment/manual) shows how to deploy Apache BookKeeper to production clusters.
+- **Getting Started**: Install [Apache BookKeeper](../../getting-started/installation) and run bookies [locally](../../getting-started/run-locally)
+- **API**: Read the [API](../../api/overview) documentation to learn how to use Apache BookKeeper to build your applications.
+- **Deployment**: The [Deployment Guide](../../deployment/manual) shows how to deploy Apache BookKeeper to production clusters.
 
 ### Administrators
 
-- **Operations**: The [Admin Guide](./admin) shows how to run Apache BookKeeper on production, what are the production
+- **Operations**: The [Admin Guide](../../admin/bookies) shows how to run Apache BookKeeper on production, what are the production
     considerations and best practices.
 
 ### Contributors
 
-- **Details**: Learn [design details](./development/protocol) to know more internals.
+- **Details**: Learn [design details](../../development/protocol) to know more internals.
diff --git a/site/docs/latest/releaseNotes.md b/site/docs/latest/overview/releaseNotes.md
similarity index 100%
rename from site/docs/latest/releaseNotes.md
rename to site/docs/latest/overview/releaseNotes.md
diff --git a/site/docs/latest/releaseNotesTemplate.md b/site/docs/latest/overview/releaseNotesTemplate.md
similarity index 100%
rename from site/docs/latest/releaseNotesTemplate.md
rename to site/docs/latest/overview/releaseNotesTemplate.md
diff --git a/site/scripts/release.sh b/site/scripts/release.sh
index 215964e..2dd4129 100755
--- a/site/scripts/release.sh
+++ b/site/scripts/release.sh
@@ -52,7 +52,7 @@ cd ${DOC_HOME}/docs/${RELEASE_VERSION}
 find . -name "*.md" | xargs sed -i'.bak' "s/{{ site\.latest_version }}/${RELEASE_VERSION}/"
 find . -name "*.md" | xargs sed -i'.bak' "s/${LATEST_VERSION}/${RELEASE_VERSION}/"
 find . -name "*.md.bak" | xargs rm
-cp releaseNotesTemplate.md releaseNotes.md
+cp overview/releaseNotesTemplate.md overview/releaseNotes.md
 
 # go to doc home
 

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.

[bookkeeper] 10/10: ISSUE #432: Add "Google Analytics" to the website

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit 6028fe25b2bd41ad1c2fc5ba81bc109cd9de15a2
Author: Sijie Guo <si...@apache.org>
AuthorDate: Thu Aug 10 14:22:05 2017 -0700

    ISSUE #432: Add "Google Analytics" to the website
    
    Descriptions of the changes in this PR:
    
    Add "google analytics" script to the website for tracking the documentation traffic. We can learn the pattern and improve documentation.
    
    The google account for analytics is managed by bookkeeper pmc.
    
    Author: Sijie Guo <si...@apache.org>
    
    Reviewers: Matteo Merli <mm...@apache.org>
    
    This closes #433 from sijie/google_analytics, closes #432
---
 site/Makefile                        |  2 +-
 site/_includes/google-analytics.html | 26 ++++++++++++++++++++++++++
 site/_layouts/default.html           |  3 +++
 3 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/site/Makefile b/site/Makefile
index 3719f81..7108380 100644
--- a/site/Makefile
+++ b/site/Makefile
@@ -23,7 +23,7 @@ build: clean
 		--config _config.yml
 
 apache: clean
-	${JEKYLL} build \
+	JEKYLL_ENV=production ${JEKYLL} build \
 		--config _config.yml,_config.apache.yml
 
 javadoc:
diff --git a/site/_includes/google-analytics.html b/site/_includes/google-analytics.html
new file mode 100644
index 0000000..d081572
--- /dev/null
+++ b/site/_includes/google-analytics.html
@@ -0,0 +1,26 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+<script>
+  (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+  (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+  m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+  })(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
+
+  ga('create', 'UA-104419626-1', 'auto');
+  ga('send', 'pageview');
+
+</script>
diff --git a/site/_layouts/default.html b/site/_layouts/default.html
index 6906d6f..5e54466 100644
--- a/site/_layouts/default.html
+++ b/site/_layouts/default.html
@@ -14,4 +14,7 @@
   </body>
 
   {% include javascript.html %}
+  {% if jekyll.environment == "production" %}
+  {% include google-analytics.html %}
+  {% endif %}
 </html>
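
For reference, a minimal sketch of how the analytics snippet gets enabled: the include is only rendered when Jekyll runs in the production environment, which the `apache` Makefile target now sets (`${JEKYLL}` from the Makefile is written here as plain `jekyll`):

    # local preview: jekyll.environment is "development", no analytics injected
    jekyll build --config _config.yml

    # apache build: jekyll.environment == "production", so google-analytics.html is included
    JEKYLL_ENV=production jekyll build --config _config.yml,_config.apache.yml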

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.

[bookkeeper] 07/10: 406

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit 17721bc111dc69f1ddfba45857d0d1878bf300f3
Author: Francesco Caliumi - Diennea <fr...@diennea.com>
AuthorDate: Tue Aug 8 07:56:06 2017 +0800

    406
    
    Descriptions of the changes in this PR:
    - Fix non-privileged user execution
    - Change the Makefile in order to always use the current user when running the bk image
    - Fix the Makefile so that "make run-demo" runs
    - Adjust the removal of unneeded packages in the Dockerfile
    
    Author: Francesco Caliumi - Diennea <fr...@diennea.com>
    
    Reviewers: Jia Zhai <None>, Matteo Merli <None>, Sijie Guo <None>
    
    This closes #406 from caiok/master
---
 docker/Dockerfile            | 4 ++--
 docker/Makefile              | 2 ++
 docker/scripts/entrypoint.sh | 2 +-
 3 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/docker/Dockerfile b/docker/Dockerfile
index 45d422e..5a22bb6 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -31,7 +31,7 @@ ENV BK_USER=bookkeeper
 # Download Apache Bookkeeper, untar and clean up
 RUN set -x \
     && adduser "${BK_USER}" \
-    && yum install -y java-1.8.0-openjdk-headless wget bash python md5sum sha1sum \
+    && yum install -y java-1.8.0-openjdk-headless wget bash python md5sum sha1sum sudo \
     && mkdir -pv /opt \
     && cd /opt \
     && wget -q "https://archive.apache.org/dist/bookkeeper/bookkeeper-${BK_VERSION}/${DISTRO_NAME}.tar.gz" \
@@ -45,7 +45,7 @@ RUN set -x \
     && tar -xzf "$DISTRO_NAME.tar.gz" \
     && mv bookkeeper-server-${BK_VERSION}/ /opt/bookkeeper/ \
     && rm -rf "$DISTRO_NAME.tar.gz" "$DISTRO_NAME.tar.gz.asc" "$DISTRO_NAME.tar.gz.md5" "$DISTRO_NAME.tar.gz.sha1" \
-    && yum remove -y wget \
+    && yum remove -y wget md5sum sha1sum \
     && yum clean all
 
 WORKDIR /opt/bookkeeper
diff --git a/docker/Makefile b/docker/Makefile
index 27c8ce6..64078db 100644
--- a/docker/Makefile
+++ b/docker/Makefile
@@ -38,6 +38,7 @@ BK_zkLedgersRootPath = /ledgers
 ZK_CONTAINER_NAME=test_zookeeper
 ZK_LOCAL_DATA_DIR=$(BK_LOCAL_DATA_DIR)/zookkeeper
 
+TERMINAL_EMULATOR=gnome-terminal
 
 CONTAINER_IP=$(shell docker inspect --format '{{ .NetworkSettings.IPAddress }}' $(CONTAINER_NAME))
 
@@ -78,6 +79,7 @@ run-bk:
 	    --volume $(BK_LOCAL_CONTAINER_DATA_DIR)/journal:$(BK_DIR)/journal \
 	    --volume $(BK_LOCAL_CONTAINER_DATA_DIR)/ledger:$(BK_DIR)/ledger \
 	    --volume $(BK_LOCAL_CONTAINER_DATA_DIR)/index:$(BK_DIR)/index \
+            --user "$(id -u)" \
 	    --name "$(CONTAINER_NAME)" \
 	    --hostname "$(CONTAINER_NAME)" \
 	    --env BK_zkServers=$(ZK_CONTAINER_NAME):2181 \
diff --git a/docker/scripts/entrypoint.sh b/docker/scripts/entrypoint.sh
index 310970a..dffbc93 100755
--- a/docker/scripts/entrypoint.sh
+++ b/docker/scripts/entrypoint.sh
@@ -46,7 +46,7 @@ echo "BK_CLUSTER_ROOT_PATH is $BK_CLUSTER_ROOT_PATH"
 mkdir -p "${BK_journalDirectory}" "${BK_ledgerDirectories}" "${BK_indexDirectories}"
 # -------------- #
 # Allow the container to be started with `--user`
-if [ "$1" = 'bookkeeper' -a "$(id -u)" = '0' ]; then
+if [ "$1" = '/opt/bookkeeper/bin/bookkeeper' -a "$(id -u)" = '0' ]; then
     chown -R "$BK_USER:$BK_USER" "/opt/bookkeeper/" "${BK_journalDirectory}" "${BK_ledgerDirectories}" "${BK_indexDirectories}"
     sudo -s -E -u "$BK_USER" /bin/bash "$0" "$@"
     exit
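
For reference, a minimal sketch of what this change enables: running the bookie container as a non-root user, mirroring the `--user "$(id -u)"` flag added to the Makefile's run-bk target (the network, ZooKeeper address and image tag below follow the Makefile defaults and are illustrative):

    # run the image as the invoking user rather than root
    docker run -it --user "$(id -u)" \
        --network bk_network \
        --env BK_zkServers=test_zookeeper:2181 \
        bookkeeper/bookie:centos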

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.

[bookkeeper] 01/10: ISSUE #397: [CI] publish-website job failed when mvn:release bump version to 4.6.0-SNAPSHOT

Posted by si...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch branch-4.5
in repository https://gitbox.apache.org/repos/asf/bookkeeper.git

commit ebd02a5b802921e7904b211fc6244f1ce9bc4a33
Author: Sijie Guo <si...@apache.org>
AuthorDate: Sat Aug 5 15:13:02 2017 -0700

    ISSUE #397: [CI] publish-website job failed when mvn:release bump version to 4.6.0-SNAPSHOT
    
    Descriptions of the changes in this PR:
    
    Use `mvn install` rather than `mvn compile`
    
    Author: Sijie Guo <si...@apache.org>
    
    Reviewers: Matteo Merli <None>
    
    This closes #398 from sijie/issue_397, closes #397
---
 site/scripts/javadoc-gen.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/site/scripts/javadoc-gen.sh b/site/scripts/javadoc-gen.sh
index b92a071..f43795e 100755
--- a/site/scripts/javadoc-gen.sh
+++ b/site/scripts/javadoc-gen.sh
@@ -5,6 +5,6 @@ source scripts/common.sh
 (
   rm -rf $JAVADOC_GEN_DIR $JAVADOC_DEST_DIR
   cd $ROOT_DIR
-  mvn compile javadoc:aggregate
+  mvn clean install javadoc:aggregate -DskipTests
   mv $JAVADOC_GEN_DIR $JAVADOC_DEST_DIR
 )

-- 
To stop receiving notification emails like this one, please contact
"commits@bookkeeper.apache.org" <co...@bookkeeper.apache.org>.