Posted to server-dev@james.apache.org by ie...@apache.org on 2020/07/09 23:13:11 UTC

[james-project] branch JAMES-3302-migrate-content updated (ac4c21e -> a87f0bc)

This is an automated email from the ASF dual-hosted git repository.

ieugen pushed a change to branch JAMES-3302-migrate-content
in repository https://gitbox.apache.org/repos/asf/james-project.git.


 discard ac4c21e  [JAMES-3302] Migrate ADR docs to asciidoc using markdown
     new a87f0bc  [JAMES-3302] Migrate ADR and markdown site docs to asciidoc using kramdoc

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (ac4c21e)
            \
             N -- N -- N   refs/heads/JAMES-3302-migrate-content (a87f0bc)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../adr/0001-record-architecture-decisions.md.adoc |    0
 .../adr/0002-make-taskmanager-distributed.md.adoc  |    0
 .../adr/0003-distributed-workqueue.md.adoc         |    0
 .../adr/0004-distributed-tasks-listing.md.adoc     |    0
 ...ributed-task-termination-ackowledgement.md.adoc |    0
 .../adr/0006-task-serialization.md.adoc            |    0
 .../adr/0007-distributed-task-cancellation.md.adoc |    0
 .../adr/0008-distributed-task-await.md.adoc        |    0
 ...9-disable-elasticsearch-dynamic-mapping.md.adoc |    0
 .../adr/0009-java-11-migration.md.adoc             |    0
 .../adr/0010-enable-elasticsearch-routing.md.adoc  |    0
 ...11-remove-elasticsearch-document-source.md.adoc |    0
 .../adr/0012-jmap-partial-reads.md.adoc            |    0
 .../adr/0013-precompute-jmap-preview.md.adoc       |    0
 .../adr/0014-blobstore-storage-policies.md.adoc    |    0
 .../adr/0015-objectstorage-blobid-list.md.adoc     |    0
 .../adr/0016-distributed-workqueue.md.adoc         |    0
 .../adr/0017-file-mail-queue-deprecation.md.adoc   |    0
 .../src => pages}/adr/0018-jmap-new-specs.md.adoc  |    0
 .../adr/0019-reactor-netty-adoption.md.adoc        |    0
 ...20-cassandra-mailbox-object-consistency.md.adoc |    0
 .../adr/0021-cassandra-acl-inconsistency.md.adoc   |    0
 .../0022-cassandra-message-inconsistency.md.adoc   |    0
 ...sandra-mailbox-counters-inconsistencies.md.adoc |    0
 .../adr/0024-polyglot-strategy.md.adoc             |    0
 .../adr/0025-cassandra-blob-store-cache.md.adoc    |    0
 ...-configured-additional-mailboxListeners.md.adoc |    0
 ...7-eventBus-error-handling-upon-dispatch.md.adoc |    0
 .../adr/0028-Recompute-mailbox-quotas.md.adoc      |    0
 ...0029-Cassandra-mailbox-deletion-cleanup.md.adoc |    0
 ...eparate-attachment-content-and-metadata.md.adoc |    0
 .../adr/0031-distributed-mail-queue.md.adoc        |    0
 .../0032-distributed-mail-queue-cleanup.md.adoc    |    0
 ...033-use-scala-in-event-sourcing-modules.md.adoc |    0
 .../0034-mailbox-api-visibility-and-usage.md.adoc  |    0
 ...035-distributed-listeners-configuration.md.adoc |    0
 ...conditional-statements-in-guice-modules.md.adoc |    0
 .../{adr/src => pages}/adr/0037-eventbus.md.adoc   |    0
 .../adr/0038-distributed-eventbus.md.adoc          |    0
 ...0039-distributed-blob-garbage-collector.md.adoc |    0
 docs/modules/{development => migrated}/nav.adoc    |    0
 .../migrated/pages/mailet/quickstart.md.adoc       |   32 +
 .../migrated/pages/mailet/release-notes.md.adoc    |   13 +
 .../install/guice-cassandra-rabbitmq-swift.md.adoc |   97 +
 .../pages/server/install/guice-cassandra.md.adoc   |   74 +
 .../pages/server/install/guice-jpa-smtp.md.adoc    |   48 +
 .../pages/server/install/guice-jpa.md.adoc         |   55 +
 .../migrated/pages/server/manage-cli.md.adoc       |  252 +-
 .../server/manage-guice-distributed-james.md.adoc  |  597 +++
 .../migrated/pages/server/manage-webadmin.md.adoc  | 3945 ++++++++++++++++++++
 migrate-adr.sh                                     |    8 -
 migrate-markdown.sh                                |   15 +
 52 files changed, 5008 insertions(+), 128 deletions(-)
 rename docs/modules/development/{adr/src => pages}/adr/0001-record-architecture-decisions.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0002-make-taskmanager-distributed.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0003-distributed-workqueue.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0004-distributed-tasks-listing.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0005-distributed-task-termination-ackowledgement.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0006-task-serialization.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0007-distributed-task-cancellation.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0008-distributed-task-await.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0009-disable-elasticsearch-dynamic-mapping.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0009-java-11-migration.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0010-enable-elasticsearch-routing.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0011-remove-elasticsearch-document-source.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0012-jmap-partial-reads.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0013-precompute-jmap-preview.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0014-blobstore-storage-policies.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0015-objectstorage-blobid-list.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0016-distributed-workqueue.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0017-file-mail-queue-deprecation.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0018-jmap-new-specs.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0019-reactor-netty-adoption.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0020-cassandra-mailbox-object-consistency.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0021-cassandra-acl-inconsistency.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0022-cassandra-message-inconsistency.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0023-cassandra-mailbox-counters-inconsistencies.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0024-polyglot-strategy.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0025-cassandra-blob-store-cache.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0026-removing-configured-additional-mailboxListeners.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0027-eventBus-error-handling-upon-dispatch.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0028-Recompute-mailbox-quotas.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0029-Cassandra-mailbox-deletion-cleanup.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0030-separate-attachment-content-and-metadata.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0031-distributed-mail-queue.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0032-distributed-mail-queue-cleanup.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0033-use-scala-in-event-sourcing-modules.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0034-mailbox-api-visibility-and-usage.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0035-distributed-listeners-configuration.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0036-against-use-of-conditional-statements-in-guice-modules.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0037-eventbus.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0038-distributed-eventbus.md.adoc (100%)
 rename docs/modules/development/{adr/src => pages}/adr/0039-distributed-blob-garbage-collector.md.adoc (100%)
 copy docs/modules/{development => migrated}/nav.adoc (100%)
 create mode 100644 docs/modules/migrated/pages/mailet/quickstart.md.adoc
 create mode 100644 docs/modules/migrated/pages/mailet/release-notes.md.adoc
 create mode 100644 docs/modules/migrated/pages/server/install/guice-cassandra-rabbitmq-swift.md.adoc
 create mode 100644 docs/modules/migrated/pages/server/install/guice-cassandra.md.adoc
 create mode 100644 docs/modules/migrated/pages/server/install/guice-jpa-smtp.md.adoc
 create mode 100644 docs/modules/migrated/pages/server/install/guice-jpa.md.adoc
 copy src/site/markdown/server/manage-cli.md => docs/modules/migrated/pages/server/manage-cli.md.adoc (58%)
 create mode 100644 docs/modules/migrated/pages/server/manage-guice-distributed-james.md.adoc
 create mode 100644 docs/modules/migrated/pages/server/manage-webadmin.md.adoc
 delete mode 100755 migrate-adr.sh
 create mode 100755 migrate-markdown.sh


---------------------------------------------------------------------
To unsubscribe, e-mail: server-dev-unsubscribe@james.apache.org
For additional commands, e-mail: server-dev-help@james.apache.org


[james-project] 01/01: [JAMES-3302] Migrate ADR and markdown site docs to asciidoc using kramdoc

Posted by ie...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

ieugen pushed a commit to branch JAMES-3302-migrate-content
in repository https://gitbox.apache.org/repos/asf/james-project.git

commit a87f0bc93aa1755636a878a3457647c684bc36f4
Author: Eugen Stan <ie...@apache.org>
AuthorDate: Fri Jul 10 01:37:46 2020 +0300

    [JAMES-3302] Migrate ADR and markdown site docs to asciidoc using kramdoc
---
 .../adr/0001-record-architecture-decisions.md.adoc |   39 +
 .../adr/0002-make-taskmanager-distributed.md.adoc  |   25 +
 .../pages/adr/0003-distributed-workqueue.md.adoc   |   29 +
 .../adr/0004-distributed-tasks-listing.md.adoc     |   20 +
 ...ributed-task-termination-ackowledgement.md.adoc |   24 +
 .../pages/adr/0006-task-serialization.md.adoc      |   30 +
 .../adr/0007-distributed-task-cancellation.md.adoc |   21 +
 .../pages/adr/0008-distributed-task-await.md.adoc  |   30 +
 ...9-disable-elasticsearch-dynamic-mapping.md.adoc |   40 +
 .../pages/adr/0009-java-11-migration.md.adoc       |   23 +
 .../adr/0010-enable-elasticsearch-routing.md.adoc  |   41 +
 ...11-remove-elasticsearch-document-source.md.adoc |   37 +
 .../pages/adr/0012-jmap-partial-reads.md.adoc      |   50 +
 .../pages/adr/0013-precompute-jmap-preview.md.adoc |   56 +
 .../adr/0014-blobstore-storage-policies.md.adoc    |   63 +
 .../adr/0015-objectstorage-blobid-list.md.adoc     |   68 +
 .../pages/adr/0016-distributed-workqueue.md.adoc   |   29 +
 .../adr/0017-file-mail-queue-deprecation.md.adoc   |   43 +
 .../pages/adr/0018-jmap-new-specs.md.adoc          |   65 +
 .../pages/adr/0019-reactor-netty-adoption.md.adoc  |   40 +
 ...20-cassandra-mailbox-object-consistency.md.adoc |   73 +
 .../adr/0021-cassandra-acl-inconsistency.md.adoc   |   63 +
 .../0022-cassandra-message-inconsistency.md.adoc   |   89 +
 ...sandra-mailbox-counters-inconsistencies.md.adoc |   58 +
 .../pages/adr/0024-polyglot-strategy.md.adoc       |  179 +
 .../adr/0025-cassandra-blob-store-cache.md.adoc    |   69 +
 ...-configured-additional-mailboxListeners.md.adoc |   72 +
 ...7-eventBus-error-handling-upon-dispatch.md.adoc |   35 +
 .../adr/0028-Recompute-mailbox-quotas.md.adoc      |   46 +
 ...0029-Cassandra-mailbox-deletion-cleanup.md.adoc |   46 +
 ...eparate-attachment-content-and-metadata.md.adoc |   94 +
 .../pages/adr/0031-distributed-mail-queue.md.adoc  |  122 +
 .../0032-distributed-mail-queue-cleanup.md.adoc    |   50 +
 ...033-use-scala-in-event-sourcing-modules.md.adoc |   33 +
 .../0034-mailbox-api-visibility-and-usage.md.adoc  |   49 +
 ...035-distributed-listeners-configuration.md.adoc |  137 +
 ...conditional-statements-in-guice-modules.md.adoc |  112 +
 .../development/pages/adr/0037-eventbus.md.adoc    |   59 +
 .../pages/adr/0038-distributed-eventbus.md.adoc    |   44 +
 ...0039-distributed-blob-garbage-collector.md.adoc |  687 ++++
 docs/modules/migrated/nav.adoc                     |    1 +
 .../migrated/pages/mailet/quickstart.md.adoc       |   32 +
 .../migrated/pages/mailet/release-notes.md.adoc    |   13 +
 .../install/guice-cassandra-rabbitmq-swift.md.adoc |   97 +
 .../pages/server/install/guice-cassandra.md.adoc   |   74 +
 .../pages/server/install/guice-jpa-smtp.md.adoc    |   48 +
 .../pages/server/install/guice-jpa.md.adoc         |   55 +
 .../migrated/pages/server/manage-cli.md.adoc       |  337 ++
 .../server/manage-guice-distributed-james.md.adoc  |  597 +++
 .../migrated/pages/server/manage-webadmin.md.adoc  | 3945 ++++++++++++++++++++
 migrate-markdown.sh                                |   15 +
 .../0009-disable-elasticsearch-dynamic-mapping.md  |   16 +-
 52 files changed, 8112 insertions(+), 8 deletions(-)

diff --git a/docs/modules/development/pages/adr/0001-record-architecture-decisions.md.adoc b/docs/modules/development/pages/adr/0001-record-architecture-decisions.md.adoc
new file mode 100644
index 0000000..dac5685
--- /dev/null
+++ b/docs/modules/development/pages/adr/0001-record-architecture-decisions.md.adoc
@@ -0,0 +1,39 @@
+= 1. [JAMES-2909] Record architecture decisions
+
+Date: 2019-10-02
+
+== Status
+
+Proposed
+
+== Context
+
+In order to be more community-oriented, we should adopt a process that gives us a structured way to make architectural decisions in the open.
+
+An Architecture Decision Record-based process will serve as a support for discussion on the developers mailing list.
+
+== Decision
+
+We will use Architecture Decision Records, as https://web.archive.org/web/20190824074401/http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions[described by Michael Nygard].
+
+Each ADR will be discussed on the Apache James' developers mailing-list before being accepted.
+
+Following https://community.apache.org/committers/decisionMaking.html[Apache Decision Making process], we provide the following possible status, with their associated meaning:
+
+* `Proposed`: the decision is being discussed on the mailing list.
+* `Accepted (lazy consensus)`: the architecture decision was proposed on the mailing list, and a consensus emerged from the people involved in the discussion.
+* `Accepted (voted)`: the architecture decision underwent a voting process.
+* `Rejected`: consensus built up against the proposal.
+
+== Consequences
+
+See Michael Nygard's article, linked above.
+For a lightweight ADR toolset, see Nat Pryce's https://github.com/npryce/adr-tools[adr-tools].
+
+We should provide, in a mutable `References` section, links to the related JIRA meta-ticket (not necessarily to all related sub-tickets) as well as a link to the mail archive discussion thread.
+
+JIRA tickets implementing that architecture decision should also link the related Architecture Decision Record.
+
+== References
+
+* https://jira.apache.org/jira/browse/JAMES-2909[JAMES-2909]
diff --git a/docs/modules/development/pages/adr/0002-make-taskmanager-distributed.md.adoc b/docs/modules/development/pages/adr/0002-make-taskmanager-distributed.md.adoc
new file mode 100644
index 0000000..0ce4ce2
--- /dev/null
+++ b/docs/modules/development/pages/adr/0002-make-taskmanager-distributed.md.adoc
@@ -0,0 +1,25 @@
+= 2. Make TaskManager Distributed
+
+Date: 2019-10-02
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+In order to have a distributed version of James, we need a homogeneous way to deal with ``Task``s.
+
+Currently, every James node of a cluster has its own instance of `TaskManager` and no knowledge of the others, making it impossible to orchestrate task execution at the cluster level.
+Tasks run on the same node on which they were scheduled.
+
+We are also unable to list or access the details of all the ``Task``s of a cluster.
+
+== Decision
+
+Create a distribution-aware implementation of `TaskManager`.
+
+== Consequences
+
+* Split the `TaskManager` part dealing with the coordination (`Task` management and view) and the `Task` execution (located in `TaskManagerWorker`)
+* The distributed `TaskManager` will rely on RabbitMQ to coordinate and the event system to synchronize states
diff --git a/docs/modules/development/pages/adr/0003-distributed-workqueue.md.adoc b/docs/modules/development/pages/adr/0003-distributed-workqueue.md.adoc
new file mode 100644
index 0000000..9e7ffa4
--- /dev/null
+++ b/docs/modules/development/pages/adr/0003-distributed-workqueue.md.adoc
@@ -0,0 +1,29 @@
+= 3. Distributed WorkQueue
+
+Date: 2019-10-02
+
+== Status
+
+Accepted (lazy consensus)
+
+Superseded by xref:0016-distributed-workqueue.adoc[16. Distributed WorkQueue]
+
+== Context
+
+By switching the task manager to a distributed implementation, we need to be able to run a `Task` on any node of the cluster.
+
+== Decision
+
+For the time being we will keep the sequential execution property of the task manager.
+This is an intermediate milestone toward the final implementation which will drop this property.
+
+* Use a RabbitMQ queue as a workqueue where only the `Created` events are pushed into.
+This queue will be exclusive and events will be consumed serially.
+Technically this means the queue will be consumed with a `prefetch = 1`.
+The worker on the same node will listen to the queue and will ack a message only once its task is finished (`Completed`, `Failed`, `Cancelled`).
+
+== Consequences
+
+* This is a temporary solution, not safe for production use: if the node promoted to exclusive listener of the queue dies, no more tasks will be run
+* The serial execution of tasks does not leverage cluster scalability.
diff --git a/docs/modules/development/pages/adr/0004-distributed-tasks-listing.md.adoc b/docs/modules/development/pages/adr/0004-distributed-tasks-listing.md.adoc
new file mode 100644
index 0000000..917b003
--- /dev/null
+++ b/docs/modules/development/pages/adr/0004-distributed-tasks-listing.md.adoc
@@ -0,0 +1,20 @@
+= 4. Distributed Tasks listing
+
+Date: 2019-10-02
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+By switching the task manager to a distributed implementation, we need to be able to `list` all ``Task``s running on the cluster.
+
+== Decision
+
+* Read a Cassandra projection to get all ``Task``s and their `Status`
+
+== Consequences
+
+* A Cassandra projection has to be implemented
+* The `EventSourcingSystem` should have a `Listener` updating the `Projection`
diff --git a/docs/modules/development/pages/adr/0005-distributed-task-termination-ackowledgement.md.adoc b/docs/modules/development/pages/adr/0005-distributed-task-termination-ackowledgement.md.adoc
new file mode 100644
index 0000000..87ff4f8
--- /dev/null
+++ b/docs/modules/development/pages/adr/0005-distributed-task-termination-ackowledgement.md.adoc
@@ -0,0 +1,24 @@
+= 5. Distributed Task termination acknowledgement
+
+Date: 2019-10-02
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+By switching the task manager to a distributed implementation, we need to be able to execute a `Task` on any node of the cluster.
+We need a way for nodes to be notified of any termination event so that we can notify blocked clients.
+
+== Decision
+
+* Create a `RabbitMQEventHandler` which publishes ``Event``s pushed to the task manager's event system to RabbitMQ
+* All the events which end a `Task` (`Completed`, `Failed`, and `Canceled`) have to be transmitted to other nodes
+
+== Consequences
+
+* A new kind of `Event` should be created: `TerminationEvent`, which includes `Completed`, `Failed`, and `Canceled`
+* ``TerminationEvent``s will be broadcast on an exchange, which will later be bound to all interested components
+* `EventSourcingSystem.dispatch` should use `RabbitMQ` to dispatch ``Event``s instead of triggering local ``Listener``s
+* Any node can be notified when a `Task` emits a termination event
diff --git a/docs/modules/development/pages/adr/0006-task-serialization.md.adoc b/docs/modules/development/pages/adr/0006-task-serialization.md.adoc
new file mode 100644
index 0000000..bae833f
--- /dev/null
+++ b/docs/modules/development/pages/adr/0006-task-serialization.md.adoc
@@ -0,0 +1,30 @@
+= 6. Task serialization
+
+Date: 2019-10-02
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+By switching the task manager to a distributed implementation, we need to be able to execute a `Task` on any node of the cluster.
+We need a way to describe the `Task` to be executed and to serialize it, so that it can be stored in the `Created` event, which will be persisted in the Event Store and sent on the event bus.
+
+At this point in time a `Task` can contain any arbitrary code.
+It's not an element of a finite set of actions.
+
+== Decision
+
+* Create a `Factory` for each kind of `Task`
+* Inject a `Factory` `Registry` via a Guice module
+* `Task` serialization will be done in JSON; we will take inspiration from `EventSerializer`
+* Every `Task` should have a specific integration test demonstrating that serialization works
+* Each `Task` is responsible for dealing with the different versions of its serialized form
+
+== Consequences
+
+* Every `Task` should be serializable.
+* Every `Task` should provide a `Factory` responsible for deserializing and instantiating it.
+* Every `Factory` should be registered through a Guice module, defined in each project containing a `Factory`
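As a schematic illustration of the Factory/Registry decision above, deserialization can be a lookup keyed by a task type discriminator. All names below (`TaskRegistry`, `TaskFactory`) are hypothetical, not James's actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Hypothetical sketch of the Factory/Registry pattern described in this ADR.
public class TaskRegistry {
    public interface Task {
        String type();
    }

    // A Factory turns a serialized JSON payload back into a Task instance.
    public interface TaskFactory extends Function<String, Task> {}

    private final Map<String, TaskFactory> factories = new HashMap<>();

    // In James, each registration would be contributed by a Guice module.
    public void register(String type, TaskFactory factory) {
        factories.put(type, factory);
    }

    // Deserialization looks up the factory by the task's type discriminator,
    // so the Created event only needs to persist the type plus a JSON payload.
    public Optional<Task> deserialize(String type, String jsonPayload) {
        return Optional.ofNullable(factories.get(type))
            .map(factory -> factory.apply(jsonPayload));
    }
}
```

Unknown types yield an empty `Optional`, letting the caller decide how to handle tasks it cannot instantiate.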
diff --git a/docs/modules/development/pages/adr/0007-distributed-task-cancellation.md.adoc b/docs/modules/development/pages/adr/0007-distributed-task-cancellation.md.adoc
new file mode 100644
index 0000000..7e0f6b1
--- /dev/null
+++ b/docs/modules/development/pages/adr/0007-distributed-task-cancellation.md.adoc
@@ -0,0 +1,21 @@
+= 7. Distributed Task cancellation
+
+Date: 2019-10-02
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+A `Task` could be run on any node of the cluster.
+To interrupt it we need to notify all nodes of the cancel request.
+
+== Decision
+
+* We will add an EventHandler to broadcast the `CancelRequested` event to all the workers listening on a RabbitMQ broadcasting exchange.
+* The `TaskManager` should subscribe to the exchange and will apply `cancel` on the `TaskManagerWorker` if the `Task` is waiting or in progress on it.
+
+== Consequences
+
+* The task manager's event system should be bound to the RabbitMQ exchange which publishes the ``TerminationEvent``s
diff --git a/docs/modules/development/pages/adr/0008-distributed-task-await.md.adoc b/docs/modules/development/pages/adr/0008-distributed-task-await.md.adoc
new file mode 100644
index 0000000..06424d5
--- /dev/null
+++ b/docs/modules/development/pages/adr/0008-distributed-task-await.md.adoc
@@ -0,0 +1,30 @@
+= 8. Distributed Task await
+
+Date: 2019-10-02
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+By switching the task manager to a distributed implementation, we need to be able to `await` a `Task` running on any node of the cluster.
+
+== Decision
+
+* Broadcast ``Event``s in `RabbitMQ`
+
+== Consequences
+
+* `RabbitMQTaskManager` should broadcast termination ``Event``s (`Completed`, `Failed`, `Canceled`)
+
+* `RabbitMQTaskManager.await` should first check the ``Task``'s state and, if it is not terminated, listen to RabbitMQ
+* The await should have a timeout limit
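The check-then-listen behaviour above can be sketched as follows; this is a minimal illustration with hypothetical names (`TaskAwaiter`, `Status`), with a `CompletableFuture` standing in for James's RabbitMQ-based termination ``Event``s:

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch: check the stored state first, then fall back to
// listening for a termination Event, bounded by a timeout.
public class TaskAwaiter {
    public enum Status {
        WAITING, IN_PROGRESS, COMPLETED, FAILED, CANCELED;

        boolean isTerminal() {
            return this == COMPLETED || this == FAILED || this == CANCELED;
        }
    }

    public static Status await(Optional<Status> storedStatus,
                               CompletableFuture<Status> terminationEvents,
                               long timeoutMillis) {
        // Fast path: the Task already reached a terminal state.
        if (storedStatus.filter(Status::isTerminal).isPresent()) {
            return storedStatus.get();
        }
        // Slow path: wait for a broadcast termination Event, with a timeout
        // so that a lost event cannot block the caller forever.
        try {
            return terminationEvents.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            throw new IllegalStateException(
                "Task did not terminate within " + timeoutMillis + " ms", e);
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }
}
```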
diff --git a/docs/modules/development/pages/adr/0009-disable-elasticsearch-dynamic-mapping.md.adoc b/docs/modules/development/pages/adr/0009-disable-elasticsearch-dynamic-mapping.md.adoc
new file mode 100644
index 0000000..b82e965
--- /dev/null
+++ b/docs/modules/development/pages/adr/0009-disable-elasticsearch-dynamic-mapping.md.adoc
@@ -0,0 +1,40 @@
+= 9. Disable ElasticSearch dynamic mapping
+
+Date: 2019-10-10
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+We rely on dynamic mappings to expose our mail headers as a JSON map.
+Dynamic mapping is enabled so that not-yet-encountered headers can be added to the mapping.
+
+This causes a series of functional issues:
+
+* Maximum field count can easily be exceeded
+* Field type guessing can be wrong, leading to subsequent header omissions (see JAMES-2078)
+* Document indexation needs to be paused at the index level during mapping changes to avoid concurrent modifications, negatively impacting performance
+
+== Decision
+
+Rely on nested objects to represent mail headers within a mapping
+
+== Consequences
+
+The index needs to be re-created.
+Document re-indexation is needed.
+
+This solves the aforementioned bugs (see JAMES-2078).
+
+Regarding performance:
+
+* Default message list performance is unimpacted
+* We noticed a 4% improvement in indexing throughput
+* We noticed a 7% increase in storage space per message
+
+== References
+
+* https://github.com/linagora/james-project/pull/2726[JAMES-2078] JAMES-2078 Add an integration test to prove that dynamic mapping can lead to ignored header fields
+* https://issues.apache.org/jira/browse/JAMES-2078[JIRA]
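Schematically, replacing dynamically mapped header fields with a fixed nested object could look like the mapping fragment below. This is only an illustration (field names are invented, and ElasticSearch 6.x additionally nests mappings under the document type); it is not the actual James mapping:

```json
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "headers": {
        "type": "nested",
        "properties": {
          "headerName":  { "type": "keyword" },
          "headerValue": { "type": "text" }
        }
      }
    }
  }
}
```

Each header becomes a name/value pair inside a single `nested` field, so new headers never add fields to the mapping.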
diff --git a/docs/modules/development/pages/adr/0009-java-11-migration.md.adoc b/docs/modules/development/pages/adr/0009-java-11-migration.md.adoc
new file mode 100644
index 0000000..4960085
--- /dev/null
+++ b/docs/modules/development/pages/adr/0009-java-11-migration.md.adoc
@@ -0,0 +1,23 @@
+= 9. Migration to Java Runtime Environment 11
+
+Date: 2019-10-24
+
+== Status
+
+Proposed
+
+== Context
+
+Java 11 is the only "Long Term Support" Java release right now, so more and more people will use it exclusively.
+
+James has been known to build with the Java 11 compiler for some weeks.
+
+== Decision
+
+We adopt Java Runtime Environment 11 for James as a runtime, to benefit from a supported runtime and from new features of the language and the platform.
+
+== Consequences
+
+* It requires the upgrade of Spring to 4.3.x.
+* All docker images should be updated to adoptopenjdk 11.
+* The documentation should be updated accordingly.
diff --git a/docs/modules/development/pages/adr/0010-enable-elasticsearch-routing.md.adoc b/docs/modules/development/pages/adr/0010-enable-elasticsearch-routing.md.adoc
new file mode 100644
index 0000000..bbac6c2
--- /dev/null
+++ b/docs/modules/development/pages/adr/0010-enable-elasticsearch-routing.md.adoc
@@ -0,0 +1,41 @@
+= 10. Enable ElasticSearch routing
+
+Date: 2019-10-17
+
+== Status
+
+Accepted (lazy consensus)
+
+Additional performance testing is required for adoption.
+
+== Context
+
+Our queries are mostly bounded to a mailbox or a user.
+We can easily limit the number of ElasticSearch nodes involved in a given query by grouping the underlying documents on the same node using a routing key.
+
+Without a routing key, each shard needs to execute the query.
+The coordinator also needs to wait for the slowest shard.
+
+Using a routing key unlocks a significant throughput enhancement (proportional to the number of shards) as well as a possible high-percentile latency enhancement.
+
+As most requests are restricted to a single routing key, most search requests will hit a single shard, as opposed to non-routed searches, which would hit every shard (each shard would return the requested number of documents, to be ordered and limited again in the coordinating node).
+This makes searches scale more linearly.
+
+== Decision
+
+Enable ElasticSearch routing.
+
+Messages should be indexed by mailbox.
+
+Quota Ratio should be indexed by user.
+
+== Consequences
+
+A data reindex is needed.
+
+On a single ElasticSearch node with 5 shards, we noticed a latency reduction for mailbox search (2x mean time and 3x 99th percentile reduction)
+
+== References
+
+* https://www.elastic.co/guide/en/elasticsearch/reference/6.3/mapping-routing-field.html
+* https://issues.apache.org/jira/browse/JAMES-2917[JIRA]
diff --git a/docs/modules/development/pages/adr/0011-remove-elasticsearch-document-source.md.adoc b/docs/modules/development/pages/adr/0011-remove-elasticsearch-document-source.md.adoc
new file mode 100644
index 0000000..bf5ff12
--- /dev/null
+++ b/docs/modules/development/pages/adr/0011-remove-elasticsearch-document-source.md.adoc
@@ -0,0 +1,37 @@
+= 11. Disable ElasticSearch source
+
+Date: 2019-10-17
+
+== Status
+
+Rejected
+
+The benefits do not outweigh the costs.
+
+== Context
+
+Though very handy to have around, the source field does incur storage overhead within the index.
+
+== Decision
+
+Disable `_source` for ElasticSearch indexed documents.
+
+== Consequences
+
+Given a dataset composed of small text/plain messages, we notice a 20% space reduction of data stored on ElasticSearch.
+
+However, patch updates can no longer be performed upon flag updates.
+Upon a flag update we would need to fully read the mail content, MIME-parse it, potentially HTML-parse it, extract attachment content again, and finally re-index the full document.
+
+Without the `_source` field, flag updates are two times slower (four times slower at the 99th percentile), and this negatively impacts other requests.
+
+Note that `_source` allows admin flexibility, such as performing index-level changes without downtime, amongst others:
+
+* Increasing the number of shards
+* Modifying the replication factor
+* Changing analysers (e.g. allowing an admin to configure a French analyser instead of an English one)
+
+== References
+
+* https://www.elastic.co/guide/en/elasticsearch/reference/6.3/mapping-source-field.html
+* https://issues.apache.org/jira/browse/JAMES-2906[JIRA]
diff --git a/docs/modules/development/pages/adr/0012-jmap-partial-reads.md.adoc b/docs/modules/development/pages/adr/0012-jmap-partial-reads.md.adoc
new file mode 100644
index 0000000..d78610e
--- /dev/null
+++ b/docs/modules/development/pages/adr/0012-jmap-partial-reads.md.adoc
@@ -0,0 +1,50 @@
+= 12. Projections for JMAP Messages
+
+Date: 2019-10-09
+
+== Status
+
+Accepted
+
+== Context
+
+JMAP core (RFC 8620) requires that the server return only the properties requested by the client.
+
+James currently computes all of the properties, regardless of their cost and of whether the client asked for them.
+
+We can clearly save latency and resources by avoiding reading or computing expensive properties that were not explicitly requested by the client.
+
+== Decision
+
+Introduce two new data structures representing JMAP messages:
+
+* One with only metadata
+* One with metadata + headers
+
+Given the properties requested by the client, the most appropriate message data structure will be computed, on top of existing message storage APIs that should remain unchanged.
+
+Some performance tests will be run in order to evaluate the improvements.
+
+== Consequences
+
+GetMessages with a limited set of requested properties no longer necessarily results in a full database message read.
+We thus have a significant improvement, for instance when only metadata is requested.
+
+Given the following scenario, played by 5000 users per hour (constant rate):
+
+* Authenticate
+* List mailboxes
+* List messages in one of their mailboxes
+* Get 10 times the mailboxIds and keywords of the given messages
+
+We went from:
+
+* A 20% failure and timeout rate before this change to no failure
+* Mean time for GetMessages went from 27 159 ms to 27 ms (a 1000-time improvement), and for all operations from 27 591 ms to 60 ms (a 460-time improvement)
+* P99 is a metric that did not make sense initially, because the simulation exceeded the Gatling (the performance measuring tool we use) timeout (60s) at the p50 percentile.
+After this proposal, p99 for the entire scenario is 1 383 ms
+
+== References
+
+* /get method: https://tools.ietf.org/html/rfc8620#section-5.1
+* https://issues.apache.org/jira/browse/JAMES-2919[JIRA]
diff --git a/docs/modules/development/pages/adr/0013-precompute-jmap-preview.md.adoc b/docs/modules/development/pages/adr/0013-precompute-jmap-preview.md.adoc
new file mode 100644
index 0000000..4758f08
--- /dev/null
+++ b/docs/modules/development/pages/adr/0013-precompute-jmap-preview.md.adoc
@@ -0,0 +1,56 @@
+= 13. Precompute JMAP Email preview
+
+Date: 2019-10-09
+
+== Status
+
+Accepted
+
+== Context
+
+JMAP messages have a handy preview property displaying the first 256 characters of meaningful text of a message.
+
+This property is often displayed for message listing in JMAP clients, thus it is queried a lot.
+
+Currently, to get the preview, James retrieves the full message body, parses it using MIME parsers, removes HTML and keeps meaningful text.
+
+== Decision
+
+We should pre-compute message previews.
+
+A MailboxListener will compute the preview and store it in a MessagePreviewStore.
+
+We should have a Cassandra and memory implementation.
+
+When the preview is precomputed, we can consider the "preview" property of these messages as metadata.
+
+When the preview is not precomputed, we should compute the preview for these messages on the fly, and save the result for later.
+
+We should provide a webAdmin task allowing an administrator to rebuild the projection.
+Computing and storing in MessagePreviewStore is idempotent, so the task can be run live without any concurrency problem.
+
+Some performance tests will be run in order to evaluate the improvements.
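
A minimal sketch of the preview extraction step, assuming a naive tag-stripping approach (the real James implementation relies on proper MIME and HTML parsers):

```java
// Illustrative preview computation: strip HTML tags, collapse whitespace,
// and keep only the first 256 characters of the remaining text.
public class PreviewComputer {
    private static final int MAX_LENGTH = 256;

    public static String compute(String bodyText) {
        String withoutHtml = bodyText.replaceAll("<[^>]*>", " ");
        String collapsed = withoutHtml.replaceAll("\\s+", " ").trim();
        return collapsed.length() <= MAX_LENGTH
            ? collapsed
            : collapsed.substring(0, MAX_LENGTH);
    }
}
```

Precomputing this once per message, in a MailboxListener, avoids repeating the body retrieval and parsing on every listing.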
+
+== Consequences
+
+Given the following scenario played by 2500 users per hour (constant rate):
+
+* Authenticate
+* List mailboxes
+* List messages in one of their mailboxes
+* Get 8 times the properties expected to be fast to fetch with JMAP
+
+We went from:
+
+* A 7% failure and timeout rate before this change to almost no failure
+* Mean time for GetMessages went from 9 710 ms to 434 ms (a 22-time improvement), and for all operations from 12 802 ms to 407 ms (a 31-time improvement)
+* P99 is a metric that did not make sense initially, because the simulation exceeded the Gatling (the performance measuring tool we use) timeout (60s) at the p95 percentile.
+After this proposal, p99 for the entire scenario is 1 747 ms
+
+As such, this changeset significantly increases the JMAP performance.
+
+== References
+
+* https://jmap.io/server.html#1-emails JMAP client guide states that the preview needs to be quick to retrieve
+* A similar decision had been taken at FastMail: https://fastmail.blog/2014/12/15/dec-15-putting-the-fast-in-fastmail-loading-your-mailbox-quickly/
+* https://issues.apache.org/jira/browse/JAMES-2919[JIRA]
diff --git a/docs/modules/development/pages/adr/0014-blobstore-storage-policies.md.adoc b/docs/modules/development/pages/adr/0014-blobstore-storage-policies.md.adoc
new file mode 100644
index 0000000..819109d
--- /dev/null
+++ b/docs/modules/development/pages/adr/0014-blobstore-storage-policies.md.adoc
@@ -0,0 +1,63 @@
+= 14. Add storage policies for BlobStore
+
+Date: 2019-10-09
+
+== Status
+
+Proposed
+
+Adoption needs to be backed by some performance tests, as well as by measuring how data repartition between Cassandra and the object storage shifts.
+
+== Context
+
+James exposes a simple BlobStore API for storing raw data.
+However such raw data often vary in size and access patterns.
+
+As an example:
+
+* Mailbox message headers are expected to be small and frequently accessed
+* Mailbox message bodies are expected to have sizes ranging from small to big, but are infrequently accessed
+* DeletedMessageVault message headers are expected to be small and infrequently accessed
+
+Also, the capabilities of the various implementations of BlobStore have different strengths:
+
+* CassandraBlobStore is efficient for small blobs and offers low latency.
+However it is known to be expensive for big blobs.
+Cassandra storage is expensive.
+* Object Storage blob store is good at storing big blobs, but it induces higher latencies than Cassandra for small blobs for a cost gain that isn't worth it.
+
+Thus, significant performance and cost ratio refinement could be unlocked by using the right blob store for the right blob.
+
+== Decision
+
+Introduce StoragePolicies at the level of the BlobStore API.
+
+The proposed policies include:
+
+* SizeBasedStoragePolicy: The blob underlying storage medium will be chosen depending on its size.
+* LowCostStoragePolicy: The blob is expected to be saved in low cost storage.
+Access is expected to be infrequent.
+* PerformantStoragePolicy: The blob is expected to be saved in performant storage.
+Access is expected to be frequent.
+
+A HybridBlobStore will replace the current UnionBlobStore and will allow choosing between the Cassandra and ObjectStorage implementations depending on the policy.
+
+DeletedMessageVault, BlobExport & MailRepository will rely on LowCostStoragePolicy.
+Other BlobStore users will rely on SizeBasedStoragePolicy.
+
+Some performance tests will be run in order to evaluate the improvements.
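
The size-based dispatch can be pictured as follows. The threshold value and names are assumptions for illustration, not the actual HybridBlobStore code:

```java
// Hypothetical SizeBasedStoragePolicy decision: small blobs go to Cassandra
// (low latency), large blobs go to the object storage (low cost).
public class SizeBasedStoragePolicy {
    public enum Storage { CASSANDRA, OBJECT_STORAGE }

    // Illustrative threshold; the real value would be tuned by benchmarks
    private static final long SIZE_THRESHOLD_BYTES = 32 * 1024;

    public static Storage select(long blobSizeInBytes) {
        return blobSizeInBytes <= SIZE_THRESHOLD_BYTES
            ? Storage.CASSANDRA
            : Storage.OBJECT_STORAGE;
    }
}
```

LowCostStoragePolicy and PerformantStoragePolicy would bypass the size check and pin the blob to one backend.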
+
+== Consequences
+
+We expect small frequently accessed blobs to be located in Cassandra, allowing ObjectStorage to be used mainly for large costly blobs.
+
+In case of a less than 5% improvement, the code will not be added to the codebase and the proposal will get the status 'rejected'.
+
+We expect more data to be stored in Cassandra.
+We need to quantify this for adoption.
+
+As reads will query both blob stores, no migration is required to use this composite blob store on top of an existing implementation; however, we will benefit from the performance enhancements only for newly stored blobs.
+
+== References
+
+* https://issues.apache.org/jira/browse/JAMES-2921[JIRA]
diff --git a/docs/modules/development/pages/adr/0015-objectstorage-blobid-list.md.adoc b/docs/modules/development/pages/adr/0015-objectstorage-blobid-list.md.adoc
new file mode 100644
index 0000000..7c94209
--- /dev/null
+++ b/docs/modules/development/pages/adr/0015-objectstorage-blobid-list.md.adoc
@@ -0,0 +1,68 @@
+= 15. Persist BlobIds to avoid persisting the same blobs several times within ObjectStorage
+
+Date: 2019-10-09
+
+== Status
+
+Proposed
+
+Adoption needs to be backed by some performance tests.
+
+== Context
+
+A given mail is often written to the blob store by different components, and mail traffic is heavily duplicated (several recipients receiving a similar email, same attachments).
+This causes a given blob to often be persisted several times.
+
+Cassandra was the first implementation of the blobStore.
+Cassandra is a heavily write optimized NoSQL database.
+One can assume writes to be fast on top of Cassandra.
+Thus we assumed we could always overwrite blobs.
+
+This usage pattern was also adopted for BlobStore on top of ObjectStorage.
+
+However writing in Object storage:
+
+* Takes time
+* Is billed by most cloud providers
+
+Thus choosing a right strategy to avoid writing blob twice is desirable.
+
+However, ObjectStorage (OpenStack Swift) `exist` method was not efficient enough to be a real cost and performance saver.
+
+== Decision
+
+Rely on a StoredBlobIdsList API to know which blob is persisted or not in object storage.
+Provide a Cassandra implementation of it.
+Located in blob-api for convenience, this is not a top-level API.
+It is intended to be used by some blobStore implementations (here only ObjectStorage).
+We will provide a CassandraStoredBlobIdsList in blob-cassandra project so that guice products combining object storage and Cassandra can define a binding to it.
+
+* When saving a blob with a precomputed blobId, we can check the existence of the blob in storage, possibly avoiding the expensive "save".
+* When saving a blob too big to precompute its blobId, once the blob has been streamed using a temporary random blobId, the copy operation can be avoided and the temporary blob can be directly removed.
+
+Cassandra is probably faster doing "write every time" rather than "read before write", so we should not use the stored blob projection for it.
+
+Some performance tests will be run in order to evaluate the improvements.
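
The "check the id list before saving" strategy can be sketched as below. The StoredBlobIdsList and the object storage are replaced by in-memory stand-ins; the real implementation would back the id list with Cassandra:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of deduplicated blob saving: consult the stored-ids list first,
// and only touch the object storage when the blob is unknown.
public class DeduplicatingBlobStore {
    private final Set<String> storedBlobIds = new HashSet<>(); // stand-in for the Cassandra-backed StoredBlobIdsList
    private int physicalWrites = 0;

    // Returns true when an actual write to the object storage was performed
    public boolean save(String blobId, byte[] data) {
        if (storedBlobIds.contains(blobId)) {
            return false; // blob already persisted: skip the expensive write
        }
        physicalWrites++; // simulate the object storage write of `data`
        storedBlobIds.add(blobId); // persist the id only after a successful save
        return true;
    }

    public int physicalWrites() {
        return physicalWrites;
    }
}
```

Because the id is registered only after the save succeeds, a crash in between merely causes a duplicate write later, which is the current behaviour anyway.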
+
+== Consequences
+
+We expect to reduce the amount of writes to the object storage.
+This is expected to bring:
+
+* reduced operational costs on cloud providers
+* improved performance
+* reduced latency under load
+
+As id persistence in StoredBlobIdsList will be done once the blob is successfully saved, inconsistencies in StoredBlobIdsList will lead to duplicated saved blobs, which is the current behaviour.
+
+In case of a less than 5% improvement, the code will not be added to the codebase and the proposal will get the status 'rejected'.
+
+== Reference
+
+A previous optimization proposal used blob existence checks before persisting.
+This work was done using the ObjectStorage exist method and was proven not efficient enough.
+
+https://github.com/linagora/james-project/pull/2011 (V2)
+
+* https://issues.apache.org/jira/browse/JAMES-2921[JIRA]
diff --git a/docs/modules/development/pages/adr/0016-distributed-workqueue.md.adoc b/docs/modules/development/pages/adr/0016-distributed-workqueue.md.adoc
new file mode 100644
index 0000000..7f6133f
--- /dev/null
+++ b/docs/modules/development/pages/adr/0016-distributed-workqueue.md.adoc
@@ -0,0 +1,29 @@
+= 16. Distributed WorkQueue
+
+Date: 2019-12-03
+
+== Status
+
+Accepted (lazy consensus)
+
+Supersedes xref:0003-distributed-workqueue.adoc[3.
+Distributed WorkQueue]
+
+== Context
+
+By switching the task manager to a distributed implementation, we need to be able to run a `Task` on any node of the cluster.
+
+== Decision
+
+For the time being we will keep the sequential execution property of the task manager.
+This is an intermediate milestone toward the final implementation which will drop this property.
+
+* Use a RabbitMQ queue as a workqueue where only the `Created` events are pushed into.
+Instead of using the brittle exclusive queue mechanism described in xref:0003-distributed-workqueue.adoc[3.
+Distributed WorkQueue], we will now use the natively supported https://www.rabbitmq.com/consumers.html#single-active-consumer[Single Active Consumer] mechanism.
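
Single Active Consumer is enabled at queue declaration time through the `x-single-active-consumer` argument (available since RabbitMQ 3.8). A sketch of the arguments map such a declaration would pass to the Java AMQP client:

```java
import java.util.Map;

// The only argument needed to turn a classic queue into a
// single-active-consumer work queue.
public class WorkQueueArguments {
    public static Map<String, Object> singleActiveConsumer() {
        return Map.of("x-single-active-consumer", true);
    }
}
```

With this argument set, RabbitMQ delivers to a single consumer at a time and automatically promotes another registered consumer if it dies.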
+
+== Consequences
+
+* This solution is safer to use in production: if the active consumer dies, another one is promoted instead.
+* This change requires RabbitMQ version 3.8.0 or later.
+* The serial execution of tasks still does not leverage cluster scalability.
diff --git a/docs/modules/development/pages/adr/0017-file-mail-queue-deprecation.md.adoc b/docs/modules/development/pages/adr/0017-file-mail-queue-deprecation.md.adoc
new file mode 100644
index 0000000..ac38a93
--- /dev/null
+++ b/docs/modules/development/pages/adr/0017-file-mail-queue-deprecation.md.adoc
@@ -0,0 +1,43 @@
+= 17. FileMailQueue deprecation
+
+Date: 2019-12-04
+
+== Status
+
+Proposed
+
+== Context
+
+James offers several implementations of MailQueue, a component allowing asynchronous mail processing upon SMTP mail reception.
+These include:
+
+* Default embedded ActiveMQ mail queue implementation, leveraging the JMS APIs and using the filesystem.
+* RabbitMQMailQueue allowing several James instances to share their MailQueue content.
+* And FileMailQueue directly leveraging the file system.
+
+We introduced a junit5 test contract regarding management features and concurrency issues, and FileMailQueue does not meet this contract.
+This results in some tests being disabled and in an unstable test suite.
+
+FileMailQueue tries to implement a message queue within James code, which does not really make sense as some other projects already provide one.
+
+== Decision
+
+Deprecate FileMailQueue components.
+
+Disable FileMailQueue tests.
+
+Target a removal as part of 3.6.0.
+
+== Consequences
+
+As FileMailQueue is not exposed to the end user, be it over Spring or Guice, the impact of this deprecation and removal should be limited.
+
+We also expect our test suite to be more stable.
+
+== Reference
+
+Issues listing FileMailQueue defects:
+
+* https://issues.apache.org/jira/browse/JAMES-2298 Unsupported remove management feature
+* https://issues.apache.org/jira/browse/JAMES-2954 Incomplete browse implementation + Mixing concurrent operation might lead to a deadlock and missing fields
+* https://issues.apache.org/jira/browse/JAMES-2979 dequeue is not thread safe
diff --git a/docs/modules/development/pages/adr/0018-jmap-new-specs.md.adoc b/docs/modules/development/pages/adr/0018-jmap-new-specs.md.adoc
new file mode 100644
index 0000000..0fd12b8
--- /dev/null
+++ b/docs/modules/development/pages/adr/0018-jmap-new-specs.md.adoc
@@ -0,0 +1,65 @@
+= 18. New JMAP specifications adoption
+
+Date: 2020-02-06
+
+== Status
+
+Proposed
+
+== Context
+
+Historically, James has been an early adopter of the JMAP specification, and a first partial implementation was conducted when JMAP was just a draft.
+But with time, the IETF draft went with radical changes and the community could not keep this implementation up to date with the spec changes.
+
+As of summer 2019, JMAP core (https://tools.ietf.org/html/rfc8620[RFC 8620]) and JMAP mail (https://tools.ietf.org/html/rfc8621[RFC 8621]) have been officially published.
+Thus we should implement these new specifications to claim JMAP support.
+
+We need to keep in mind though that part of the community actively relies on the current 'draft' implementation of JMAP existing in James.
+
+== Decision
+
+We decided to do as follows:
+
+* Rename packages `server/protocols/jmap*` and guice packages `server/container/guice/protocols/jmap*` to `jmap-draft`.
+`JMAPServer` should also be renamed to `JMAPDraftServer` (this has already been contributed https://github.com/apache/james-project/pull/164[here], thanks to @cketti).
+* Port `jmap-draft` to be served with a reactive technology
+* Implement a JMAP meta project to select the JMAP version specified in the accept header and map it to the correct implementation
+* Create a new `jmap` package
+* Implement the new JMAP request structure with the https://jmap.io/spec-core.html#the-coreecho-method[echo] method
+* Implement authentication and session of the new JMAP protocol
+* Implement protocol-level error handling
+* Duplicate and adapt existing mailbox methods of `jmap-draft` to `jmap`
+* Duplicate and adapt existing email methods of `jmap-draft` to `jmap`
+* Duplicate and adapt existing vacation methods of `jmap-draft` to `jmap`
+* Support uploads/downloads
+
+Then, once we finish porting our existing methods to the new JMAP specifications, we can implement these new features:
+
+* Accounts
+* Identities
+* EmailSubmission
+* Push and queryChanges
+* Threads
+
+We decided to support `jmap` on top of memory-guice and distributed-james products for now.
+
+We should ensure no changes are made to `jmap-draft` while implementing the new `jmap` one.
+
+Regarding the versioning in the accept headers:
+
+* `Accept: application/json;jmapVersion=draft` would redirect to `jmap-draft`
+* `Accept: application/json;jmapVersion=rfc-8620` would redirect to `jmap`
+* When the `jmapVersion` is omitted, we will redirect first towards `jmap-draft`, then to `jmap` when `jmap-draft` becomes deprecated
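
The routing rules above can be sketched as a small dispatcher. Class and method names here are illustrative, not the actual meta-project code:

```java
// Illustrative routing on the Accept header's jmapVersion parameter.
public class JmapVersionRouter {
    public enum Version { DRAFT, RFC8620 }

    public static Version fromAcceptHeader(String acceptHeader) {
        for (String part : acceptHeader.split(";")) {
            String trimmed = part.trim();
            if (trimmed.startsWith("jmapVersion=")) {
                String version = trimmed.substring("jmapVersion=".length());
                return version.equals("rfc-8620") ? Version.RFC8620 : Version.DRAFT;
            }
        }
        // Omitted parameter: route to jmap-draft first, as long as it is not deprecated
        return Version.DRAFT;
    }
}
```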
+
+It's worth mentioning as well that we took the decision of writing this new implementation using `Scala`.
+
+== Consequences
+
+* Each feature implemented will respect the final specifications of JMAP
+* Getting missing features that are necessary to deliver a better mailing experience with James, like push, query changes and threads
+* Separating the current implementation from the new one will allow existing `jmap-draft` clients to smoothly transition to `jmap`, then trigger the classic "deprecation-then-removal" process.
+
+== References
+
+* A discussion around this already happened in September 2019 on the server-dev mailinglist: https://www.mail-archive.com/server-dev@james.apache.org/msg62072.html[JMAP protocol: Implementing RFC-8620 & RFC-8621]
+* JIRA: https://issues.apache.org/jira/browse/JAMES-2884[JAMES-2884]
diff --git a/docs/modules/development/pages/adr/0019-reactor-netty-adoption.md.adoc b/docs/modules/development/pages/adr/0019-reactor-netty-adoption.md.adoc
new file mode 100644
index 0000000..4b48e14
--- /dev/null
+++ b/docs/modules/development/pages/adr/0019-reactor-netty-adoption.md.adoc
@@ -0,0 +1,40 @@
+= 19. Reactor-netty adoption for JMAP server implementation
+
+Date: 2020-02-28
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+After adopting the latest specifications of JMAP (see https://github.com/apache/james-project/blob/master/src/adr/0018-jmap-new-specs.md[new JMAP specifications adoption ADR]), it was agreed that we need to be able to serve both `jmap-draft` and the new `jmap` with a reactive server.
+
+The current outdated implementation of JMAP in James uses a non-reactive https://www.eclipse.org/jetty/[Jetty server].
+
+There are many possible candidates as reactive servers.
+Among the most popular ones for Java:
+
+* https://spring.io[Spring]
+* https://github.com/reactor/reactor-netty[Reactor-netty]
+* https://doc.akka.io/docs/akka-http/current/introduction.html[Akka HTTP]
+* ...
+
+== Decision
+
+We decided to use `reactor-netty` for the following reasons:
+
+* It's a reactive server
+* It's using https://projectreactor.io/[Reactor], which is the same technology that we use in the rest of our codebase
+* Implementing JMAP does not require high level HTTP server features
+
+== Consequences
+
+* Porting current `jmap-draft` to use a `reactor-netty` server instead of a Jetty server
+* The `reactor-netty` server should serve as well the new `jmap` implementation
+* We will be able to refactor and get end-to-end reactive operations for JMAP, unlocking performance gains
+
+== References
+
+* JIRA: https://issues.apache.org/jira/browse/JAMES-3078[JAMES-3078]
+* JMAP new specifications adoption ADR: https://github.com/apache/james-project/blob/master/src/adr/0018-jmap-new-specs.md
diff --git a/docs/modules/development/pages/adr/0020-cassandra-mailbox-object-consistency.md.adoc b/docs/modules/development/pages/adr/0020-cassandra-mailbox-object-consistency.md.adoc
new file mode 100644
index 0000000..e637748
--- /dev/null
+++ b/docs/modules/development/pages/adr/0020-cassandra-mailbox-object-consistency.md.adoc
@@ -0,0 +1,73 @@
+= 20. Cassandra Mailbox object consistency
+
+Date: 2020-02-27
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+Mailboxes are denormalized in Cassandra in order to access them both by their immutable identifier and their mailbox path (name):
+
+* `mailbox` table stores mailboxes by their immutable identifier
+* `mailboxPathV2` table stores mailboxes by their mailbox path
+
+We furthermore maintain two invariants on top of these tables:
+
+* *mailboxPath* uniqueness.
+Each mailbox path can be used at most once.
+This is ensured by writing the mailbox path first using Lightweight Transactions.
+* *mailboxId* uniqueness.
+Each mailbox identifier is used by only a single path.
+We have no real way to ensure a given mailbox is not referenced by two paths.
+
+Failures during the denormalization process will lead to inconsistencies between the two tables.
+
+This can lead to the following user experience:
+
+----
+BOB creates mailbox A
+Denormalization fails and an error is returned to BOB
+
+BOB retries mailbox A creation
+BOB is being told mailbox A already exists
+
+BOB tries to access mailbox A
+BOB is being told mailbox A does not exist
+----
+
+== Decision
+
+We should provide an offline (meaning absence of user traffic via for example SMTP, IMAP or JMAP) webadmin task to solve mailbox object inconsistencies.
+
+This task will read `mailbox` table and adapt path registrations in `mailboxPathV2`:
+
+* Missing registrations will be added
+* Orphan registrations will be removed
+* Mismatch in content between the two tables will require merging the two mailboxes together.
+
+== Consequences
+
+As an administrator, if some of my users report the bugs mentioned above, I have a way to sanitize my Cassandra mailbox database.
+
+However, due to the two invariants mentioned above, we cannot identify a clear source of truth based on existing tables for the mailbox object.
+The task previously mentioned is subject to concurrency issues that might cancel legitimate concurrent user actions.
+
+Hence this task must be run offline (meaning absence of user traffic via for example SMTP, IMAP or JMAP).
+This can be achieved via reconfiguration (disabling the given protocols and restarting James) or via firewall rules.
+
+Due to all of those risks, a confirmation header `I-KNOW-WHAT-I-M-DOING` should be set to `ALL-SERVICES-ARE-OFFLINE` in order to prevent accidental calls.
+
+In the future, we should revisit the mailbox object data-model and restructure it, to identify a source of truth to base the inconsistency fixing task on.
+Event sourcing is a good candidate for this.
+
+== References
+
+* https://issues.apache.org/jira/browse/JAMES-3058[JAMES-3058 Webadmin task to solve Cassandra Mailbox inconsistencies]
+* https://github.com/linagora/james-project/pull/3110[Pull Request: mailbox-cassandra utility to solve Mailbox inconsistency]
+* https://github.com/linagora/james-project/pull/3130[Pull Request: JAMES-3058 Concurrency testing for fixing Cassandra mailbox inconsistencies]
+
+This https://github.com/linagora/james-project/pull/3130#discussion_r383349596[thread] provides significant discussions leading to this Architecture Decision Record
+
+* https://www.mail-archive.com/server-dev@james.apache.org/msg64432.html[Discussion on the mailing list]
diff --git a/docs/modules/development/pages/adr/0021-cassandra-acl-inconsistency.md.adoc b/docs/modules/development/pages/adr/0021-cassandra-acl-inconsistency.md.adoc
new file mode 100644
index 0000000..3411f73
--- /dev/null
+++ b/docs/modules/development/pages/adr/0021-cassandra-acl-inconsistency.md.adoc
@@ -0,0 +1,63 @@
+= 21. Cassandra ACL inconsistencies
+
+Date: 2020-02-27
+
+== Status
+
+Proposed
+
+== Context
+
+Mailboxes ACLs are denormalized in Cassandra in order to:
+
+* given a mailbox, list its ACL (enforcing rights for example)
+* discover which mailboxes are delegated to a given user (used to list mailboxes)
+
+Here is the table organisation:
+
+* `acl` stores the ACLs of a given mailbox
+* `UserMailboxACL` stores which mailboxes had been delegated to which user
+
+Failures during the denormalization process will lead to inconsistencies between the two tables.
+
+This can lead to the following user experience:
+
+----
+ALICE delegates her INBOX mailbox to BOB
+The denormalisation process fails
+ALICE's INBOX does not appear in BOB's mailbox list
+
+Given a delegated mailbox INBOX.delegated
+ALICE undoes the sharing of her INBOX.delegated mailbox
+The denormalisation process fails
+ALICE's INBOX.delegated mailbox still appears in BOB's mailbox list
+When BOB tries to select it, he is being denied
+----
+
+== Decision
+
+We can adopt a retry policy of the `UserMailboxACL` projection update as a mitigation strategy.
+
+Using `acl` table as a source of truth, we can rebuild the `UserMailboxACL` projection:
+
+* Iterating `acl` entries, we can rewrite entries in `UserMailboxACL`
+* Iterating `UserMailboxACL` we can remove entries not referenced in `acl`
+* Adding a delay and a re-check before the actual fix can decrease the occurrence of concurrency issues
+
+We will expose a webAdmin task for doing this.
+
+== Consequences
+
+User actions concurrent to the inconsistency fixing task could result in concurrency issues.
+New inconsistencies could be created.
+However, the table of truth would not be impacted, hence rerunning the inconsistency fixing task will eventually fix all issues.
+
+This task could be run safely online and can be scheduled on a recurring basis outside of peak traffic by an admin to ensure Cassandra ACL consistency.
+
+== References
+
+* https://github.com/linagora/james-project/pull/3125[Plan for fixing Cassandra ACL inconsistencies]
+* https://www.mail-archive.com/server-dev@james.apache.org/msg64432.html[General mailing list discussion about inconsistencies]
+* https://github.com/linagora/james-project/pull/3130[Pull Request: JAMES-3058 Concurrency testing for fixing Cassandra mailbox inconsistencies]
+
+The delay strategy to decrease concurrency issue occurrence is described here.
diff --git a/docs/modules/development/pages/adr/0022-cassandra-message-inconsistency.md.adoc b/docs/modules/development/pages/adr/0022-cassandra-message-inconsistency.md.adoc
new file mode 100644
index 0000000..f30cff8
--- /dev/null
+++ b/docs/modules/development/pages/adr/0022-cassandra-message-inconsistency.md.adoc
@@ -0,0 +1,89 @@
+= 22. Cassandra Message inconsistencies
+
+Date: 2020-02-27
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+Messages are denormalized in Cassandra in order to:
+
+* access them by their unique identifier (messageId), for example through the JMAP protocol
+* access them by their mailbox identifier and Unique IDentifier within that mailbox (mailboxId + uid), for example through the IMAP protocol
+
+Here is the table organisation:
+
+* `messageIdTable` Holds mailbox and flags for each message, lookup by mailbox ID + UID
+* `imapUidTable` Holds mailbox and flags for each message, lookup by message ID
+
+Failures during the denormalization process will lead to inconsistencies between the two tables.
+
+This can lead to the following user experience:
+
+----
+BOB receives a message
+The denormalization process fails
+BOB can read the message via JMAP
+BOB cannot read the message via IMAP
+
+BOB marks a message as SEEN
+The denormalization process fails
+The message is SEEN in JMAP
+The message is UNSEEN in IMAP
+----
+
+=== Current operations
+
+* Adding a message:
+ ** (CassandraMessageMapper) First reference the message in `messageIdTable` then in `imapUidTable`.
+ ** (CassandraMessageIdMapper) First reference the message in `imapUidTable` then in `messageIdTable`.
+* Deleting a message:
+ ** (CassandraMessageMapper) First delete the message in `imapUidTable` then in `messageIdTable`.
+ ** (CassandraMessageIdMapper) Read the message metadata using `imapUidTable`, then first delete the message in `imapUidTable` then in `messageIdTable`.
+* Copying a message:
+ ** (CassandraMessageMapper) Read the message first, then first reference the message in `messageIdTable` then in `imapUidTable`.
+* Moving a message:
+ ** (CassandraMessageMapper) Logically copy then delete.
+A failure in the chain might lead to a duplicated message (present in both the source and destination mailbox) as well as different views in IMAP/JMAP.
+ ** (CassandraMessageIdMapper) First reference the message in `imapUidTable` then in `messageIdTable`.
+* Updating a message flags:
+ ** (CassandraMessageMapper) First update conditionally the message in `imapUidTable` then in `messageIdTable`.
+ ** (CassandraMessageIdMapper) First update conditionally the message in `imapUidTable` then in `messageIdTable`.
+
+== Decision
+
+Adopt `imapUidTable` as the source of truth, because `messageId` allows tracking changes to messages across mailboxes upon copies and moves.
+Furthermore, that is the table on which conditional flag updates are performed.
+
+All writes will be performed to `imapUidTable` then performed on `messageIdTable` if successful.
+
+We thus need to modify the CassandraMessageMapper 'add' and 'copy' operations to first write to the source of truth (`imapUidTable`).
+
+We can adopt a retry policy of the `messageIdTable` projection update as a mitigation strategy.
+
+Using `imapUidTable` table as a source of truth, we can rebuild the `messageIdTable` projection:
+
+* Iterating `imapUidTable` entries, we can rewrite entries in `messageIdTable`
+* Iterating `messageIdTable` we can remove entries not referenced in `imapUidTable`
+* Adding a delay and a re-check before the actual fix can decrease the occurrence of concurrency issues
+
+We will expose a webAdmin task for doing this.
+
+== Consequences
+
+User actions concurrent to the inconsistency fixing task could result in concurrency issues.
+New inconsistencies could be created.
+However, the table of truth would not be impacted, hence rerunning the inconsistency fixing task will eventually fix all issues.
+
+This task could be run safely online and can be scheduled on a recurring basis outside of peak traffic by an admin to ensure Cassandra message consistency.
+
+== References
+
+* https://github.com/linagora/james-project/pull/3125[Plan for fixing Cassandra ACL inconsistencies]
+* https://www.mail-archive.com/server-dev@james.apache.org/msg64432.html[General mailing list discussion about inconsistencies]
+* https://github.com/linagora/james-project/pull/3130[Pull Request: JAMES-3058 Concurrency testing for fixing Cassandra mailbox inconsistencies]
+
+The delay strategy to decrease concurrency issue occurrence is described here.
diff --git a/docs/modules/development/pages/adr/0023-cassandra-mailbox-counters-inconsistencies.md.adoc b/docs/modules/development/pages/adr/0023-cassandra-mailbox-counters-inconsistencies.md.adoc
new file mode 100644
index 0000000..faff600
--- /dev/null
+++ b/docs/modules/development/pages/adr/0023-cassandra-mailbox-counters-inconsistencies.md.adoc
@@ -0,0 +1,58 @@
+= 23. Cassandra Mailbox Counters inconsistencies
+
+Date: 2020-03-07
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+Cassandra maintains a per-mailbox projection for message count and unseen message count.
+
+As with any projection, it can go out of sync, leading to inconsistent results being returned to the client, which is not acceptable.
+
+Here is the table organisation:
+
+* `mailbox` Lists the mailboxes
+* `messageIdTable` Holds mailbox and flags for each message, lookup by mailbox ID + UID
+* `imapUidTable` Holds mailbox and flags for each message, lookup by message ID and serves as a source of truth
+* `mailboxCounters` Holds messages count and unseen message count for each mailbox.
+
+Failures during the denormalization process will lead to inconsistencies between the counts and the content of `imapUidTable`.
+
+This can lead to the following user experience:
+
+* Invalid message count can be reported in the Mail User Agent (IMAP & JMAP)
+* Invalid message unseen count can be reported in the Mail User Agent (IMAP & JMAP)
+
+== Decision
+
+Implement a webadmin exposed task to recompute mailbox counters.
+
+This endpoint will:
+
+* List existing mailboxes
+* List their messages using `messageIdTable`
+* Check them against their source of truth `imapUidTable`
+* Compute mailbox counter values
+* And reset the value of the counter if needed in `mailboxCounters`
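
The recomputation step itself reduces to counting messages and unseen messages from the mailbox's message listing. A minimal sketch, with a plain list of seen flags standing in for the checked message listing:

```java
import java.util.List;

// Sketch of recomputing a mailbox's counters from its message listing:
// one seen flag per message, checked against the source of truth beforehand.
public class MailboxCountersRecomputation {
    // Returns {message count, unseen count}
    public static long[] recompute(List<Boolean> seenFlags) {
        long count = seenFlags.size();
        long unseen = seenFlags.stream().filter(seen -> !seen).count();
        return new long[] {count, unseen};
    }
}
```

The resulting values would then overwrite the `mailboxCounters` row for that mailbox when they differ from the stored ones.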
+
+== Consequences
+
+This endpoint is subject to data races in the face of concurrent operations.
+Concurrent increments & decrements will be ignored during a single mailbox processing.
+However, the source of truth is unaffected, hence, upon rerunning the task, the result will be eventually correct.
+To be noted that Cassandra counters can't be reset in an atomic manner anyway.
+
+We rely on the "listing messages by mailbox" projection (that we recheck).
+Missing entries in there will be ignored until the given projection is healed (currently unsupported).
+
+We furthermore can piggy-back a partial check of the message denormalization described in xref:0022-cassandra-message-inconsistency.adoc[this ADR] upon counter recomputation (partial because we cannot detect missing entries in the "list messages in mailbox" denormalization table).
+
+== References
+
+* https://github.com/linagora/james-project/pull/3125[Plan for fixing Cassandra ACL inconsistencies]
+* https://www.mail-archive.com/server-dev@james.apache.org/msg64432.html[General mailing list discussion about inconsistencies]
+* https://issues.apache.org/jira/browse/JAMES-3105[JAMES-3105 Related JIRA]
+* https://github.com/linagora/james-project/pull/3185[Pull Request: JAMES-3105 Corrective task for fixing mailbox counters]
diff --git a/docs/modules/development/pages/adr/0024-polyglot-strategy.md.adoc b/docs/modules/development/pages/adr/0024-polyglot-strategy.md.adoc
new file mode 100644
index 0000000..f00ed6f
--- /dev/null
+++ b/docs/modules/development/pages/adr/0024-polyglot-strategy.md.adoc
@@ -0,0 +1,179 @@
+= 24. Polyglot codebase
+
+Date: 2020-03-17
+
+== Status
+
+Proposed
+
+== Context & Problem Statement
+
+James has been written in Java for a very long time.
+In recent years, Java modernized a lot after a decade of slow progress.
+
+However, in the meantime, most software relying on the JVM started supporting alternative JVM languages to keep being relevant.
+
+It includes Groovy, Clojure, Scala and more recently Kotlin, to name a few.
+
+Not being open to those alternative languages can be a problem for James adoption.
+
+== Decision drivers
+
+Nowadays, libraries and frameworks targeting the JVM are expected to support usage of one or several of these alternative languages.
+
+James, being not only a mail server but also a development framework, needs to meet those expectations.
+
+At the same time, more and more developers and languages adopt Functional Programming (FP) idioms to solve their problems.
+
+== Considered options
+
+=== Strategies
+
+. Let the users figure out how to make polyglot setups
+. Document the usage of polyglot mailets for some popular languages
+. Document the usage of polyglot components for some popular languages
+. Actually implement some mailets in some popular languages
+. Actually implement some components in some popular languages
+
+=== Languages
+
+[upperroman]
+. Clojure
+. Groovy
+. Kotlin
+. Scala
+
+== Decision
+
+We decide for strategy options 4 and 5, and for language option IV (Scala).
+
+That means we need to write some mailets in Scala and demonstrate how it's done and then used in a running server.
+
+It also means writing and/or refactoring some server components in Scala, starting where it's the most relevant.
+
+=== Positive Consequences
+
+* Modernize parts of James code
+* Leverage Scala richer FP ecosystem and language to overcome Java limitations on that topic
+* Should attract people that would not like Java
+
+=== Negative Consequences
+
+* Adds even more knowledge requirements to contribute to James
+* Scala build time is longer than Java build time
+
+== Pros and Cons of the Options
+
+=== Option 1: Let the users figure out how to make polyglot setups
+
+Pros:
+
+* We don't have anything new to do
+
+Cons:
+
+* It's boring, we like new challenges
+* Java is declining despite language modernization, which means that in the long term fewer and fewer people will contribute to James
+
+=== Option 2: Document the usage of polyglot mailets for some popular languages
+
+Pros:
+
+* It's not a lot of work and yet it opens James to alternatives and can attract people from outside the Java developer community
+
+Cons:
+
+* Documentation without implementation often gets outdated when things move forward
+* We don't really gain knowledge on the polyglot matters as a community and won't be able to help users much
+
+=== Option 3: Document the usage of polyglot components for some popular languages
+
+Pros:
+
+* It opens James to alternatives and can attract people from outside the Java developer community
+
+Cons:
+
+* Documentation without implementation often gets outdated when things move forward
+* For such a complex subject, it's probably harder to do than actually implementing a component in another language
+* We don't really gain knowledge on the polyglot matters as a community and won't be able to help users much
+
+=== Option 4: Actually implement some mailets in some popular languages
+
+Pros:
+
+* It's probably not a lot of work, a mailet is just a simple class, probably easy to do in most JVM languages
+* It makes us learn how it works and maybe will help us go further than the basic polyglot experience by doing small enhancements to the codebase
+* We can document the process and illustrate with some actual code
+* It opens James to alternatives and can attract people from outside the Java developer community
+
+Cons:
+
+* It can have a negative impact on build time and dependency downloads
+
+=== Option 5: Actually implement some components in some popular languages
+
+Pros:
+
+* Leverage a modern language for some complex components
+* It makes us learn how it works and maybe will help us go further than the basic polyglot experience by doing small enhancements to the codebase
+* We can document the process and illustrate with some actual code
+* It opens James to alternatives and can attract people from outside the Java developer community
+
+Cons:
+
+* It makes the codebase more complex, requiring knowledge in another language
+* It can have a negative impact on build time and dependency downloads
+
+=== Option I: Clojure
+
+Pros:
+
+* Functional Language
+
+Cons:
+
+* Weak popularity
+* No prior experience among current active committers
+* Not statically typed, hence less likely to fit a project of this size
+
+=== Option II: Groovy
+
+Pros:
+
+* More advanced than current Java on most topics
+
+Cons:
+
+* No prior experience among current active committers
+* Not very FP
+* Has been replaced by Kotlin in the JVM community in recent years
+
+=== Option III: Kotlin
+
+Pros:
+
+* Great Intellij support
+* Most of the good parts of Scala
+* FP-ish with Arrow
+* Coroutines for handling high-performance IO
+
+Cons:
+
+* No prior experience among current active committers
+* Lack of some FP constructs like proper Pattern Matching, persistent collections
+* Despite the progress made by Arrow, the Kotlin community mostly aims at writing "better Java"
+
+=== Option IV: Scala
+
+Pros:
+
+* Rich FP community and ecosystem
+* Existing knowledge among current active committers
+
+Cons:
+
+* Needs work to master
+* Can be slow to build
+* 3.0 will probably require code changes
diff --git a/docs/modules/development/pages/adr/0025-cassandra-blob-store-cache.md.adoc b/docs/modules/development/pages/adr/0025-cassandra-blob-store-cache.md.adoc
new file mode 100644
index 0000000..ef8317c
--- /dev/null
+++ b/docs/modules/development/pages/adr/0025-cassandra-blob-store-cache.md.adoc
@@ -0,0 +1,69 @@
+= 25. Cassandra Blob Store Cache
+
+Date: 2020-04-03
+
+== Status
+
+Proposed
+
+Supersedes xref:0014-blobstore-storage-policies.adoc[14. Add storage policies for BlobStore]
+
+== Context
+
+James exposes a simple BlobStore API for storing raw data.
+However such raw data often vary in size and access patterns.
+
+As an example:
+
+* Mailbox message headers are expected to be small and frequently accessed
+* Mailbox message bodies are expected to have sizes ranging from small to big but are infrequently accessed
+* DeletedMessageVault message headers are expected to be small and infrequently accessed
+
+The access pattern of some of these kinds of blobs does not fit Object Storage characteristics: it is good at storing big blobs, but it induces high latencies for reading small blobs.
+We observe latencies of around 50-100ms while Cassandra latency is around 4ms.
+
+This slows down some operations (for instance IMAP FETCH headers, or listing JMAP messages).
+
+== Decision
+
+Implement a write through cache to have better read latency for smaller objects.
+
+Such a cache needs to be distributed in order to be more efficient.
+
+Given that we don't want to introduce new technologies, we will implement it using Cassandra.
+
+The cache should be implemented as a key-value table on a dedicated 'cache' keyspace, with a replication factor of 1,  and be queried with a consistency level of ONE.
+
+We will leverage a configurable TTL as an eviction policy.
+The cache will be populated upon writes and upon missed reads, if the blob size is below a configurable threshold.
+We will use the TimeWindow compaction strategy.
+
+Failure to read the cache, or cache miss will result in a read in the object storage.
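+
+The write-through and read paths can be sketched as follows (an in-memory sketch under assumed names; the real implementation uses a Cassandra table for the cache and Swift/S3 for the object storage):
+
```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the cache decision logic. CachedBlobStoreSketch
// and SIZE_THRESHOLD are illustrative names, not the James API.
public class CachedBlobStoreSketch {
    static final int SIZE_THRESHOLD = 16 * 1024; // cache only small blobs

    private final Map<String, byte[]> cache = new HashMap<>();         // stands in for the 'cache' keyspace
    private final Map<String, byte[]> objectStorage = new HashMap<>(); // stands in for Swift/S3

    void save(String blobId, byte[] data) {
        objectStorage.put(blobId, data);     // object storage stays the source of truth
        if (data.length <= SIZE_THRESHOLD) {
            cache.put(blobId, data);         // write-through for small blobs
        }
    }

    byte[] read(String blobId) {
        byte[] cached = cache.get(blobId);
        if (cached != null) {
            return cached;                   // fast path (consistency ONE read)
        }
        byte[] data = objectStorage.get(blobId); // cache miss or cache failure
        if (data != null && data.length <= SIZE_THRESHOLD) {
            cache.put(blobId, data);         // populate upon missed read
        }
        return data;
    }
}
```
+In the real implementation a cache read failure is also swallowed and falls back to the object storage, and entries expire via the table's TTL rather than explicit eviction.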
+
+== Consequences
+
+Metadata queries are expected not to query the object storage anymore.
+
+https://github.com/linagora/james-project/pull/3031#issuecomment-572865478[Performance tests] proved such strategies to be highly effective.
+We expect comparable performance improvements compared to an un-cached ObjectStorage blob store.
+
+HybridBlobStore should be removed.
+
+== Alternatives
+
+xref:0014-blobstore-storage-policies.adoc[14. Add storage policies for BlobStore] proposes to use the CassandraBlobStore to mimic a cache.
+
+This solution needed further work as we decided to add an option to write all blobs to the object storage in order:
+
+* To get a centralized source of truth
+* To be able to instantly roll back Hybrid blob store adoption
+
+See https://github.com/linagora/james-project/pull/3162[this pull request]
+
+With such a proposal there is no eviction policy.
+Also, the storage is done on the main keyspace with a high replication factor, and QUORUM consistency level (high cost).
+
+To be noted: as cached entries are small, we can assume they are small enough to fit in a single Cassandra row.
+This is more optimized than the large-blob handling through blobParts that CassandraBlobStore performs.
diff --git a/docs/modules/development/pages/adr/0026-removing-configured-additional-mailboxListeners.md.adoc b/docs/modules/development/pages/adr/0026-removing-configured-additional-mailboxListeners.md.adoc
new file mode 100644
index 0000000..aa48fc0
--- /dev/null
+++ b/docs/modules/development/pages/adr/0026-removing-configured-additional-mailboxListeners.md.adoc
@@ -0,0 +1,72 @@
+= 26. Removing a configured additional MailboxListener
+
+Date: 2020-04-03
+
+== Status
+
+Accepted (lazy consensus)
+
+Superseded by xref:0035-distributed-listeners-configuration.adoc[34. Distributed Mailbox Listener Configuration]
+
+== Context
+
+James enables a user to register additional mailbox listeners.
+
+The distributed James server handles mailbox event processing (mailboxListener execution) using a RabbitMQ work-queue per listener.
+
+The distributed James server then declares a queue upon start for each one of these user-registered listeners, which it binds to the main event exchange.
+
+More information about this component, and its distributed, RabbitMQ based implementation, can be found in  xref:0037-eventbus.adoc[ADR 0036].
+
+If the user unconfigures the listener, the queue and the binding are still present but not consumed.
+This results in  unbounded queue growth eventually causing RabbitMQ resource exhaustion and failure.
+
+== Vocabulary
+
+A *required group* is a group configured within James additional mailbox listeners or statically bound via Guice.
+We should have a queue for that mailbox listener bound to the main exchange.
+
+A *registered group* is a group whose queue exists in RabbitMQ and is bound to the exchange, independently of its James  usage.
+If it is required, a consumer will consume the queue.
+Otherwise the queue might grow unbounded.
+
+== Decision
+
+We need a clear consensus and auditability across the James cluster about *required groups* (and their changes).
+Thus, event sourcing will maintain an aggregate tracking *required groups* (and their changes).
+Audit will be enabled by adding host and date information upon changes.
+A subscriber will perform changes (binds and unbinds) in registered groups following the changes of the aggregate.
+
+Event sourcing is desirable as it allows:
+
+* Detecting previously removed MailboxListener upon start
+* Audit of unbind decisions
+* Enables writing more complex business rules in the future
+
+The event sourcing system will have the following command:
+
+* *RequireGroups* the groups that the *EventBus* is starting with.
+
+And the following events:
+
+* *RequiredGroupAdded* a group is added to the required groups.
+* *RequiredGroupRemoved* a group is removed from the required groups.
+
+Upon start the aggregate will be updated if needed and bindings will be adapted accordingly.
+
+Note that upon failure, registered groups will diverge from required groups.
+We will add a health check to diagnose  such issues.
+Eventually, we will expose a webadmin task to reset registered groups to required groups.
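+
+Resetting registered groups to required groups boils down to two set differences (a minimal sketch with hypothetical names; the real subscriber reacts to aggregate events rather than comparing sets directly):
+
```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: compute which RabbitMQ bindings to create and
// which to remove so that registered groups match required groups.
public class GroupReconciliation {
    // Groups to bind: required but not yet registered in RabbitMQ.
    static Set<String> toBind(Set<String> required, Set<String> registered) {
        Set<String> result = new HashSet<>(required);
        result.removeAll(registered);
        return result;
    }

    // Groups to unbind: registered in RabbitMQ but no longer required.
    // Per the decision above, their queues are unbound but not deleted.
    static Set<String> toUnbind(Set<String> required, Set<String> registered) {
        Set<String> result = new HashSet<>(registered);
        result.removeAll(required);
        return result;
    }
}
```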
+
+The queues should not be deleted to prevent message loss.
+
+Given a James topology with a non-uniform configuration, the effective RabbitMQ routing will be the one of the latest started James server.
+
+== Alternatives
+
+We could also consider adding a webadmin endpoint to sanitize eventBus bindings, allowing more predictability than the above solution but it would require admin intervention.
+
+== References
+
+* https://github.com/linagora/james-project/pull/3280[Discussion] around the overall design proposed here.
diff --git a/docs/modules/development/pages/adr/0027-eventBus-error-handling-upon-dispatch.md.adoc b/docs/modules/development/pages/adr/0027-eventBus-error-handling-upon-dispatch.md.adoc
new file mode 100644
index 0000000..64b89f2
--- /dev/null
+++ b/docs/modules/development/pages/adr/0027-eventBus-error-handling-upon-dispatch.md.adoc
@@ -0,0 +1,35 @@
+= 27. EventBus error handling upon dispatch
+
+Date: 2020-04-03
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+James allows asynchronous processing for mailbox events via MailboxListener.
+This processing is abstracted by the  EventBus.
+
+If the processing of an event via a mailbox listener fails, it is retried until it succeeds.
+If a maxRetries parameter is exceeded, the event is stored in deadLetter and no further processing is attempted.
+
+The administrator can then look at the content of deadLetter to diagnose processing issues and schedule a reDelivery in  order to retry their processing via webAdmin APIs.
+
+However no such capabilities are supported upon dispatching the event on the eventbus.
+A failed dispatch will result in message loss.
+
+More information about this component can be found in xref:0037-eventbus.adoc[ADR 0036].
+
+== Decision
+
+Upon dispatch failure, the eventBus should save events in dead letter using a dedicated group.
+
+By reprocessing this group, an admin can re-trigger the dispatch of these events.
+
+In order to ensure auto-healing, James will periodically check that the corresponding group in deadLetter is empty.
+If not, a re-dispatch of these events will be attempted.
+
+== Consequences
+
+In the distributed James Guice project, an administrator has a way to restore eventual consistency upon RabbitMQ failure.
diff --git a/docs/modules/development/pages/adr/0028-Recompute-mailbox-quotas.md.adoc b/docs/modules/development/pages/adr/0028-Recompute-mailbox-quotas.md.adoc
new file mode 100644
index 0000000..75cdd6b
--- /dev/null
+++ b/docs/modules/development/pages/adr/0028-Recompute-mailbox-quotas.md.adoc
@@ -0,0 +1,46 @@
+= 28. Recompute mailbox quotas
+
+Date: 2020-04-03
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+The JMAP custom quota extension, as well as IMAP https://tools.ietf.org/html/rfc2087[RFC-2087], enables a user to monitor the amount of space and message count they are allowed to use, and that they are effectively using.
+
+To track the quota values a user is effectively using, James relies on the  link:../site/markdown/server/manage-guice-distributed-james.md#mailbox-event-bus[eventBus] to increment a Cassandra counter corresponding to this user.
+
+However, upon Cassandra failure, this value can be incorrect, hence the need of correcting it.
+
+== Data model details
+
+Table: imapUidTable: Holds mailbox and flags for each message, lookup by message ID
+
+Table: messageV2: Holds message metadata, independently of any mailboxes.
+Content of messages is stored in the `blobs` and `blobparts` tables.
+
+Table: currentQuota: Holds per quota-root current values.
+Quota-roots define groups of mailboxes which share quota limitations.
+
+Operation:
+
+* Quota updates are done asynchronously (event bus + listener) for successful mailbox operations.
+ ** If the quota update is not applied, then we are inconsistent
+ ** EventBus processing is retried upon errors; counters being non-idempotent, this can result in inconsistent quotas
+
+== Decision
+
+We will implement a generic corrective task exposed via webadmin.
+
+This task can reuse the `CurrentQuotaCalculator` and call it for each and every quotaRoot of each user.
+
+This way, non-Cassandra implementations will also benefit from this task.
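+
+What such a recomputation does can be sketched as follows (simplified, hypothetical types; the real code path is `CurrentQuotaCalculator` applied per quotaRoot per user):
+
```java
import java.util.List;

// Hypothetical sketch: recompute a quota-root's current values from
// message metadata, then reset the persisted currentQuota if it differs.
public class RecomputeQuota {
    record MessageMetadata(long sizeInBytes) {}
    record CurrentQuota(long count, long sizeInBytes) {}

    // Recompute count and total size from the messages of a quota-root.
    static CurrentQuota recompute(List<MessageMetadata> messages) {
        long count = messages.size();
        long size = messages.stream()
            .mapToLong(MessageMetadata::sizeInBytes)
            .sum();
        return new CurrentQuota(count, size);
    }
}
```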
+
+== Consequences
+
+This task is not concurrent-safe.
+Concurrent operations will result in an invalid quota to be persisted.
+
+However, as the source of truth is not altered, re-running this task will eventually return the correct result.
diff --git a/docs/modules/development/pages/adr/0029-Cassandra-mailbox-deletion-cleanup.md.adoc b/docs/modules/development/pages/adr/0029-Cassandra-mailbox-deletion-cleanup.md.adoc
new file mode 100644
index 0000000..e89ce54
--- /dev/null
+++ b/docs/modules/development/pages/adr/0029-Cassandra-mailbox-deletion-cleanup.md.adoc
@@ -0,0 +1,46 @@
+= 29. Cassandra mailbox deletion cleanup
+
+Date: 2020-04-12
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+Cassandra is used within the distributed James product to hold messages and mailboxes metadata.
+
+Cassandra holds the following tables:
+
+* mailboxPathV2 + mailbox allowing to retrieve mailbox information
+* acl + UserMailboxACL hold denormalized information
+* messageIdTable & imapUidTable allow to retrieve mailbox context information
+* messageV2 table holds message metadata
+* attachmentV2 holds attachments for messages
+* References to these attachments are contained within the attachmentOwner and attachmentMessageId tables
+
+Currently, the deletion only deletes the first level of metadata.
+Lower-level metadata stays unreachable.
+The data looks deleted but references are actually still present.
+
+Concretely:
+
+* Upon mailbox deletion, only mailboxPathV2 & mailbox content is deleted.
+messageIdTable, imapUidTable, messageV2,   attachmentV2 & attachmentMessageId metadata are left undeleted.
+* Upon mailbox deletion, acl + UserMailboxACL are not deleted.
+* Upon message deletion, only messageIdTable & imapUidTable content are deleted.
+messageV2, attachmentV2 &   attachmentMessageId metadata are left undeleted.
+
+This jeopardizes efforts to regain disk space and privacy, for example through blobStore garbage collection.
+
+== Decision
+
+We need to clean up Cassandra metadata.
+Dangling metadata can still be reached after the delete operation has been carried out.
+We need to delete the lower levels first so that upon failures undeleted metadata can still be reached.
+
+This cleanup is not needed for strict correctness from a MailboxManager point of view thus it could be carried out  asynchronously, via mailbox listeners so that it can be retried.
+
+== Consequences
+
+Mailbox listener failures lead to the eventBus retrying their execution; we need to ensure the result of the deletion is idempotent.
diff --git a/docs/modules/development/pages/adr/0030-separate-attachment-content-and-metadata.md.adoc b/docs/modules/development/pages/adr/0030-separate-attachment-content-and-metadata.md.adoc
new file mode 100644
index 0000000..3bbdb89
--- /dev/null
+++ b/docs/modules/development/pages/adr/0030-separate-attachment-content-and-metadata.md.adoc
@@ -0,0 +1,94 @@
+= 30. Separate attachment content and metadata
+
+Date: 2020-04-13
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+Some mailbox implementations of James store already parsed attachments for faster retrieval.
+
+These attachment storage capabilities are required for two features:
+
+* JMAP attachment download
+* JMAP message search "attachment content" criteria
+
+Only Memory and Cassandra backends can be relied upon as a JMAP backend.
+
+Other protocols (e.g. IMAP) rely on dynamic EML parsing to expose message subparts.
+
+Here are the POJOs related to these attachments:
+
+* *Attachment* : holds an attachmentId, the attachment content, as well as the content type
+* *MessageAttachment* : composes an attachment with its disposition within a message (cid, inline and name)
+* *Message* exposes its list of MessageAttachment when it is read with FetchType Full.
+* *Blob* represents some downloadable content, and can be either an attachment or a message.
+Blob has a byte array   payload too.
+
+The following classes work with the aforementioned POJOs:
+
+* *AttachmentMapper* and *AttachmentManager* are responsible for storing and retrieving an attachment content.
+* *BlobManager* is used by JMAP to allow blob downloads.
+* Mailbox search exposes attachment content related criteria.
+These criteria are used by the JMAP protocol.
+
+This organisation causes attachment content to be loaded every time a message is fully read (which happens for instance when you open a message using JMAP) despite the fact that it is not needed: as attachments are downloadable through a separate JMAP endpoint, their content is not attached to the JMAP message JSON.
+
+Also, since the content is loaded "at once", we allocate memory space to store the whole attachment, which is sub-optimal.
+We want to keep the consumed memory low per-message because a given server should be able to handle a high number of messages  at a given time.
+
+Note that the JPA and maildir mailbox implementations do not support attachment storage.
+To retrieve attachments of a message, these implementations parse the messages to extract their attachments.
+
+Cassandra mailbox prior to schema version 4 stored attachments and their metadata in the same table, but from version 5 it relies on the blobStore to store the attachment content.
+
+== Decision
+
+Enforce the Cassandra schema version to be 5 starting from James release 3.5.0.
+This allows dropping attachment management code for versions prior to 5.
+
+We will re-organize the attachment POJOs:
+
+* *Attachment* should hold an attachmentId, a content type, and a size.
+It will no longer hold the content.
+The   content can be loaded from its *AttachmentId* via the *AttachmentLoader* API that the *AttachmentManager*   implements.
+* *MessageAttachment* : composes an attachment with its disposition within a message (cid, inline and name)
+* *Blob* would no longer hold the content as a byte array but rather a content retriever (`Supplier<InputStream>`)
+* *ParsedAttachment* is the direct result of attachment parsing, and composes a *MessageAttachment* and the   corresponding content as byte array.
+This class is only relied upon when saving a message in mailbox.
+This is used as   an output of `MessageParser`.
+
+Some adjustments are needed on class working with attachment:
+
+* *AttachmentMapper* and *AttachmentManager* need to allow from an attachmentId to retrieve the attachment content  as an `InputStream`.
+This is done through a separate `AttachmentLoader` interface.
+* *AttachmentMapper* and *AttachmentManager* need the Attachment and its content to persist an attachment
+* *MessageManager* then needs to return attachment metadata as a result of Append operation.
+* *InMemoryAttachmentMapper* needs to store attachment content separately.
+* *MessageStorer* will take care of storing a message on the behalf of `MessageManager`.
+This enables to determine if   attachment should be parsed or not on an implementation aware fashion, saving attachment parsing upon writes for JPA   and Maildir.
+
+Maildir and JPA no longer support attachment content loading.
+Only the JMAP protocol requires attachment content loading, which is not supported on top of these technologies.
+
+Mailbox search attachment content criteria will be supported only on implementation supporting attachment storage.
+
+== Consequences
+
+Users running a Cassandra schema version prior to version 5 will have to go through James release 3.5.0 to upgrade to version 5 or later before proceeding with their update.
+
+We noticed performance enhancement when using IMAP FETCH and JMAP GetMessages.
+Running a gatling test suite exercising  JMAP getMessages on a dataset containing attachments leads to the following observations:
+
+* Overall better average performance for all JMAP queries (10% global p50 improvement)
+* Sharp decrease in tail latency of getMessages (40 times faster)
+
+We also expect improvements in James memory allocation.
+
+== References
+
+* https://github.com/linagora/james-project/pull/3061[Contribution on this topic].
+Also contains benchmark for this   proposal.
+* https://issues.apache.org/jira/browse/JAMES-2997[JIRA]
diff --git a/docs/modules/development/pages/adr/0031-distributed-mail-queue.md.adoc b/docs/modules/development/pages/adr/0031-distributed-mail-queue.md.adoc
new file mode 100644
index 0000000..62050e9
--- /dev/null
+++ b/docs/modules/development/pages/adr/0031-distributed-mail-queue.md.adoc
@@ -0,0 +1,122 @@
+= 31. Distributed Mail Queue
+
+Date: 2020-04-13
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+MailQueue is a central component of SMTP infrastructure allowing asynchronous mail processing.
+This enables a short  SMTP reply time despite a potentially longer mail processing time.
+It also works as a buffer during SMTP peak workload to not overload a server.
+
+Furthermore, when used as a Mail Exchange server (MX), the ability to add delays to be observed before dequeuing elements allows, among others:
+
+* Delaying retries upon MX delivery failure to a remote site.
+* Throttling, which could be helpful for not being considered a spammer.
+
+A mailqueue also enables advanced administration operations like traffic review, discarding emails, resetting wait  delays, purging the queue, etc.
+
+The Spring implementation and non-distributed implementations rely on an embedded ActiveMQ to implement the MailQueue.
+Emails are stored in a local file system.
+An administrator wishing to administrate the mailQueue will thus need to interact with all of their James servers, which is not friendly in a distributed setup.
+
+Distributed James relies on the following third party software (among others):
+
+* *RabbitMQ* for messaging.
+It is good at holding a queue; however, some advanced administrative operations can't be implemented with this component alone.
+This is the case for `browse`, `getSize` and `arbitrary mail removal`.
+* *Cassandra* is the metadata database.
+Due to *tombstones* being used for deletes, implementing a queue on top of it is a well-known anti-pattern.
+* *ObjectStorage* (Swift or S3) holds byte content.
+
+== Decision
+
+Distributed James should ship a distributed MailQueue composing the following software with the following responsibilities:
+
+* *RabbitMQ* for messaging.
+A rabbitMQ consumer will trigger dequeue operations.
+* A time series projection of the queue content (a time-ordered list of mail metadata) will be maintained in *Cassandra* (see later).
+Time series avoid the aforementioned tombstone anti-pattern, and no polling is performed on this projection.
+* *ObjectStorage* (Swift or S3) holds large byte content.
+This avoids overwhelming other software which does not scale as well in terms of Input/Output operations per second.
+
+Here are details of the tables composing Cassandra MailQueue View data-model:
+
+* *enqueuedMailsV3* holds the time series.
+The primary key holds the queue name, the (rounded) time of enqueue designating a slice, and a bucketCount.
+Slicing enables listing a large amount of items from a given point in time, in a fashion that is not achievable with a classic partition approach.
+The bucketCount enables sharding and avoids all writes  at a given point in time to go to the same Cassandra partition.
+The clustering key is composed of an enqueueId - a  unique identifier.
+The content holds the metadata of the email.
+This table enables loading, from a starting date, all of the emails that have ever been in the mailQueue.
+Its content is never deleted.
+* *deletedMailsV2* tells whether a mail stored in _enqueuedMailsV3_ had been deleted or not.
+The queueName and  enqueueId are used as primary key.
+This table is updated upon dequeue and deletes.
+This table is queried upon dequeue  to filter out deleted/purged items.
+* *browseStart* stores the latest known point in time before which all emails had been deleted/dequeued.
+It enables skipping most deleted items upon browsing/deleting queue content.
+Its update is probability-based and asynchronously piggybacked on dequeue.
+
+Here are the main mail operation sequences:
+
+* Upon *enqueue* mail content is stored in the _object storage_, an entry is added in _enqueuedMailsV3_ and a message   is fired on _rabbitMQ_.
+* *dequeue* is triggered by a rabbitMQ message to be received.
+_deletedMailsV2_ is queried to know if the message had already been deleted.
+If not, the mail content is retrieved from the _object storage_, then an entry is added in _deletedMailsV2_ to record that the email has been dequeued.
+A dequeue has a random probability to trigger a browse start update.
+If so, from current browse start, _enqueuedMailsV3_ content is iterated, and checked against _deletedMailsV2_ until the first non deleted / dequeued email is found.
+This point becomes the new browse start.
+BrowseStart can never  point after the start of the current slice.
+A grace period upon browse start update is left to tolerate clock skew.
+Update of the browse start is done randomly, as it is a simple way to avoid synchronisation in a distributed system: we ensure liveness, while unneeded browseStart updates simply waste a few resources.
+* Upon *browse*, _enqueuedMailsV3_ content is iterated, and checked against _deletedMailsV2_, starting from the  current browse start.
+* Upon *delete/purge*, _enqueuedMailsV3_ content is iterated, and checked against _deletedMailsV2_.
+Mails matching  the condition are marked as deleted in _enqueuedMailsV3_.
+* Upon *getSize*, we perform a browse and count the returned elements.
+
+The distributed mail queue requires a fine-tuned configuration, which mostly depends on the count of Cassandra servers and on the mailQueue throughput:
+
+* *sliceWindow* is the time period of a slice.
+All the elements of *enqueuedMailsV3* sharing the same slice are  retrieved at once.
+The bigger it is, the more elements are going to be read at once and the less frequent browse start updates will be.
+Lower values might result in many almost-empty slices to be read, generating higher read load.
+We recommend choosing *sliceWindow* from the user's maximum throughput so that approximately 10,000 emails are contained in a slice.
+Only values dividing the current _sliceWindow_ are allowed as new values (otherwise previous slices might not be found).
+* *bucketCount* enables spreading the writes in your Cassandra cluster using a bucketing strategy.
+Low values will lead to workload not being spread evenly; higher values might result in unneeded reads upon browse.
+The count of Cassandra  servers should be a good starting value.
+Only increasing the count of buckets is supported as a configuration update, as decreasing the bucket count might result in some buckets being lost.
+* *updateBrowseStartPace* governs the probability of updating browseStart upon dequeue/deletes.
+We recommend choosing  a value guarantying a reasonable probability of updating the browse start every few slices.
+Too big values will lead to unneeded updates of not-yet-finished slices.
+Too low values will end up in a more expensive browseStart update and browse iterating through slices with all their content deleted.
+This value can be changed freely.
+
+We rely on eventSourcing to validate the mailQueue configuration changes upon James start following the aforementioned rules.
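+
+Slice and bucket assignment can be sketched as follows (illustrative names and values; the actual key layout lives in the James Cassandra mail queue view):
+
```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of slice and bucket assignment for enqueuedMailsV3.
// Method names and constants are illustrative, not the actual James API.
public class QueueSlicing {
    static final Duration SLICE_WINDOW = Duration.ofHours(1); // configurable sliceWindow
    static final int BUCKET_COUNT = 4;                        // configurable bucketCount

    // Round the enqueue time down to the start of its slice window,
    // so all mails of a window share the same slice key.
    static Instant sliceOf(Instant enqueueTime) {
        long windowMillis = SLICE_WINDOW.toMillis();
        long rounded = (enqueueTime.toEpochMilli() / windowMillis) * windowMillis;
        return Instant.ofEpochMilli(rounded);
    }

    // Spread concurrent writes across buckets so that a single slice
    // does not hammer one Cassandra partition.
    static int bucketOf(String enqueueId) {
        return Math.floorMod(enqueueId.hashCode(), BUCKET_COUNT);
    }

    public static void main(String[] args) {
        Instant t = Instant.parse("2020-04-13T10:42:31Z");
        System.out.println(sliceOf(t)); // 2020-04-13T10:00:00Z
        System.out.println(bucketOf("mail-1")); // deterministic bucket in [0, BUCKET_COUNT)
    }
}
```
+Browsing then iterates slices from the browse start, reading each slice's buckets and filtering against _deletedMailsV2_.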
+
+== Limitations
+
+Delays are not supported.
+This mail queue implementation is thus not suited for a Mail Exchange (MX) implementation.
+The https://issues.apache.org/jira/browse/JAMES-2896[following proposal] could be a solution to support delays.
+
+*enqueuedMailsV3* and *deletedMailsV2* are never cleaned up and the corresponding blobs are always referenced.
+This is not ideal from both a privacy and a storage cost point of view.
+
+The *getSize* operation is sub-optimal and thus inefficient.
+Combined with mail queue size metrics being periodically reported by all James servers, this can, as throughput increases, lead to a Cassandra overload.
+A configuration parameter allows disabling mail queue size reporting as a temporary solution.
+Some alternatives have been presented, like https://github.com/linagora/james-project/pull/2565[an eventually consistent per-slice counters approach].
+Another proposed solution is https://github.com/linagora/james-project/pull/2325[to rely on the RabbitMQ management API to retrieve the mail queue size]; however, by design it cannot take purge/delete operations into account.
+Read https://issues.apache.org/jira/browse/JAMES-2733[the corresponding JIRA].
+
+== Consequences
+
+The distributed mail queue allows better spreading of the mail processing workload.
+It enables centralized mailQueue management for all James servers.
+
+Yet some additional work is required to use it in a Mail Exchange scenario.
diff --git a/docs/modules/development/pages/adr/0032-distributed-mail-queue-cleanup.md.adoc b/docs/modules/development/pages/adr/0032-distributed-mail-queue-cleanup.md.adoc
new file mode 100644
index 0000000..cc8e611
--- /dev/null
+++ b/docs/modules/development/pages/adr/0032-distributed-mail-queue-cleanup.md.adoc
@@ -0,0 +1,50 @@
+= 32. Distributed Mail Queue Cleanup
+
+Date: 2020-04-13
+
+== Status
+
+Proposed
+
+== Context
+
+Read xref:0031-distributed-mail-queue.adoc[Distributed Mail Queue] for full context.
+
+*enqueuedMailsV3* and *deletedMailsV2* are never cleaned up and the corresponding blobs are always referenced.
+This is not ideal from both a privacy and a storage cost point of view.
+
+Note that *enqueuedMailsV3* and *deletedMailsV2* rely on timeWindowCompactionStrategy.
+
+== Decision
+
+Add a new `contentStart` table referencing the point in time from which a given mailQueue holds data, for each mail queue.
+
+The values contained between `contentStart` and `browseStart` can safely be deleted.
+
+We can perform this cleanup upon `browseStartUpdate`: once it is finished, we can browse then delete the content of *enqueuedMailsV3* and *deletedMailsV2* contained between `contentStart` and the new `browseStart`, then safely set `contentStart` to the new `browseStart`.
+
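The cleanup window can be sketched as follows (a hypothetical illustration; slice identifiers stand in for the actual time-based slices):

```java
class CleanupWindow {
    // A slice is cleanable once it lies between contentStart (inclusive)
    // and the new browseStart (exclusive): it is no longer exposed
    static boolean isCleanable(long slice, long contentStart, long newBrowseStart) {
        return slice >= contentStart && slice < newBrowseStart;
    }
}
```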
+Content before `browseStart` can safely be considered deletable, as it is no longer exposed by the application.
+We don't need an additional grace-period mechanism for `contentStart`.
+
+A failing cleanup will lead to the content eventually being cleaned up upon the next `browseStart` update.
+
+We will furthermore delete blobStore content upon dequeue, as well as when a mail is deleted or purged via the MailQueue management APIs.
+
+== Consequences
+
+All Cassandra SSTable before `browseStart` can safely be dropped as part of the timeWindowCompactionStrategy.
+
+Updating the browse start will then be twice as expensive, as we need to unreference past slices.
+
+Eventually this will allow reclaiming Cassandra disk space and enforcing mail privacy by removing dangling metadata.
+
+== Alternative
+
+A https://github.com/linagora/james-project/pull/3291#pullrequestreview-393501339[proposal] was made to piggy-back cleanup on dequeue/delete operations.
+The dequeuer/deleter then directly removes the related metadata from `enqueuedMailsV3` and `deletedMailsV2`.
+This simpler design however has several flaws:
+
+* if the cleanup fails for any reason, it cannot be retried in the future.
+There would be no way of cleaning up the related data.
+* this will end up tombstoning live slices, potentially harming the performance of browses, deletes, and browse start updates.
+* this proposal does not leverage the timeWindowCompactionStrategy as efficiently.
diff --git a/docs/modules/development/pages/adr/0033-use-scala-in-event-sourcing-modules.md.adoc b/docs/modules/development/pages/adr/0033-use-scala-in-event-sourcing-modules.md.adoc
new file mode 100644
index 0000000..7b536bb
--- /dev/null
+++ b/docs/modules/development/pages/adr/0033-use-scala-in-event-sourcing-modules.md.adoc
@@ -0,0 +1,33 @@
+= 33. Use scala in event sourcing modules
+
+Date: 2019-12-13
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+James currently uses the Scala programming language in some parts of its code base, particularly for implementing the Distributed Task Manager, which relies on the event sourcing modules.
+
+The module `event-store-memory` already uses Scala.
+
+== Decision
+
+We propose here to convert the event sourcing modules to Scala.
+The modules concerned by this change are:
+
+* `event-sourcing-core`
+* `event-sourcing-pojo`
+* `event-store-api`
+* `event-store-cassandra`
+
+== Rationales
+
+This will help to standardize the `event-*` modules as `event-store-memory` is already written in Scala.
+This change will avoid interoperability concerns with the main consumers of these modules, which are already written in Scala: see the distributed task manager.
+In the long run this will allow stronger typing in these parts of the code and much less verbose code.
+
+== Consequences
+
+We will have to mitigate the spread of the Scala API into the Java code base by implementing Java facades.
diff --git a/docs/modules/development/pages/adr/0034-mailbox-api-visibility-and-usage.md.adoc b/docs/modules/development/pages/adr/0034-mailbox-api-visibility-and-usage.md.adoc
new file mode 100644
index 0000000..626f3c2
--- /dev/null
+++ b/docs/modules/development/pages/adr/0034-mailbox-api-visibility-and-usage.md.adoc
@@ -0,0 +1,49 @@
+= 34. Mailbox API visibility and usage
+
+Date: 2020-04-27
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+All mailboxes implementations rely on `mailbox-store` module that defines some common tools to implement the `mailbox-api` (representing the API defining how to use a mailbox).
+
+For example, a `CassandraMailboxManager` has to extend `StoreMailboxManager` (which implements `MailboxManager` from the `mailbox-api`) and requires the implementation of some ``Mapper``s.
+
+``Mapper``s are designed to provide low-level functions and methods on mailboxes.
+In James, outside of the `mailbox` modules, we are recurrently tempted to rely on some of those common tools located in `mailbox-store` to get easier access to a user's mailboxes or messages.
+
+For example, using a `Mapper` outside these modules to retrieve a message with only its `MessageId`, which is currently not possible at the ``Manager``'s level, tends to violate the ``mailbox-api``'s role and primary mission.
+
+As a matter of fact, we have currently such uses of `mailbox-store` in James:
+
+* `mailbox-adapter` because `Authenticator` and `Authorizator` are part of the `mailbox-store`
+
+The manager layer does further validation, including right checking, event dispatching (mainly resulting in message search index indexing and current quota calculation), and quota validation.
+Not relying on the manager layer is thus error-prone and can lead to security vulnerabilities.
+
+== Decision
+
+We should never rely on classes defined in `mailbox-store` outside of the `mailbox` modules (except in some cases limited to the test scope).
+The right way is to always rely on the ``Manager``s defined in the `mailbox-api` module to access mailboxes and messages, as the `mailbox-api` module defines the API on how to use a mailbox.
+
+We should ensure the correctness of ``Manager``s implementations by providing contract tests rather than by sharing abstract classes.
+
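The contract-test approach can be illustrated with a minimal sketch. The interfaces below are simplified stand-ins, not the real `mailbox-api` types:

```java
import java.util.HashSet;
import java.util.Set;

// Simplified stand-in for a manager defined in mailbox-api
interface SimpleMailboxManager {
    void createMailbox(String name);
    boolean exists(String name);
}

// Contract shared by all implementations, instead of a shared abstract class
interface MailboxManagerContract {
    SimpleMailboxManager testee();

    default boolean createShouldMakeMailboxVisible() {
        SimpleMailboxManager manager = testee();
        manager.createMailbox("INBOX");
        return manager.exists("INBOX");
    }
}

// An in-memory implementation exercised through the contract
class InMemoryMailboxManager implements SimpleMailboxManager {
    private final Set<String> mailboxes = new HashSet<>();
    public void createMailbox(String name) { mailboxes.add(name); }
    public boolean exists(String name) { return mailboxes.contains(name); }
}
```

Each implementation then only provides the `testee()` factory and inherits the shared test cases.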
+Regarding the modules wrongly relying already on `mailbox-store`, we can:
+
+* `mailbox-adapter`: move `Authenticator` and `Authorizator` to `mailbox-api`
+
+== Consequences
+
+We need to introduce some refactorings to be able to rely fully on `mailbox-api` in new emerging cases.
+For example, our `mailbox-api` still lacks APIs to handle messages by their `MessageId`.
+This creates some issues for rebuilding a single message fast view projection, or for the reindexation of a single message.
+
+A refactoring of the session would thus be necessary to bypass this limitation and access a single message without knowing its user from the `mailbox-api` module.
+
+== References
+
+* https://github.com/linagora/james-project/pull/3035#discussion_r363684700[Discussions around rebuild a single message fast view projection]
+* https://www.mail-archive.com/server-dev@james.apache.org/msg64120.html[General mailing list discussion on the session refactoring]
diff --git a/docs/modules/development/pages/adr/0035-distributed-listeners-configuration.md.adoc b/docs/modules/development/pages/adr/0035-distributed-listeners-configuration.md.adoc
new file mode 100644
index 0000000..025077f
--- /dev/null
+++ b/docs/modules/development/pages/adr/0035-distributed-listeners-configuration.md.adoc
@@ -0,0 +1,137 @@
+= 35. Distributed Mailbox Listeners Configuration
+
+Date: 2020-04-23
+
+== Status
+
+Proposed
+
+Supersedes xref:0026-removing-configured-additional-mailboxListeners.adoc[26. Removing a configured additional MailboxListener]
+
+== Context
+
+James enables a user to register additional mailbox listeners.
+
+The distributed James server is handling mailbox event processing (mailboxListener execution) using a RabbitMQ work-queue per listener.
+
+Mailbox listeners can be registered to be triggered every time an event is generated by user interaction with their mailbox.
+They are executed in a distributed fashion following the work-queue messaging pattern.
+The "group" is an attribute of the mailbox listener identifying the work queue it belongs to.
+
+More information about this component can be found in xref:0037-eventbus.adoc[ADR 0037].
+
+Currently, mailbox listeners are determined by the guice bindings of the server and additional mailbox listeners defined via configuration files.
+
+While the configuration might be specific to each James server, what is actually defined in RabbitMQ is common.
+Heterogeneous configuration might then result in unpredictable RabbitMQ resource status.
+This was left as a limitation of xref:0026-removing-configured-additional-mailboxListeners.adoc[26.
+Removing a configured additional MailboxListener].
+
+== Decision
+
+We need to centralize the definition of mailbox listeners.
+
+An event sourcing system will track the configured mailbox listeners.
+
+It will have the following commands:
+
+* *AddListener*: Add a given listener.
+This should be rejected if the group is already used.
+* *RemoveListener*: Remove a given listener.
+
+Configuration changes are not supported.
+The administrator is expected to remove the given listener, then add it again.
+
+It will have the following events:
+
+* *ListenerAdded*: A mailbox listener is added
+* *ListenerRemoved*: A mailbox listener is removed
+
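A minimal sketch of the aggregate's command handling, assuming simplified string-based commands and events (the real system would use typed commands within the event sourcing framework):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative aggregate tracking configured listeners by group
class ConfiguredListeners {
    private final Map<String, String> listenerClassByGroup = new HashMap<>();

    // AddListener: rejected (empty) when the group is already used
    Optional<String> addListener(String group, String listenerClass) {
        if (listenerClassByGroup.containsKey(group)) {
            return Optional.empty();
        }
        listenerClassByGroup.put(group, listenerClass);
        return Optional.of("ListenerAdded");
    }

    // RemoveListener: rejected (empty) when the group is unknown
    Optional<String> removeListener(String group) {
        if (listenerClassByGroup.remove(group) == null) {
            return Optional.empty();
        }
        return Optional.of("ListenerRemoved");
    }
}
```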
+A subscriber will react to these events to modify the RabbitMQ resource accordingly by adding queues, adding or removing bindings.
+
+This event sourcing system differs from the one defined in xref:0026-removing-configured-additional-mailboxListeners.adoc[26.
+Removing a configured additional MailboxListener] by the fact that we should also keep track of listener configuration.
+
+Upon start, James will ensure the *configured mailbox listener event sourcing system* contains the guice-injected listeners, add them if missing (handling the RabbitMQ bindings by this means), then start the eventBus which will consume the given queues.
+
+If a listener is configured with a class unknown to James, start-up fails and James starts in a degraded state allowing the faulty listener to be unconfigured.
+This degraded state will be described in a separate ADR and the link will be updated here.
+
+This differs from xref:0026-removing-configured-additional-mailboxListeners.adoc[26.
+Removing a configured additional MailboxListener] by the fact we no longer need to register all listeners at once.
+
+A WebAdmin endpoint will allow:
+
+* *to add a listener* to the one configured.
+Such a call:
+ ** Will fail if the listener class is not on the local classpath, or if the corresponding group is already used within the *configured mailbox listener aggregate*.
+ ** Upon success, the listener is added to the *configured mailbox listener aggregate*, and the listener is registered locally.
+* *to remove a listener*.
+Such a call:
+ ** Will fail if the listener is required by Guice bindings on the current server or if the listener is not configured.
+ ** Upon success, the listener is removed from the *configured mailbox listener aggregate*, and the listener is   unregistered locally.
+
+A broadcast on the event bus will be attempted to propagate topology changes, by means of a registrationKey common to all nodes, a "TopologyChanged" event, and a mailbox listener starting the MailboxListeners on the local node upon topology changes.
+The `registrationKey` concept is explained in xref:0037-eventbus.adoc[ADR 0037].
+
+If a listener is added but is not on the classpath, an ERROR log is emitted.
+This can happen during a rolling upgrade, which defines a new guice binding for a new mailbox listener.
+Events will still be emitted (and consumed by other James servers); however, a local James upgrade will be required to effectively start processing these events.
+The binding will not need to be redefined.
+
+We will also expose an endpoint listing the groups currently in use, and for each group the associated configuration, if  any.
+This will query the *configured mailbox listener aggregate*.
+
+We will introduce a health check to ensure that RabbitMQ resources actually match the configured listeners, and propose a WebAdmin endpoint to add/remove bindings/queues in a similar fashion to what had been proposed in xref:0026-removing-configured-additional-mailboxListeners.adoc[26. Removing a configured additional MailboxListener].
+This can happen if the James server performing the listener registration fails to create the group/queue.
+This health check will also report if this James server does not succeed in running a given listener, for instance if its class is not on the classpath.
+
+== Consequences
+
+All products other than "Distributed James" are unchanged.
+
+All the currently configured additional listeners will need to be registered.
+
+The definition of mailbox listeners is thus centralized and we are not exposed to a heterogeneous configuration incident.
+
+Mailbox listeners no longer required by guice will still need to be instantiable (even with empty content).
+They will be considered as additional listeners, thus requiring explicit admin unconfiguration, which will be mentioned in the related upgrade instructions.
+Read notes about <<rolling-upgrade-scenari,rolling upgrade scenarii>>.
+
+<<deploying-a-new-custom-listener,Deploying a new custom listener>> also describes how to deploy new custom listeners.
+
+Integration tests relying on additional mailbox listeners of the distributed James product will need to be ported to perform additional mailbox listener registration with this WebAdmin endpoint.
+JMAP SpamAssassin and quota mailing tests are concerned.
+
+== Notes
+
+=== Broadcast of topology changes
+
+=== Rolling upgrade scenarii
+
+During a rolling upgrade, the James version is heterogeneous across the cluster, and so might be the mailbox listeners required at the Guice level.
+
+*case 1*: James Server version 1 does not require listener A, James server version 2 requires listener A.
+
+Since listener A is registered, James server version 1 cannot be rebooted without being upgraded first (as listener A cannot be instantiated).
+
+*case 2*: James Server version 1 requires listener A, James server version 2 does not require listener A.
+
+Upgrading to James version 2 means that listener A is still registered as an additional listener; it needs to be manually unconfigured once the rolling upgrade has finished, which is acceptable as an upgrade instruction.
+We need to make sure the listener can still be instantiated (even with empty code) for a transition period.
+
+=== Deploying a new custom listener
+
+Given a new custom listener, not yet deployed in Distributed James cluster,
+
+To deploy it, an admin needs to follow these steps:
+
+* Add the jar in `extension-jars` folder for each James server
+ ** As `extension-jars` is read at instantiation time, no reboot is required to instantiate the new listener.
+* Call the webadmin endpoint alongside with listener specific configuration to enable the given custom listener.
+The bindings for the new listener will be created and a listener will be consuming its queue on the James server that handled the request.
+* Broadcast of topology changes will ensure the new custom additional mailbox listener will then be instantiated  everywhere without a reboot.
diff --git a/docs/modules/development/pages/adr/0036-against-use-of-conditional-statements-in-guice-modules.md.adoc b/docs/modules/development/pages/adr/0036-against-use-of-conditional-statements-in-guice-modules.md.adoc
new file mode 100644
index 0000000..47e9bc0
--- /dev/null
+++ b/docs/modules/development/pages/adr/0036-against-use-of-conditional-statements-in-guice-modules.md.adoc
@@ -0,0 +1,112 @@
+= 36. Against the use of conditional statements in Guice modules
+
+Date: 2019-12-29
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+James products rely historically on Spring for dependency injection.
+It doesn't use the latest major Spring version (4.x instead of 5.x).
+James uses Spring in a way that enables overriding any class via a configuration file, thus endangering overall correctness by giving too much power to the user.
+
+James proposes several implementations for each of the interfaces it defines.
+The number of possible combinations of implementations is thus really high (on the order of factorial(n) with n > 10).
+It is thus impractical to run tests for each possible component combination.
+We run integration tests for the combinations that we decide bring the most value to users.
+The Spring product defeats this testing logic by allowing the user arbitrary class combinations, which are likely untested.
+
+Instead of having a single product allowing all component combinations, we rather have several products, each exposing a single component combination.
+Components are defined by code in a static fashion.
+We thus can provide a decent level of QA for these products.
+Overriding components requires explicit code modification and recompilation, warning users about the impact of their choices and lowering the project's responsibility.
+Guice had been enacted as a way to reach that goal.
+
+With Guice we expose only supported, well tested combinations of components, thus addressing the combination issue.
+
+Spring applications often bring dependency conflicts, for example between the Lucene and ElasticSearch components, leading to potential runtime or compile-time issues.
+Instead of having a single big application able to instantiate each and every component implementation, we have several products defining their dependencies in a minimalistic way, relying only on the component implementations that are needed.
+
+Here is the list of products we provide:
+
+* In-Memory: A memory based James server, mainly for testing purposes
+* Distributed James: A scalable James server, storing data in various data stores.
+Cassandra is used for metadata,   ElasticSearch for search, RabbitMQ for messaging, and ObjectStorage for blobs.
+* Cassandra: An implementation step toward Distributed James.
+It does not include messaging and ObjectStorage and should not be run in a clustered fashion, but is still relevant for good performance.
+* JPA: A JPA and Lucene based implementation of James.
+Only Derby driver is currently supported.
+* JPA with SMTP only using Derby: A minimalist SMTP server based on JPA storage technology and Derby driver
+* JPA with SMTP only using MariaDB: A minimalist SMTP server based on JPA storage technology and MariaDB driver
+
+Some components however do have several implementations a user can choose from in a given product.
+This is the case for:
+
+* BlobExport: Exporting a blob from the blobStore to an external user.
+Two implementations are currently supported:   localFiles and LinShare.
+* Text extraction: Extracting text from attachments to enable attachment search.
+A Tika implementation exists, but a lighter JSOUP-based option, as well as a no-text-extraction option, are also available.
+
+In order to keep the number of products low, we decided to use conditional statements in modules, based on the configuration, to select which implementation to enable at runtime.
+This eventually defeats the Guice adoption goals mentioned above.
+
+Finally, Blob Storing technology offers a wide combination of technologies:
+
+* ObjectStorage in itself could implement either Swift APIs or Amazon S3 APIs
+* We decided to keep supporting Cassandra for blob storing as an upgrade solution from the Cassandra product to Distributed James for existing users.
+This option also makes sense for small data-sets (typically less than a TB) where storage costs are less of an issue and don't need to be taken into account when reasoning about performance.
+* Proposals such as xref:0014-blobstore-storage-policies.adoc[HybridBlobStore] and then  xref:0025-cassandra-blob-store-cache.adoc[Cassandra BlobStore cache] proposed to leverage Cassandra as a performance  (latency) enhancer for ObjectStorage technologies.
+
+Yet again it had been decided to use conditional statements in modules in order to lower the number of products.
+
+However, some components require expensive resource initialization.
+These operations are performed via a separate module that needs to be installed based on the configuration.
+For instance, the xref:0025-cassandra-blob-store-cache.adoc[Cassandra BlobStore cache] requires an additional cache keyspace that represents a cost and an inconvenience we don't want to pay if we don't rely on that cache.
+Not having the cache module thus enables quickly auditing that the caching Cassandra session is not initialized.
+See  https://github.com/linagora/james-project/pull/3261#pullrequestreview-389804841[this comment] as well as  https://github.com/linagora/james-project/pull/3261#issuecomment-613911695[this comment].
+
+=== Audit
+
+The following modules perform conditional statements upon injection time:
+
+* BlobExportMechanismModule: Choice of the export mechanism
+* ObjectStorageDependenciesModule::selectBlobStoreBuilder: Choice between S3 and Swift ObjectStorage technologies
+* TikaMailboxModule::provideTextExtractor: Choice of text extraction technology
+* BlobStoreChoosingModule::provideBlobStore: Choice of BlobStore technology: Cassandra, ObjectStorage or Hybrid
+* https://github.com/linagora/james-project/pull/3319[Cached blob store] represents a similar problem: should the   blobStore be wrapped by a caching layer?
+
+Cassandra and Distributed products are furthermore duplicated to offer a version supporting LDAP authentication.
+The JPA product does not offer LDAP support.
+
+== Decision
+
+We should no longer rely on conditional statements in Guice module.
+
+Guice modules combination choice should be decided before starting the dependency injection stage.
+
+Each component choice needs to be abstracted by a related configuration POJO.
+
+Products will, given the set of configuration POJOs, generate the modules they should rely on during the dependency injection stage.
+
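The decision can be sketched as follows. `BlobStoreConfiguration` and `ModuleChooser` are illustrative names, not the actual James classes, and the module names are placeholders:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical configuration POJO parsed from configuration files
class BlobStoreConfiguration {
    final String implementation; // e.g. "cassandra" or "s3"
    final boolean cacheEnabled;

    BlobStoreConfiguration(String implementation, boolean cacheEnabled) {
        this.implementation = implementation;
        this.cacheEnabled = cacheEnabled;
    }
}

class ModuleChooser {
    // Decide the module list BEFORE creating the injector: no conditional
    // statement remains inside the Guice modules themselves
    static List<String> choose(BlobStoreConfiguration configuration) {
        List<String> modules = new ArrayList<>();
        modules.add(configuration.implementation + "-blobstore-module");
        if (configuration.cacheEnabled) {
            modules.add("cassandra-blobstore-cache-module");
        }
        return modules; // this list can be logged at INFO at startup
    }
}
```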
+An INFO log will list the modules used to create the Guice injector.
+This enables easy diagnosis of the running components via the selected module list.
+It exposes tested, safe choices to the user while limiting the count of Guice products.
+
+== Consequences
+
+The component combination count remains unchanged for Guice products, but the chosen combination is explicit.
+QA needs are unchanged.
+
+Integration tests need to be adapted to accept component choice configuration POJOs.
+
+The following conditional statements in guice modules need to be removed:
+
+* The https://github.com/linagora/james-project/pull/3319[Cached blob store pull request] addresses ObjectStorageDependenciesModule::selectBlobStoreBuilder and the Cassandra Blob Store Cache conditional statement.
+* The https://github.com/linagora/james-project/pull/3099[S3 native blobStore implementation], alongside S3 endpoint support as part of Swift, removes the need to select the Object Storage implementation.
+* Follow-up work needs to be planned concerning `BlobExportMechanismModule` and `TikaMailboxModule::provideTextExtractor`.
+
+We furthermore need to enable a module choice for LDAP on top of the other existing products.
+We should then remove the dedicated LDAP product variations.
+The corresponding docker images will be based on their non-LDAP versions, overriding the `usersrepository.xml` configuration file; they will be marked as deprecated and eventually removed.
diff --git a/docs/modules/development/pages/adr/0037-eventbus.md.adoc b/docs/modules/development/pages/adr/0037-eventbus.md.adoc
new file mode 100644
index 0000000..5a8b2e5
--- /dev/null
+++ b/docs/modules/development/pages/adr/0037-eventbus.md.adoc
@@ -0,0 +1,59 @@
+= 37. Event bus
+
+Date: 2020-05-05
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+Many features rely on behaviors triggered by interactions with the mailbox API main interfaces (`RightManager`, `MailboxManager`, `MessageManager`, `MessageIdManager`).
+We need to provide a convenient extension mechanism for organizing the execution of these behaviours, providing retries and advanced error handling.
+
+Also, protocols enable notifications upon mailbox modifications.
+This is for instance the case for `RFC-2177 IMAP IDLE`, leveraged for `RFC-3501 IMAP unsolicited notifications` when selecting a mailbox, as well as for maintaining the `+IMAP Message Sequence Number <-> Unique IDentifier+` (MSN \<-> UID) mapping.
+Changes happening to a specific entity (mailbox) need to be propagated to the relevant listeners.
+
+== Decision
+
+The James mailbox component, a core component of James handling the storage of mails and mailboxes, should use an event-driven architecture.
+
+It means every meaningful action on mailboxes or messages triggers an event for any component to react to that event.
+
+`MailboxListener` allows executing actions upon mailbox events.
+They can be used for a wide variety of purposes, like enriching mailbox manager features or enabling user notifications upon mailbox operations performed by other devices via other protocol sessions.
+
+Interactions happen via the managers (`RightManager`, `MailboxManager`, `MessageManager`, `MessageIdManager`) which emit an event on the `EventBus`, which will ensure the relevant ``MailboxListener``s will be executed at least once.
+
+``MailboxListener``s can be registered in a work-queue fashion on the `EventBus`.
+Each work queue corresponds to a given MailboxListener class with the same configuration, identified by its group.
+Each event is executed at least once within a James cluster; errors are retried with an exponential back-off delay.
+If the execution keeps failing, the event is stored in `DeadLetter` for later reprocessing, triggered via WebAdmin.
+
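The exponential back-off retry delay can be sketched as follows (a hypothetical illustration; the actual parameter names and values are configuration-dependent):

```java
class RetryBackoff {
    // Delay before the (retryCount + 1)-th execution attempt,
    // growing as firstBackoff * factor^retryCount
    static long retryDelayMillis(long firstBackoffMillis, double factor, int retryCount) {
        return Math.round(firstBackoffMillis * Math.pow(factor, retryCount));
    }
}
```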
+Guice products enable the registration of additional mailbox listeners.
+A user can furthermore define their own mailbox listeners via the use of `extension-jars`.
+
+A MailboxListener can also be registered to be executed only on events concerning a specific entity (e.g. a mailbox).
+The `registrationKey` identifies the entities concerned by the event.
+Upon event emission, the manager will indicate the ``registrationKey``s this event should be sent to.
+A mailbox listener will thus only receive the events for the registration keys it is registered to, in an at-least-once fashion.
+
+== Consequences
+
+We need to provide an `In VM` implementation of the EventBus for single server deployments.
+
+We also need to provide xref:0038-distributed-eventbus.adoc[a distributed event bus implementation].
+
+== Current usages
+
+The following features are implemented as Group mailbox listeners:
+
+* Email indexing in Lucene or ElasticSearch
+* Deletion of mailbox annotations
+* Cassandra Message metadata cleanup upon deletion
+* Quota updates
+* Quota indexing
+* Over Quota mailing
+* SpamAssassin Spam/Ham reporting
diff --git a/docs/modules/development/pages/adr/0038-distributed-eventbus.md.adoc b/docs/modules/development/pages/adr/0038-distributed-eventbus.md.adoc
new file mode 100644
index 0000000..d96b64e
--- /dev/null
+++ b/docs/modules/development/pages/adr/0038-distributed-eventbus.md.adoc
@@ -0,0 +1,44 @@
+= 38. Distributed Event bus
+
+Date: 2020-05-25
+
+== Status
+
+Accepted (lazy consensus)
+
+== Context
+
+Read xref:0037-eventbus.adoc[Event Bus ADR] for context.
+
+Given several James servers, we need them to share a common EventBus.
+
+This:
+
+* Ensures better load balancing for `group mailbox listeners`.
+* Is required for the correctness of notifications (like IMAP IDLE).
+
+== Decision
+
+Provide a distributed implementation of the EventBus leveraging RabbitMQ.
+
+Events are emitted to a single Exchange.
+
+Each group will have a corresponding queue, bound to the main exchange, with a default routing key.
+Each eventBus will consume this queue and execute the relevant listener, ensuring at least once execution at the cluster level.
+
+Retries are managed via a dedicated exchange for each group: as we need to count retries, the message headers need to be altered, and we cannot rely on RabbitMQ built-in retries.
+Each time the execution fails locally, a new event is emitted via the dedicated exchange, and the original event is acknowledged.
+
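The retry-counting mechanism can be sketched as follows. The header name and the retry limit are assumptions for illustration, not the actual James values:

```java
import java.util.HashMap;
import java.util.Map;

class RetryPublisher {
    static final String RETRY_HEADER = "eventbus-retry-count"; // assumed header name
    static final int MAX_RETRIES = 3;                          // assumed limit

    // Returns the headers of the re-published event, or null when giving up
    // (the event would then be stored in DeadLetter)
    static Map<String, Object> republish(Map<String, Object> headers) {
        int retryCount = (int) headers.getOrDefault(RETRY_HEADER, 0);
        if (retryCount >= MAX_RETRIES) {
            return null;
        }
        Map<String, Object> updated = new HashMap<>(headers);
        updated.put(RETRY_HEADER, retryCount + 1);
        return updated;
    }
}
```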
+Each eventBus will have a dedicated exclusive queue, bound to the main exchange with the ``registrationKey``s used by local notification mailbox listeners (to only receive the corresponding subset of events).
+Errors are not retried for notifications and failures are not persisted within `DeadLetter`, achieving at-most-once event delivery.
+
+== Related ADRs
+
+The implementation of the distributed EventBus suffers from the following flaws:
+
+* xref:0026-removing-configured-additional-mailboxListeners.adoc[Removing a configured additional MailboxListener]
+* xref:0035-distributed-listeners-configuration.adoc[Distributed Mailbox Listeners Configuration] also covers topology changes in more detail and supersedes ADR 0026.
+
+The following enhancements have furthermore been contributed:
+
+* xref:0027-eventBus-error-handling-upon-dispatch.adoc[EventBus error handling upon dispatch]
diff --git a/docs/modules/development/pages/adr/0039-distributed-blob-garbage-collector.md.adoc b/docs/modules/development/pages/adr/0039-distributed-blob-garbage-collector.md.adoc
new file mode 100644
index 0000000..e19a8bb
--- /dev/null
+++ b/docs/modules/development/pages/adr/0039-distributed-blob-garbage-collector.md.adoc
@@ -0,0 +1,687 @@
+= 39. Distributed blob garbage collector
+
+Date: 2020-02-18
+
+== Status
+
+Proposed
+
+== Context
+
+The body, headers, and attachments of mails are stored as blobs in a blob store.
+In order to save space in those stores, those blobs are de-duplicated using a hash of their content.
+To achieve that, the current blob store reads the content of the blob before saving it, and generates its id from a hash of this content.
+This way two blobs with the same content will share the same id and thus be saved only once.
+This makes the safe deletion of one of those blobs a non-trivial problem, as we can't delete a blob without ensuring that all references to it are themselves deleted.
+For example, if two messages share the same blob, there is currently no way to tell, when we delete one message, whether the blob is still referenced by another message.
+
+== Decision
+
+To address this issue, we propose to implement a distributed blob garbage collector built upon the previously developed Distributed Task Manager.
+The de-duplicating blob store will keep track of the references pointing toward a blob in a `References` table.
+It will also keep track of the deletion requests for a blob in a `Deletions` table.
+When the garbage collector algorithm runs it will fetch from the `Deletions` table the blobs considered to be effectively deleted, and will check in the `References` table if there are still some references to them.
+If there is no more reference to a blob, it will be effectively deleted from the blob store.
+
+To avoid concurrency issues, where we could garbage collect a blob at the same time a new reference to it appears, a `reference generation` notion will be added.
+The de-duplicating id of the blobs, which was previously constructed using only the hash of their content, will now include this `reference generation` too.
+At a given interval a new `reference generation` will be emitted; from then on, all new blobs will point to this new generation.
+
+So a `garbage collection iteration` will run only on the `reference generation` `n-2` to avoid concurrency issues.
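The generation-aware id scheme can be sketched as below. This is an illustrative model only, not James code: the `generation_hash` id layout and the use of SHA-256 are assumptions for the example, the concrete id format is not specified by this ADR.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class GenerationAwareBlobId {
    // Derive a de-duplicating blob id from the current reference generation
    // plus a hash of the content: identical content stored under two
    // different generations now yields two distinct ids, which is what makes
    // a whole generation safe to garbage collect independently.
    static String blobId(byte[] content, String referenceGeneration) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest(content)) {
                hex.append(String.format("%02x", b & 0xff));
            }
            return referenceGeneration + "_" + hex;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] content = "same content".getBytes(StandardCharsets.UTF_8);
        // Same content, same generation: the ids collide, the blob is stored once.
        System.out.println(blobId(content, "rg1").equals(blobId(content, "rg1"))); // true
        // Same content, different generations: distinct ids, the blob is stored twice.
        System.out.println(blobId(content, "rg1").equals(blobId(content, "rg2"))); // false
    }
}
```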
+
+The switch of generation will be triggered by a task running on the distributed task manager.
+This task will emit an event into the event sourcing system to increment the `reference generation`.
+
+== Alternatives
+
+Not de-duplicating the blobs' content: this simple approach, which involves storing the same blob many times, can in some scenarios be really slow and costly.
+It can nevertheless be preferred in some cases for the sake of simplicity, data security...
+
+== Consequences
+
+This change will necessitate extracting the base blob store responsibilities (store a blob, delete a blob, read a blob) from the current blob store implementation, which handles the de-duplication, id generation...
+The garbage collector will use this low level blob store in order to effectively delete the blobs.
+
+One other consequence of this work is that there will be no de-duplication across different `reference generations`, i.e. two blobs with the same content will now be stored twice if they were created during two different `reference generations`.
+
+When writing a blob into the de-duplicating blob store, we will need to specify the reference to the object (MessageId, AttachmentId...) we store the blob for.
+This can make some components harder to implement as we will have to propagate the references.
+
+Since we will not build a distributed task scheduler, incrementing the `reference generation` and periodically launching a `garbage collection iteration` will be handled by an external scheduler (cron job, Kubernetes cronjob...) which will call a webadmin endpoint to launch these tasks periodically.
+
+== Algorithm visualisation
+
+=== Generation 1 / Iteration 1
+
+* Events
+ ** `rg1` reference generation is emitted
+ ** `gci1` garbage collection iteration is emitted
+ ** An email is sent to `user1`, a `m1` message, and a blob `b1` are stored with `rg1`
+ ** An email is sent to `user1` and `user2`, `m2` and `m3` messages, and a blob `b2` are stored with `rg1`
+
+==== Tables
+
+===== Generations
+
+|===
+| reference generation id
+
+| rg1
+|===
+
+|===
+| garbage collection iteration id
+
+| gci1
+|===
+
+===== Blobs
+
+|===
+| blob id | reference generation id
+
+| b1
+| rg1
+
+| b2
+| rg1
+|===
+
+===== References
+
+|===
+| message id | blob id | reference generation id
+
+| m1
+| b1
+| rg1
+
+| m2
+| b2
+| rg1
+
+| m3
+| b2
+| rg1
+|===
+
+===== Deletions
+
+Empty
+
+=== Generation 2 / Iteration 2
+
+* Events
+ ** `rg2` reference generation is emitted
+ ** `gci2` garbage collection iteration is emitted
+ ** An email is sent to `user1`, a `m4` message, and a blob `b3` are stored with `rg2`
+ ** An email is sent to `user1` and `user2`, `m5` and `m6` messages, and a blob `b4` are stored with `rg2`
+
+==== Tables
+
+===== Generations
+
+|===
+| reference generation id
+
+| rg1
+| rg2
+|===
+
+|===
+| garbage collection iteration id
+
+| gci1
+| gci2
+|===
+
+===== Blobs
+
+|===
+| blob id | reference generation id
+
+| b1
+| rg1
+
+| b2
+| rg1
+
+| b3
+| rg2
+
+| b4
+| rg2
+|===
+
+===== References
+
+|===
+| message id | blob id | reference generation id
+
+| m1
+| b1
+| rg1
+
+| m2
+| b2
+| rg1
+
+| m3
+| b2
+| rg1
+
+| m4
+| b3
+| rg2
+
+| m5
+| b4
+| rg2
+
+| m6
+| b4
+| rg2
+|===
+
+===== Deletions
+
+Empty
+
+=== Generation 3 / Iteration 3
+
+* Events
+ ** `rg3` reference generation is emitted
+ ** `gci3` garbage collection iteration is emitted
+ ** An email is sent to `user1`, a `m7` message, and a blob `b5` are stored with `rg3`
+ ** An email is sent to `user1` and `user2`, `m8` and `m9` messages, and a blob `b6` are stored with `rg3`
+ ** `user1` deletes `m1`, `m2`, `m7`, and `m8` with `gci3`
+ ** `user2` deletes `m3` with `gci3`
+
+==== Tables: before deletions
+
+===== Generations
+
+|===
+| reference generation id
+
+| rg1
+| rg2
+| rg3
+|===
+
+|===
+| garbage collection iteration id
+
+| gci1
+| gci2
+| gci3
+|===
+
+===== Blobs
+
+|===
+| blob id | reference generation id
+
+| b1
+| rg1
+
+| b2
+| rg1
+
+| b3
+| rg2
+
+| b4
+| rg2
+
+| b5
+| rg3
+
+| b6
+| rg3
+|===
+
+===== References
+
+|===
+| message id | blob id | reference generation id
+
+| m1
+| b1
+| rg1
+
+| m2
+| b2
+| rg1
+
+| m3
+| b2
+| rg1
+
+| m4
+| b3
+| rg2
+
+| m5
+| b4
+| rg2
+
+| m6
+| b4
+| rg2
+
+| m7
+| b5
+| rg3
+
+| m8
+| b6
+| rg3
+
+| m9
+| b6
+| rg3
+|===
+
+===== Deletions
+
+Empty
+
+==== Tables: after deletions
+
+===== Generations
+
+|===
+| reference generation id
+
+| rg1
+| rg2
+| rg3
+|===
+
+|===
+| garbage collection iteration id
+
+| gci1
+| gci2
+| gci3
+|===
+
+===== Blobs
+
+|===
+| blob id | reference generation id
+
+| b1
+| rg1
+
+| b2
+| rg1
+
+| b3
+| rg2
+
+| b4
+| rg2
+
+| b5
+| rg3
+
+| b6
+| rg3
+|===
+
+===== References
+
+|===
+| message id | blob id | reference generation id
+
+| m4
+| b3
+| rg2
+
+| m5
+| b4
+| rg2
+
+| m6
+| b4
+| rg2
+
+| m9
+| b6
+| rg3
+|===
+
+===== Deletions
+
+|===
+| blob id | reference generation id | date | garbage collection iteration id
+
+| b1
+| rg1
+| 10:42
+| gci3
+
+| b2
+| rg1
+| 10:42
+| gci3
+
+| b2
+| rg1
+| 13:37
+| gci3
+
+| b5
+| rg3
+| 10:42
+| gci3
+
+| b6
+| rg3
+| 10:42
+| gci3
+|===
+
+==== Running the algorithm
+
+* fetch `Deletions` for `gci3` in `deletions`
+* find distinct `reference-generation-id` of `deletions` in `generations = {rg1, rg3}`
+* For each generation
+ ** _rg1_
+  *** filter `deletions` to keep only `rg1` entries and extract `blob-ids` in `concernedBlobs = {b1, b2}`
+  *** fetch all references to `concernedBlobs` and build a Bloom-Filter in `foundReferences = {}`
+  *** filter `concernedBlobs` to keep only those which are not present in `foundReferences` in `blobsToDelete = {b1, b2}`
+  *** Remove `blobsToDelete` from `Blobs` and `Deletions`
+ ** _rg3_
+  *** filter `deletions` to keep only `rg3` entries and extract `blob-ids` in `concernedBlobs = {b5, b6}`
+  *** fetch all references to `concernedBlobs` and build a Bloom-Filter in `+foundReferences = {b6}+`
+  *** filter `concernedBlobs` to keep only those which are not present in `foundReferences` in `+blobsToDelete = {b5}+`
+  *** Remove `blobsToDelete` from `Blobs` and `Deletions`
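The per-generation steps above can be sketched as follows. This is a self-contained illustration, not James code: where the algorithm builds a Bloom filter of the found references, the sketch uses a plain `Set` (exact, no false positives) to stay short.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class GcIterationSketch {
    // One garbage collection iteration: deletionGenerations maps each blob id
    // requested for deletion to its reference generation; referencedBlobs
    // holds every blob id the References table still points to.
    static Set<String> blobsToDelete(Map<String, String> deletionGenerations,
                                     Set<String> referencedBlobs) {
        // Group the concerned blobs by reference generation, as the ADR does.
        Map<String, Set<String>> byGeneration = new HashMap<>();
        deletionGenerations.forEach((blobId, generation) ->
            byGeneration.computeIfAbsent(generation, g -> new HashSet<>()).add(blobId));

        Set<String> toDelete = new HashSet<>();
        byGeneration.forEach((generation, concernedBlobs) -> {
            for (String blobId : concernedBlobs) {
                if (!referencedBlobs.contains(blobId)) {
                    toDelete.add(blobId); // no reference left: safe to delete
                }
            }
        });
        return toDelete;
    }

    public static void main(String[] args) {
        // gci3 state after the deletions above: b1, b2, b5 are unreferenced,
        // b6 is still referenced by m9.
        Map<String, String> deletions = Map.of(
            "b1", "rg1", "b2", "rg1", "b5", "rg3", "b6", "rg3");
        Set<String> referenced = Set.of("b3", "b4", "b6");
        System.out.println(blobsToDelete(deletions, referenced)); // b1, b2, b5 in some order
    }
}
```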
+
+==== Tables: after garbage collection
+
+===== Generations
+
+|===
+| reference generation id
+
+| rg1
+| rg2
+| rg3
+|===
+
+|===
+| garbage collection iteration id
+
+| gci1
+| gci2
+| gci3
+|===
+
+===== Blobs
+
+|===
+| blob id | reference generation id
+
+| b3
+| rg2
+
+| b4
+| rg2
+
+| b6
+| rg3
+|===
+
+===== References
+
+|===
+| message id | blob id | reference generation id
+
+| m4
+| b3
+| rg2
+
+| m5
+| b4
+| rg2
+
+| m6
+| b4
+| rg2
+
+| m9
+| b6
+| rg3
+|===
+
+===== Deletions
+
+|===
+| blob id | reference generation id | date | garbage collection iteration id
+
+| b6
+| rg3
+| 10:42
+| gci3
+|===
+
+=== Generation 4 / Iteration 4
+
+* Events
+ ** `rg4` reference generation is emitted
+ ** `gci4` garbage collection iteration is emitted
+ ** `user2` deletes `m9` with `gci4`
+
+==== Tables: before deletions
+
+===== Generations
+
+|===
+| reference generation id
+
+| rg1
+| rg2
+| rg3
+| rg4
+|===
+
+|===
+| garbage collection iteration id
+
+| gci1
+| gci2
+| gci3
+| gci4
+|===
+
+===== Blobs
+
+|===
+| blob id | reference generation id
+
+| b3
+| rg2
+
+| b4
+| rg2
+
+| b6
+| rg3
+|===
+
+===== References
+
+|===
+| message id | blob id | reference generation id
+
+| m4
+| b3
+| rg2
+
+| m5
+| b4
+| rg2
+
+| m6
+| b4
+| rg2
+
+| m9
+| b6
+| rg3
+|===
+
+===== Deletions
+
+|===
+| blob id | reference generation id | date | garbage collection iteration id
+
+| b6
+| rg3
+| 10:42
+| gci3
+|===
+
+==== Tables: after deletions
+
+===== Generations
+
+|===
+| reference generation id
+
+| rg1
+| rg2
+| rg3
+| rg4
+|===
+
+|===
+| garbage collection iteration id
+
+| gci1
+| gci2
+| gci3
+| gci4
+|===
+
+===== Blobs
+
+|===
+| blob id | reference generation id
+
+| b3
+| rg2
+
+| b4
+| rg2
+
+| b6
+| rg3
+|===
+
+===== References
+
+|===
+| message id | blob id | reference generation id
+
+| m4
+| b3
+| rg2
+
+| m5
+| b4
+| rg2
+
+| m6
+| b4
+| rg2
+|===
+
+===== Deletions
+
+|===
+| blob id | reference generation id | date | garbage collection iteration id
+
+| b6
+| rg3
+| 10:42
+| gci3
+
+| b6
+| rg3
+| 18:42
+| gci4
+
+==== Running the algorithm
+
+* fetch `Deletions` for `gci4` in `deletions`
+* find distinct `reference-generation-id` of `deletions` in `+generations = {rg3}+`
+* For each generation
+ ** _rg3_
+  *** filter `deletions` to keep only `rg3` entries and extract `blob-ids` in `+concernedBlobs = {b6}+`
+  *** fetch all references to `concernedBlobs` and build a Bloom-Filter in `foundReferences = {}`
+  *** filter `concernedBlobs` to keep only those which are not present in `foundReferences` in `+blobsToDelete = {b6}+`
+  *** Remove `blobsToDelete` from `Blobs` and `Deletions`
+
+==== Tables: after garbage collection
+
+===== Generations
+
+|===
+| reference generation id
+
+| rg1
+| rg2
+| rg3
+| rg4
+|===
+
+|===
+| garbage collection iteration id
+
+| gci1
+| gci2
+| gci3
+| gci4
+|===
+
+===== Blobs
+
+|===
+| blob id | reference generation id
+
+| b3
+| rg2
+
+| b4
+| rg2
+|===
+
+===== References
+
+|===
+| message id | blob id | reference generation id
+
+| m4
+| b3
+| rg2
+
+| m5
+| b4
+| rg2
+
+| m6
+| b4
+| rg2
+|===
+
+===== Deletions
+
+Empty
diff --git a/docs/modules/migrated/nav.adoc b/docs/modules/migrated/nav.adoc
new file mode 100644
index 0000000..31f850e
--- /dev/null
+++ b/docs/modules/migrated/nav.adoc
@@ -0,0 +1 @@
+* xref:index.adoc[]
diff --git a/docs/modules/migrated/pages/mailet/quickstart.md.adoc b/docs/modules/migrated/pages/mailet/quickstart.md.adoc
new file mode 100644
index 0000000..6023750
--- /dev/null
+++ b/docs/modules/migrated/pages/mailet/quickstart.md.adoc
@@ -0,0 +1,32 @@
+= Mailets for developers
+
+== Artifact names
+
+All binary (and source) artifacts are available via http://repo.maven.apache.org/maven2[Maven Central].
+
+The project *groupId* is _org.apache.james_ and the artifact names are:
+
+* *apache-mailet-api* - for the Mailet API
+* *apache-mailet-base* - for base Mailets
+* *apache-mailet-standard* - for Standard Mailets
+* *apache-mailet-crypto* - for Crypto Mailets
+* *mailetdocs-maven-plugin* if you wish to extract documentation from sources
+
+Just include something like this in your _pom.xml_
+
+----
+<dependencies>
+    <dependency>
+        <groupId>org.apache.james</groupId>
+        <artifactId>apache-mailet-api</artifactId>
+        <version>3.3.0</version>
+    </dependency>
+    <!-- other dependencies -->
+</dependencies>
+----
+
+=== Write your own mailets
+
+To learn how to write your own mailets please have a look at https://github.com/apache/james-project/blob/master/mailet/base/src/main/java/org/apache/mailet/base/GenericMatcher.java[Generic Matcher] and https://github.com/apache/james-project/blob/master/mailet/base/src/main/java/org/apache/mailet/base/GenericMailet.java[Generic Mailet].
+
+Another good learning source are the unit tests from https://github.com/apache/james-project/tree/master/mailet/standard/src/main/java/org/apache/james/transport[Standard Mailets]
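A minimal sketch of the mailet programming model may also help. The `GenericMailet` and `Mail` types below are simplified stand-ins so the example is self-contained; a real mailet would instead depend on apache-mailet-api/base and extend `org.apache.mailet.base.GenericMailet`, whose `Mail` type carries much more (sender, recipients, state...).

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for the real org.apache.mailet API, included only so
// that this sketch compiles on its own.
abstract class GenericMailet {
    public void init() {}
    public abstract void service(Mail mail);
}

class Mail {
    private final Map<String, String> attributes = new HashMap<>();
    public void setAttribute(String name, String value) { attributes.put(name, value); }
    public String getAttribute(String name) { return attributes.get(name); }
}

// A hypothetical mailet that tags every mail it processes with an attribute.
public class TaggingMailet extends GenericMailet {
    @Override
    public void service(Mail mail) {
        mail.setAttribute("X-Processed-By", "TaggingMailet");
    }

    public static void main(String[] args) {
        Mail mail = new Mail();
        new TaggingMailet().service(mail);
        System.out.println(mail.getAttribute("X-Processed-By")); // TaggingMailet
    }
}
```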
diff --git a/docs/modules/migrated/pages/mailet/release-notes.md.adoc b/docs/modules/migrated/pages/mailet/release-notes.md.adoc
new file mode 100644
index 0000000..3e19f3c
--- /dev/null
+++ b/docs/modules/migrated/pages/mailet/release-notes.md.adoc
@@ -0,0 +1,13 @@
+= Release Notes
+
+Apache James Project Team (2012/12/28)
+
+== Release Notes
+
+=== 2.5.0
+
+Release 2.5.0 marks a new milestone in Apache Mailet life.
+All Apache James mailet related sub-projects (Api, Base, Crypto, Standard) have been merged into a multi-module maven project.
+They share the same version number, a common naming scheme and will be released together.
+Apart from bug fixing, one notable release characteristic is that the 2.5.0 jars are OSGi bundles.
+
+== Earlier Releases
+
+For information regarding the *Release notes* of earlier releases, please visit the link:release-notes-old.html[Old releases] page.
diff --git a/docs/modules/migrated/pages/server/install/guice-cassandra-rabbitmq-swift.md.adoc b/docs/modules/migrated/pages/server/install/guice-cassandra-rabbitmq-swift.md.adoc
new file mode 100644
index 0000000..bea1424
--- /dev/null
+++ b/docs/modules/migrated/pages/server/install/guice-cassandra-rabbitmq-swift.md.adoc
@@ -0,0 +1,97 @@
+= Guice-Cassandra-Rabbitmq-Swift installation guide
+
+== Building
+
+=== Requirements
+
+* Java 11 SDK
+* Docker / ElasticSearch 6.3.2, RabbitMQ Management 3.3.7, Swift ObjectStorage 2.15.1 and Cassandra 3.11.3
+* Maven 3
+
+=== Building the artifacts
+
+A usual compilation using Maven will produce two artifacts in the server/container/guice/cassandra-rabbitmq-guice/target directory:
+
+* james-server-cassandra-rabbitmq-guice.jar
+* james-server-cassandra-rabbitmq-guice.lib
+
+You can for example run in the base of https://github.com/apache/james-project[this git repository]:
+
+----
+mvn clean install
+----
+
+== Running
+
+=== Requirements
+
+* Cassandra 3.11.3
+* ElasticSearch 6.3.2
+* RabbitMQ-Management 3.8.1
+* Swift ObjectStorage 2.15.1 or Scality S3 server or AWS S3
+
+=== James Launch
+
+To run James, you have to create a directory containing the required configuration files.
+
+James requires the configuration to be in a subfolder of the working directory called *conf*.
+You can get a sample directory for configuration from https://github.com/apache/james-project/tree/master/dockerfiles/run/guice/cassandra-rabbitmq/destination/conf[dockerfiles/run/guice/cassandra-rabbitmq/destination/conf].
+You might need to adapt it to your needs.
+
+You also need to generate a keystore in your conf folder with the following command:
+
+[source,bash]
+----
+$ keytool -genkey -alias james -keyalg RSA -keystore conf/keystore
+----
+
+You need to have a Cassandra, ElasticSearch and RabbitMQ instance running.
+You can either install the servers or launch them via docker:
+
+[source,bash]
+----
+$ docker run -d -p 9042:9042 --name=cassandra cassandra:3.11.3
+$ docker run -d -p 9200:9200 --name=elasticsearch --env 'discovery.type=single-node' docker.elastic.co/elasticsearch/elasticsearch:6.3.2
+$ docker run -d -p 5672:5672 -p 15672:15672 --name=rabbitmq rabbitmq:3.8.1-management
+$ docker run -d -p 5000:5000 -p 8080:8080 -p 35357:35357 --name=swift linagora/openstack-keystone-swift:pike
+----
+
+Once everything is set up, you just have to run the jar with:
+
+[source,bash]
+----
+$ java -Dworking.directory=. -jar target/james-server-cassandra-rabbitmq-guice.jar
+----
+
+==== Using AWS S3 or Scality S3 server
+
+In order to use AWS S3 or a compatible implementation, `blobstore.properties` has to be filled with:
+
+----
+objectstorage.provider=aws-s3
+objectstorage.namespace=james
+objectstorage.s3.endPoint=http://scality:8080/
+objectstorage.s3.accessKeyId=accessKey1
+objectstorage.s3.secretKey=verySecretKey1
+----
+
+To use Scality S3 server you have to launch it instead of the swift container:
+
+[source,bash]
+----
+$ docker run -d -p 8080:8000 --name=s3 scality/s3server:6018536a
+----
+
+More information about available options https://hub.docker.com/r/scality/s3server[here].
+
+== Guice-cassandra-rabbitmq-ldap
+
+You can follow the same guide to build and run the guice-cassandra-rabbitmq-swift-ldap artifact, except that:
+
+* The *jar* and *libs* need to be retrieved from server/container/guice/cassandra-rabbitmq-ldap-guice/target after compilation
+* The sample configuration can be found in https://github.com/apache/james-project/tree/master/dockerfiles/run/guice/cassandra-rabbitmq-ldap/destination/conf[dockerfiles/run/guice/cassandra-rabbitmq-ldap/destination/conf]
+* You need to configure James to connect to a running LDAP server.
+The configuration file is located in https://github.com/apache/james-project/tree/master/dockerfiles/run/guice/cassandra-rabbitmq-ldap/destination/conf/usersrepository.xml[dockerfiles/run/guice/cassandra-rabbitmq-ldap/destination/conf/usersrepository.xml]
+* You can then launch James via this command:
+
+[source,bash]
+----
+$ java -Dworking.directory=. -jar target/james-server-cassandra-rabbitmq-ldap-guice.jar
+----
diff --git a/docs/modules/migrated/pages/server/install/guice-cassandra.md.adoc b/docs/modules/migrated/pages/server/install/guice-cassandra.md.adoc
new file mode 100644
index 0000000..98c5c31
--- /dev/null
+++ b/docs/modules/migrated/pages/server/install/guice-cassandra.md.adoc
@@ -0,0 +1,74 @@
+= Guice-Cassandra installation guide
+
+== Building
+
+=== Requirements
+
+* Java 11 SDK
+* Docker / ElasticSearch 6.3.2 and Cassandra 3.11.3
+* Maven 3
+
+=== Building the artifacts
+
+A usual compilation using Maven will produce two artifacts in the server/container/guice/cassandra-guice/target directory:
+
+* james-server-cassandra-guice.jar
+* james-server-cassandra-guice.lib
+
+You can for example run in the base of https://github.com/apache/james-project[this git repository]:
+
+----
+mvn clean install
+----
+
+== Running
+
+=== Requirements
+
+* Cassandra 3.11.3
+* ElasticSearch 6.3.2
+
+=== James Launch
+
+To run James, you have to create a directory containing the required configuration files.
+
+James requires the configuration to be in a subfolder of the working directory called *conf*.
+You can get a sample directory for configuration from https://github.com/apache/james-project/tree/master/dockerfiles/run/guice/cassandra/destination/conf[dockerfiles/run/guice/cassandra/destination/conf].
+You might need to adapt it to your needs.
+
+You also need to generate a keystore in your conf folder with the following command:
+
+[source,bash]
+----
+$ keytool -genkey -alias james -keyalg RSA -keystore conf/keystore
+----
+
+You need to have a Cassandra and an ElasticSearch instance running.
+You can either install the servers or launch them via docker:
+
+[source,bash]
+----
+$ docker run -d -p 9042:9042 --name=cassandra cassandra:3.11.3
+$ docker run -d -p 9200:9200 --name=elasticsearch --env 'discovery.type=single-node' docker.elastic.co/elasticsearch/elasticsearch:6.3.2
+----
+
+Once everything is set up, you just have to run the jar with:
+
+[source,bash]
+----
+$ java -Dworking.directory=. -jar target/james-server-cassandra-guice.jar
+----
+
+== Guice-cassandra-ldap
+
+You can follow the same guide to build and run the guice-cassandra-ldap artifact, except that:
+
+* The *jar* and *libs* need to be retrieved from server/container/guice/cassandra-ldap-guice/target after compilation
+* The sample configuration can be found in https://github.com/apache/james-project/tree/master/dockerfiles/run/guice/cassandra-ldap/destination/conf[dockerfiles/run/guice/cassandra-ldap/destination/conf]
+* You need a running LDAP server to connect to.
+* You can then launch James via this command:
+
+[source,bash]
+----
+$ java -Dworking.directory=. -jar target/james-server-cassandra-ldap-guice.jar
+----
diff --git a/docs/modules/migrated/pages/server/install/guice-jpa-smtp.md.adoc b/docs/modules/migrated/pages/server/install/guice-jpa-smtp.md.adoc
new file mode 100644
index 0000000..b933e09
--- /dev/null
+++ b/docs/modules/migrated/pages/server/install/guice-jpa-smtp.md.adoc
@@ -0,0 +1,48 @@
+= Guice-JPA-SMTP installation guide
+
+== Building
+
+=== Requirements
+
+* Java 11 SDK
+* Docker
+* Maven (optional)
+
+=== Download the artifacts
+
+Download james-jpa-smtp-guice-3.3.0.zip from http://james.apache.org/download.cgi#Apache_James_Server[the download page] and extract it.
+
+=== (alternative) Building the artifacts
+
+A usual compilation using Maven of this https://github.com/apache/james-project[Git repository content] will produce two artifacts in the server/container/guice/jpa-smtp/target directory:
+
+* james-server-jpa-smtp-$\{version}.jar
+* james-server-jpa-smtp-$\{version}.lib
+
+To run James, you have to create a directory named *conf* containing the required configuration files.
+
+A https://github.com/apache/james-project/tree/master/server/container/guice/jpa-smtp/sample-configuration[sample directory] is provided with some default values you may need to replace.
+
+== Running
+
+=== James Launch
+
+Edit the configuration to match your needs.
+
+You also need to generate a keystore with the following command:
+
+[source,bash]
+----
+$ keytool -genkey -alias james -keyalg RSA -keystore conf/keystore
+----
+
+Once everything is set up, you just have to run the jar with:
+
+[source,bash]
+----
+$ java -classpath 'james-server-jpa-smtp-guice.jar:james-server-jpa-smtp-guice.lib/*' \
+    -javaagent:james-server-jpa-smtp-guice.lib/openjpa-2.4.2.jar \
+    -Dlogback.configurationFile=conf/logback.xml \
+    -Dworking.directory=. \
+    org.apache.james.JPAJamesServerMain
+----
diff --git a/docs/modules/migrated/pages/server/install/guice-jpa.md.adoc b/docs/modules/migrated/pages/server/install/guice-jpa.md.adoc
new file mode 100644
index 0000000..561b851
--- /dev/null
+++ b/docs/modules/migrated/pages/server/install/guice-jpa.md.adoc
@@ -0,0 +1,55 @@
+= Guice-JPA installation guide
+
+== Building
+
+=== Requirements
+
+* Java 11 SDK
+* Maven 3 (optional)
+
+=== Download the artifacts
+
+Download james-jpa-guice-3.3.0.zip from http://james.apache.org/download.cgi#Apache_James_Server[the download page] and extract it.
+
+=== (alternative) Building the artifacts
+
+A usual compilation using Maven of this https://github.com/apache/james-project[Git repository content] will produce two artifacts in the server/container/guice/jpa-guice/target directory:
+
+* james-server-jpa-guice.jar
+* james-server-jpa-guice.lib
+
+You can for example run in the base of this git repository:
+
+----
+mvn clean install
+----
+
+To run James, you have to create a directory containing the required configuration files.
+
+James requires the configuration to be in a subfolder of the working directory called *conf*.
+You can get a sample directory for configuration from https://github.com/apache/james-project/tree/master/dockerfiles/run/guice/jpa/destination/conf[dockerfiles/run/guice/jpa/destination/conf].
+You might need to adapt it to your needs.
+
+== Running
+
+=== James Launch
+
+Edit the configuration to match your needs.
+
+You also need to generate a keystore in your conf folder with the following command:
+
+[source,bash]
+----
+$ keytool -genkey -alias james -keyalg RSA -keystore conf/keystore
+----
+
+Once everything is set up, you just have to run the jar with:
+
+[source,bash]
+----
+$ java -classpath 'james-server-jpa-guice.jar:james-server-jpa-guice.lib/*' \
+    -javaagent:james-server-jpa-guice.lib/openjpa-3.0.0.jar \
+    -Dlogback.configurationFile=conf/logback.xml \
+    -Dworking.directory=. \
+    org.apache.james.JPAJamesServerMain
+----
diff --git a/docs/modules/migrated/pages/server/manage-cli.md.adoc b/docs/modules/migrated/pages/server/manage-cli.md.adoc
new file mode 100644
index 0000000..d9decc9
--- /dev/null
+++ b/docs/modules/migrated/pages/server/manage-cli.md.adoc
@@ -0,0 +1,337 @@
+= Manage James via the Command Line
+
+With any wiring, James is packed with a command line client.
+
+To use it, enter, for the Spring distribution:
+
+----
+./bin/james-cli.sh -h 127.0.0.1 -p 9999 COMMAND
+----
+
+And for Guice distributions:
+
+----
+java -jar /root/james-cli.jar -h 127.0.0.1 -p 9999 COMMAND
+----
+
+This document will explain the available options for *COMMAND*.
+
+NOTE: the command line before *COMMAND* will be documented as _\{cli}_.
+
+== Navigation menu
+
+* <<Manage_Domains,Manage Domains>>
+* <<Managing_users,Managing users>>
+* <<Managing_mailboxes,Managing mailboxes>>
+* <<Adding_a_message_in_a_mailbox,Adding a message in a mailbox>>
+* <<Managing_mappings,Managing mappings>>
+* <<Manage_quotas,Manage quotas>>
+* <<Re-indexing,Re-indexing>>
+* <<Sieve_scripts_quota,Sieve scripts quota>>
+* <<Switching_of_mailbox_implementation,Switching of mailbox implementation>>
+
+== Manage Domains
+
+Domains represent the domain names handled by your server.
+
+You can add a domain:
+
+----
+{cli} AddDomain domain.tld
+----
+
+You can remove a domain:
+
+----
+{cli} RemoveDomain domain.tld
+----
+
+(Note: associated users are not removed automatically)
+
+Check if a domain is handled:
+
+----
+{cli} ContainsDomain domain.tld
+----
+
+And list your domains:
+
+----
+{cli} ListDomains
+----
+
+== Managing users
+
+NOTE: the following commands are explained with virtual hosting turned on.
+
+Users are accounts on the mail server.
+James can maintain mailboxes for them.
+
+You can add a user:
+
+----
+{cli} AddUser user@domain.tld password
+----
+
+NOTE: the domain used should have been previously created.
+
+You can delete a user:
+
+----
+{cli} RemoveUser user@domain.tld
+----
+
+(Note: associated mailboxes are not removed automatically)
+
+And change a user password:
+
+----
+{cli} SetPassword user@domain.tld password
+----
+
+NOTE: All these write operations can not be performed on LDAP backend, as the implementation is read-only.
+
+Finally, you can list users:
+
+----
+{cli} ListUsers
+----
+
+=== Virtual hosting
+
+James supports virtualhosting.
+
+* If set to true in the configuration, then the username is the full mail address.
+
+The domains then become a part of the user.
+
+_usera@domaina.com_ and _usera@domainb.com_ on a mail server with _domaina.com_ and _domainb.com_ configured are mail addresses that belong to different users.
+
+* If set to false in the configurations, then the username is the mail address local part.
+
+It means that a user is automatically created for all the domains configured on your server.
+
+_usera@domaina.com_ and _usera@domainb.com_ on a mail server with _domaina.com_ and _domainb.com_ configured are mail addresses that belong to the same user.
+
+Here are some sample commands for managing users when virtual hosting is turned off:
+
+----
+{cli} AddUser user password
+{cli} RemoveUser user
+{cli} SetPassword user password
+----
+
+== Managing mailboxes
+
+An administrator can perform some basic operation on user mailboxes.
+
+Note on mailbox formatting: mailboxes are composed of three parts.
+
+* The namespace, indicating what kind of mailbox it is (shared or not).
+The value for user mailboxes is #private.
+Note that for now no other values are supported, as James does not support shared mailboxes.
+* The username as stated above, depending on the virtual hosting value.
+* And finally the mailbox name.
+Be aware that '.' serves as mailbox hierarchy delimiter.
+
+An administrator can delete all of the mailboxes of a user, which is not done automatically when removing a user (to avoid data loss):
+
+----
+{cli} DeleteUserMailboxes user@domain.tld
+----
+
+They can delete a specific mailbox:
+
+----
+{cli} DeleteMailbox #private user@domain.tld INBOX.toBeDeleted
+----
+
+They can list the mailboxes of a specific user:
+
+----
+{cli} ListUserMailboxes user@domain.tld
+----
+
+And finally can create a specific mailbox:
+
+----
+{cli} CreateMailbox #private user@domain.tld INBOX.newFolder
+----
+
+== Adding a message in a mailbox
+
+The administrator can use the CLI to add a message in a mailbox.
+This can be done using:
+
+----
+{cli} ImportEml #private user@domain.tld INBOX.newFolder /full/path/to/file.eml
+----
+
+This command will add a message having the content specified in file.eml (which needs to be in the EML format).
+It will be added to the INBOX.newFolder mailbox belonging to the user user@domain.tld.
+
+== Managing mappings
+
+A mapping is a recipient rewriting rule.
+There are several kinds of rewriting rules:
+
+* address mapping: rewrite a given mail address into another one.
+* regex mapping.
+
+You can manage address mapping like (redirects email from fromUser@fromDomain.tld to redirected@domain.new, then deletes the mapping):
+
+----
+{cli} AddAddressMapping fromUser fromDomain.tld redirected@domain.new
+{cli} RemoveAddressMapping fromUser fromDomain.tld redirected@domain.new
+----
+
+You can manage regex mapping like this:
+
+----
+{cli} AddRegexMapping redirected domain.new .*@domain.tld
+{cli} RemoveRegexMapping redirected domain.new .*@domain.tld
+----
+
+You can view mapping for a mail address:
+
+----
+{cli} ListUserDomainMappings user domain.tld
+----
+
+And all mappings defined on the server:
+
+----
+{cli} ListMappings
+----
+
+== Manage quotas
+
+Quotas are limitations on a group of mailboxes.
+They can limit the *size* or the *messages count* in a group of mailboxes.
+
+By default, James groups mailboxes by user (but this can be overridden), and labels each group with a quotaroot.
+
+To get the quotaroot a given mailbox belongs to:
+
+----
+{cli} GetQuotaroot #private user@domain.tld INBOX
+----
+
+Then you can get the specific quotaroot limitations.
+
+For the number of messages:
+
+----
+{cli} GetMessageCountQuota quotaroot
+----
+
+And for the storage space available:
+
+----
+{cli} GetStorageQuota quotaroot
+----
+
+You can see the maximum allowed for these values:
+
+For the number of messages:
+
+----
+{cli} GetMaxMessageCountQuota quotaroot
+----
+
+And for the storage space available:
+
+----
+{cli} GetMaxStorageQuota quotaroot
+----
+
+You can also specify maximums for these values.
+
+For the number of messages:
+
+----
+{cli} SetMaxMessageCountQuota quotaroot value
+----
+
+And for the storage space available:
+
+----
+{cli} SetMaxStorageQuota quotaroot value
+----
+
+With value being an integer.
+Please note the use of units for storage (K, M, G).
+For instance:
+
+----
+{cli} SetMaxStorageQuota someone@apache.org 4G
+----
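As an illustration of the unit handling, here is a small hypothetical parser for such values; the 1024-based multipliers and the parsing rules are assumptions of this sketch, not a documented guarantee of the James CLI.

```java
public class QuotaSizeParser {
    // Hypothetical parser for CLI storage quota values such as "4G", "512M",
    // or a bare integer like "100". The 1024-based multipliers below are an
    // assumption of this sketch.
    static long parse(String value) {
        char unit = Character.toUpperCase(value.charAt(value.length() - 1));
        long multiplier;
        switch (unit) {
            case 'K': multiplier = 1024L; break;
            case 'M': multiplier = 1024L * 1024; break;
            case 'G': multiplier = 1024L * 1024 * 1024; break;
            default:  multiplier = 1L;
        }
        // A trailing digit means no unit suffix was given.
        String digits = Character.isDigit(unit) ? value : value.substring(0, value.length() - 1);
        return Long.parseLong(digits) * multiplier;
    }

    public static void main(String[] args) {
        System.out.println(parse("4G")); // 4294967296
        System.out.println(parse("100")); // 100
    }
}
```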
+
+Moreover, James allows you to specify global maximum values, at the server level.
+Note: syntax is similar to what was exposed previously.
+
+----
+{cli} SetGlobalMaxMessageCountQuota value
+{cli} GetGlobalMaxMessageCountQuota
+{cli} SetGlobalMaxStorageQuota value
+{cli} GetGlobalMaxStorageQuota
+----
+
+== Re-indexing
+
+James allows you to index your emails in a search engine, to make searches faster.
+Both ElasticSearch and Lucene are supported.
+
+For various reasons, you might want to re-index your mails (inconsistencies across datastores, migrations).
+
+To re-index all mails of all mailboxes of all users, type:
+
+----
+{cli} ReindexAll
+----
+
+And for a precise mailbox:
+
+----
+{cli} Reindex #private user@domain.tld INBOX
+----
+
+== Sieve scripts quota
+
+James implements Sieve (RFC-5228).
+Your users can then write scripts and upload them to the server.
+Thus they can define the desired behavior upon email reception.
+James defines a Sieve mailet for this, and stores Sieve scripts.
+You can update them via the ManageSieve protocol, or via the ManageSieveMailet.
+
+You can define quotas for the total size of Sieve scripts, per user.
+
+Syntax is similar to what was exposed for quotas.
+For defaults values:
+
+----
+{cli} GetSieveQuota
+{cli} SetSieveQuota value
+{cli} RemoveSieveQuota
+----
+
+And for specific user quotas:
+
+----
+{cli} GetSieveUserQuota user@domain.tld
+{cli} SetSieveQuota user@domain.tld value
+{cli} RemoveSieveUserQuota user@domain.tld
+----
+
+== Switching of mailbox implementation
+
+Migration is experimental for now.
+You would need to customize *Spring* configuration to add a new mailbox manager with a different bean name.
+
+You can then copy data across mailbox managers using:
+
+----
+{cli} CopyMailbox srcBean dstBean
+----
+
+You will then need to reconfigure James to use the new mailbox manager.
diff --git a/docs/modules/migrated/pages/server/manage-guice-distributed-james.md.adoc b/docs/modules/migrated/pages/server/manage-guice-distributed-james.md.adoc
new file mode 100644
index 0000000..ca0809f
--- /dev/null
+++ b/docs/modules/migrated/pages/server/manage-guice-distributed-james.md.adoc
@@ -0,0 +1,597 @@
+= Managing Guice distributed James
+
+This guide aims to be an entry point to the James documentation for users managing a distributed Guice James server.
+
+It includes:
+
+* Simple architecture explanations
+* Propose some diagnostics for some common issues
+* Present procedures that can be set up to address these issues
+
+To avoid duplicating information, existing documentation is linked where relevant.
+
+Please note that this product is under active development, should be considered experimental, and thus targets advanced users.
+
+== Table of contents
+
+* <<Overall_architecture,Overall architecture>>
+* <<Basic_Monitoring,Basic Monitoring>>
+* <<Cassandra_table_level_configuration,Cassandra table level configuration>>
+* <<Deleted_Messages_Vault,Deleted Messages Vault>>
+* <<Elasticsearch_Indexing,ElasticSearch Indexing>>
+* <<Mailbox_Event_Bus,Mailbox Event Bus>>
+* <<Mail_Processing,Mail Processing>>
+* <<Mail_Queue,Mail Queue>>
+* <<Setting_Cassandra_user_permissions,Setting Cassandra user permissions>>
+* <<Solving_cassandra_inconsistencies,Solving cassandra inconsistencies>>
+* <<Updating_Cassandra_schema_version,Updating Cassandra schema version>>
+
+== Overall architecture
+
+Guice distributed James server intends to provide a horizontally scalable email server.
+
+In order to achieve this goal, this product leverages the following technologies:
+
+* *Cassandra* for meta-data storage
+* *ObjectStorage* (S3) for binary content storage
+* *ElasticSearch* for search
+* *RabbitMQ* for messaging
+
+A https://github.com/apache/james-project/blob/master/dockerfiles/run/docker-compose.yml[docker-compose] file is available to let you quickly deploy this product locally.
+
+== Basic Monitoring
+
+A toolbox is available to help an administrator diagnose issues:
+
+* <<Structured_logging_into_Kibana,Structured logging into Kibana>>
+* <<Metrics_graphs_into_Grafana,Metrics graphs into Grafana>>
+* <<Webadmin_Healthchecks,WebAdmin HealthChecks>>
+
+=== Structured logging into Kibana
+
+Read this page regarding link:monitor-logging.html#Guice_products_and_logging[setting up structured logging].
+
+We recommend closely monitoring *ERROR* and *WARNING* logs.
+Those logs should be considered abnormal.
+
+If you encounter some suspicious logs:
+
+* If you have any doubt about the log being caused by a bug in James source code, please reach out via the bug tracker, the user mailing list or our Gitter channel (see our http://james.apache.org/#second[community page])
+* They can be due to insufficient performance of backing services (e.g. Cassandra timeouts).
+In such cases we advise you to conduct a close review of performance at that tier.
+
+Leveraging filters in the Kibana discover view can help filter out "already known" frequently occurring logs.
+
+When reporting ERROR or WARNING logs, consider adding the full logs and related data (e.g. the raw content of a mail triggering the issue) to the bug report in order to ease resolution.
+
+=== Metrics graphs into Grafana
+
+James keeps track of various metrics and allows you to visualize them easily.
+
+Read this page for link:metrics.html[explanations on metrics].
+
+Here is a list of https://github.com/apache/james-project/tree/master/grafana-reporting[available metric boards].
+
+Configuration of link:config-elasticsearch.html[ElasticSearch metric exporting] allows a direct display within https://grafana.com/[Grafana].
+
+Monitoring these graphs on a regular basis allows diagnosing some performance issues early.
+
+If some metrics seem abnormally slow despite in-depth database performance tuning, feedback is appreciated on the bug tracker, the user mailing list or our Gitter channel (see our http://james.apache.org/#second[community page]).
+Any additional details categorizing the slowness are appreciated as well (details of the slow requests, for instance).
+
+=== WebAdmin HealthChecks
+
+The James WebAdmin API allows running health checks for a quick health overview.
+
+Here is the related link:manage-webadmin.html#HealthCheck[webadmin documentation].
+
+Here are the available checks alongside the insight they offer:
+
+* *Cassandra backend*: Cassandra storage.
+Ensure queries can be executed on the connection James uses.
+* *ElasticSearch Backend*: ElasticSearch storage.
+Triggers an ElasticSearch health request on indices James uses.
+* *EventDeadLettersHealthCheck*: EventDeadLetters checking.
+* *RabbitMQ backend*: RabbitMQ messaging.
+Verifies an open connection and an open channel are well available.
+* *Guice application lifecycle*: Ensures James Guice successfully started, and is up.
+Logs should contain explanations if James did not start well.
+* *MessageFastViewProjection*: Follows MessageFastViewProjection cache miss rates and warns if it is below 10%.
+If this projection is missing, this results in performance issues for JMAP GetMessages list requests.
+WebAdmin offers a link:manage-webadmin.html#Recomputing_Global_JMAP_fast_message_view_projection[global] and link:manage-webadmin.html#Recomputing_Global_JMAP_fast_message_view_projection[per user] projection re-computation.
+Note that as computation is asynchronous, this projection can be slightly out of sync on a normally behaving server.
+
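+As a quick sketch, these checks can be queried over HTTP (assuming WebAdmin listens on `localhost:8000`; adapt the host, port and component name to your deployment):
+
+----
+# Run all health checks and get an aggregated result
+curl -XGET http://localhost:8000/healthcheck
+
+# Run a single health check (the component name is URL-encoded)
+curl -XGET http://localhost:8000/healthcheck/checks/Cassandra%20backend
+----
+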
+== Mail Processing
+
+Mail processing allows taking business decisions asynchronously on received emails.
+
+Here are its components:
+
+* The `spooler` takes mail out of the mailQueue and executes mail processing within the `mailet container`.
+* The `mailet container` synchronously executes the user defined logic.
+This 'logic' is written through the use of   `mailet`, `matcher` and `processor`.
+* A `mailet` represents an action: mail modification, envelope modification, a side effect, or stopping processing.
+* A `matcher` represents a condition to execute a mailet.
+* A `processor` is a flow of `matcher`/`mailet` pairs executed sequentially.
+The `ToProcessor` mailet is a `goto` instruction to start executing another `processor`.
+* A `mail repository` allows storage of a mail as part of its processing.
+Standard configuration relies on the following mail repositories:
+ ** `cassandra://var/mail/error/` : unexpected errors that occurred during mail processing.
+Emails impacted by    performance related exceptions, or logical bug within James code are typically stored here.
+These mails could be    reprocessed once the cause of the error is fixed.
+The `Mail.error` field can help diagnose the issue.
+Correlation    with logs can be achieved via the use of the `Mail.name` field.
+ ** `cassandra://var/mail/address-error/` : mail addressed to a non-existing recipient of a handled local domain.
+These mails could be reprocessed once the user is created, for instance.
+ ** `cassandra://var/mail/relay-denied/` : mail for which relay was denied: missing authentication can, for instance, be a cause.
+In addition to preventing disasters upon misconfiguration, an email review of this mail repository can help refine a host spammer blacklist.
+ ** `cassandra://var/mail/rrt-error/` : a runtime error occurred upon Recipient Rewriting.
+This is typically due to a loop.
+We recommend verifying user mappings via link:manage-webadmin.html#User_Mappings[User Mappings webadmin API]    then once identified break the loop by removing some Recipient Rewrite Table entry via the    link:manage-webadmin.html#Removing_an_alias_of_an_user[Delete Alias],    link:manage-webadmin.html#Removing_a_group_member[Delete Group member],    link:manage-webadmin.html#Removing_a_destination_of_a_forward[Delete forward],    link:manage-webadmin.html#Remove_an_address_mapping[Dele [...]
+The `Mail.error` field can    help diagnose the issue as well.
+Then once the root cause has been addressed, the mail can be reprocessed.
+
+Read link:config-mailetcontainer.html[this] to discover mail processing configuration, including error management.
+
+Currently, an administrator can monitor mail processing failures through `ERROR` log review.
+We also recommend watching INFO logs in Kibana using the `org.apache.james.transport.mailets.ToProcessor` value as their `logger`.
+Metrics about mail repository size, and the corresponding Grafana boards, are yet to be contributed.
+
+WebAdmin exposes all utilities for  link:manage-webadmin.html#Reprocessing_mails_from_a_mail_repository[reprocessing all mails in a mail repository] or  link:manage-webadmin.html#Reprocessing_a_specific_mail_from_a_mail_repository[reprocessing a single mail in a mail repository].
+
+Also, one can decide to  link:manage-webadmin.html#Removing_all_mails_from_a_mail_repository[delete all the mails of a mail repository]  or link:manage-webadmin.html#Removing_a_mail_from_a_mail_repository[delete a single mail of a mail repository].
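+
+As an illustration, assuming WebAdmin listens on `localhost:8000` and using the URL-encoded repository path `var%2Fmail%2Ferror`, reprocessing or clearing that repository could look like this sketch:
+
+----
+# Reprocess all mails of the error mail repository
+curl -XPATCH 'http://localhost:8000/mailRepositories/var%2Fmail%2Ferror/mails?action=reprocess'
+
+# Or delete all mails of that repository
+curl -XDELETE 'http://localhost:8000/mailRepositories/var%2Fmail%2Ferror/mails'
+----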
+
+Performance of mail processing can be monitored via the  https://github.com/apache/james-project/blob/master/grafana-reporting/MAILET-1490071694187-dashboard.json[mailet grafana board]  and https://github.com/apache/james-project/blob/master/grafana-reporting/MATCHER-1490071813409-dashboard.json[matcher grafana board].
+
+== Mailbox Event Bus
+
+James relies on an event bus system to enrich mailbox capabilities.
+Each operation performed on a mailbox will trigger related events, which can be processed asynchronously by potentially any James node in a distributed system.
+
+Many different kinds of events can be triggered during a mailbox operation, such as:
+
+* `MailboxEvent`: event related to an operation regarding a mailbox:
+ ** `MailboxDeletion`: a mailbox has been deleted
+ ** `MailboxAdded`: a mailbox has been added
+ ** `MailboxRenamed`: a mailbox has been renamed
+ ** `MailboxACLUpdated`: a mailbox got its rights and permissions updated
+* `MessageEvent`: event related to an operation regarding a message:
+ ** `Added`: messages have been added to a mailbox
+ ** `Expunged`: messages have been expunged from a mailbox
+ ** `FlagsUpdated`: messages had their flags updated
+ ** `MessageMoveEvent`: messages have been moved from a mailbox to another
+* `QuotaUsageUpdatedEvent`: event related to quota update
+
+Mailbox listeners can register themselves on this event bus system to be called when an event is fired, allowing different kinds of extra operations on the system, like:
+
+* Current quota calculation
+* Message indexation with ElasticSearch
+* Mailbox annotations cleanup
+* Ham/spam reporting to SpamAssassin
+* ...
+
+It is possible for the administrator of James to define the mailbox listeners he wants to use, by adding them in the https://github.com/apache/james-project/blob/master/dockerfiles/run/guice/cassandra-rabbitmq/destination/conf/listeners.xml[listeners.xml] configuration file.
+It is also possible to add your own custom mailbox listeners.
+This enables enhancing the capabilities of James as a Mail Delivery Agent.
+You can get more information about those link:config-listeners.html[here].
+
+Currently, an administrator can monitor listeners failures through `ERROR` log review.
+Metrics regarding mailbox listeners can be monitored via https://github.com/apache/james-project/blob/master/grafana-reporting/MailboxListeners-1528958667486-dashboard.json[mailbox_listeners grafana board]  and https://github.com/apache/james-project/blob/master/grafana-reporting/MailboxListeners%20rate-1552903378376.json[mailbox_listeners_rate grafana board].
+
+Upon exceptions, a bounded number of retries are performed (with exponential backoff delays).
+If after those retries the listener is still failing to perform its operation, then the event will be stored in the  link:manage-webadmin.html#Event_Dead_Letter[Event Dead Letter].
+This API allows diagnosing issues, as well as redelivering the events.
+
+To check whether you have undelivered events in your system, you can first run the associated link:manage-webadmin.html#Event_Dead_Letter[event dead letter health check].
+You can explore Event Dead Letter content through WebAdmin.
+For this, link:manage-webadmin.html#Listing_mailbox_listener_groups[list mailbox listener groups]: you will get a list of groups back, allowing you to check whether each contains registered events by link:manage-webadmin.html#Listing_failed_events[listing their failed events].
+
+If you get failed events IDs back, you can as well link:manage-webadmin.html#Getting_event_details[check their details].
+
+An easy way to solve this is to then trigger the link:manage-webadmin.html#Redeliver_all_events[redeliver all events] task.
+It will start reprocessing all the failed events registered in event dead letters.
+
+If for some other reason you don't need to redeliver all events, you have more fine-grained operations allowing you to link:manage-webadmin.html#Redeliver_group_events[redeliver group events] or even just link:manage-webadmin.html#Redeliver_a_single_event[redeliver a single event].
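+
+As a sketch, the dead letter calls look like the following (assuming WebAdmin on `localhost:8000`; the group name below is a placeholder to replace with one returned by the listing):
+
+----
+# List groups having failed events
+curl -XGET http://localhost:8000/events/deadLetter/groups
+
+# List failed events for a given group
+curl -XGET http://localhost:8000/events/deadLetter/groups/org.example.SomeListenerGroup
+
+# Redeliver all events stored in event dead letter
+curl -XPOST 'http://localhost:8000/events/deadLetter?action=reDeliver'
+----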
+
+== ElasticSearch Indexing
+
+A projection of messages is maintained in ElasticSearch via a listener plugged into the mailbox event bus in order to enable search features.
+
+You can find more information about ElasticSearch configuration link:config-elasticsearch.html[here].
+
+=== Usual troubleshooting procedures
+
+As explained in the <<Mailbox_Event_Bus,Mailbox Event Bus>> section, processing those events can fail sometimes.
+
+Currently, an administrator can monitor indexation failures through `ERROR` log review.
+You can as well link:manage-webadmin.html#Listing_failed_events[list failed events] by looking at the group called `org.apache.james.mailbox.elasticsearch.events.ElasticSearchListeningMessageSearchIndex$ElasticSearchListeningMessageSearchIndexGroup`.
+A first on-the-fly solution could be to just <<Mailbox_Event_Bus,redeliver those group events with event dead letter>>.
+
+If the event storage in dead-letters fails (for instance in the face of Cassandra storage exceptions),  then you might need to use our WebAdmin reIndexing tasks.
+
+From there, you have multiple choices.
+You can link:manage-webadmin.html#ReIndexing_all_mails[reIndex all mails], link:manage-webadmin.html#ReIndexing_a_mailbox_mails[reIndex mails from a mailbox] or even just link:manage-webadmin.html#ReIndexing_a_single_mail[reIndex a single mail].
+
+When checking the result of a reIndexing task, you might have failed reprocessed mails.
+You can still use the task ID to link:manage-webadmin.html#Fixing_previously_failed_ReIndexing[reprocess previously failed reIndexing mails].
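+
+For instance, a full reIndexing could be triggered and followed this way (assuming WebAdmin on `localhost:8000`; `taskId` is a placeholder for the id returned by the submission):
+
+----
+# Trigger a full reIndexing; the response contains a taskId
+curl -XPOST 'http://localhost:8000/mailboxes?task=reIndex'
+
+# Follow the progress of the submitted task
+curl -XGET http://localhost:8000/tasks/taskId
+----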
+
+=== On the fly ElasticSearch Index setting update
+
+Sometimes you might need to update index settings.
+Cases when an administrator might want to update index settings include:
+
+* Scaling out: increasing the shard count might be needed.
+* Changing string analysers, for instance to target another language
+* etc.
+
+In order to achieve such a procedure, you need to:
+
+* https://www.elastic.co/guide/en/elasticsearch/reference/6.3/indices-create-index.html[Create the new index] with the right settings and mapping
+* James uses two aliases on the mailbox index: one for reading (`mailboxReadAlias`) and one for writing (`mailboxWriteAlias`).
+First https://www.elastic.co/guide/en/elasticsearch/reference/6.3/indices-aliases.html[add an alias] `mailboxWriteAlias` to that new index, so that James now writes to both the old and new indexes, while still reading only from the old one
+* Now trigger a https://www.elastic.co/guide/en/elasticsearch/reference/6.3/docs-reindex.html[reindex] from the old index to the new one (this actively relies on `_source` field being present)
+* When this is done, add the `mailboxReadAlias` alias to the new index
+* Now that the migration to the new index is done, you can  https://www.elastic.co/guide/en/elasticsearch/reference/6.3/indices-delete-index.html[drop the old index]
+* You might as well want to modify the James configuration file https://github.com/apache/james-project/blob/master/dockerfiles/run/guice/cassandra-rabbitmq/destination/conf/elasticsearch.properties[elasticsearch.properties] by setting the parameter `elasticsearch.index.mailbox.name` to the name of your new index.
+This avoids James re-creating the old index upon restart.
+
+NOTE: Keep in mind that reindexing can be a very long operation depending on the volume of mails you have stored.
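+
+The steps above can be sketched with the ElasticSearch REST API as follows (assuming ElasticSearch on `localhost:9200`, an old index named `mailbox` and a new one named `mailbox_v2`; adapt the names to your setup):
+
+----
+# Route writes to the new index as well
+curl -XPOST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' \
+  -d '{"actions": [{"add": {"index": "mailbox_v2", "alias": "mailboxWriteAlias"}}]}'
+
+# Copy documents from the old index into the new one
+curl -XPOST 'http://localhost:9200/_reindex' -H 'Content-Type: application/json' \
+  -d '{"source": {"index": "mailbox"}, "dest": {"index": "mailbox_v2"}}'
+
+# Once the reindex is done, route reads to the new index
+curl -XPOST 'http://localhost:9200/_aliases' -H 'Content-Type: application/json' \
+  -d '{"actions": [{"add": {"index": "mailbox_v2", "alias": "mailboxReadAlias"}}]}'
+----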
+
+== Solving cassandra inconsistencies
+
+The Cassandra backend uses data duplication to work around Cassandra query limitations.
+However, Cassandra does not support transactions when writing to several tables, which can lead to consistency issues for a given piece of data.
+The consequence could be that the data is in a transient state (that should never appear outside of the system).
+
+Because of the lack of transactions, it is hard to prevent these kinds of issues.
+We have developed some features to fix some existing Cassandra inconsistency issues that have been reported to James.
+
+Here is the list of known inconsistencies:
+
+* <<Jmap_message_fast_view_projections,Jmap message fast view projections>>
+* <<Mailboxes,Mailboxes>>
+* <<Mailboxes_counters,Mailboxes Counters>>
+* <<Messages,Messages>>
+* <<Quotas,Quotas>>
+* <<Rrt_RecipientRewriteTable_mapping_sources,RRT (RecipientRewriteTable) mapping sources>>
+
+=== Jmap message fast view projections
+
+When you read a JMAP message, some calculated properties are expected to be fast to retrieve, like `preview` or `hasAttachment`.
+James achieves this by pre-calculating and storing them in a caching table (`message_fast_view_projection`).
+Missing caches are populated on message reads and will temporarily decrease performance.
+
+==== How to detect the outdated projections
+
+You can watch the `MessageFastViewProjection` health check documented at link:manage-webadmin.html#Check_all_components[webadmin documentation].
+It provides a check based on the ratio of missed projection reads.
+
+==== How to solve
+
+Since the MessageFastViewProjection is self-healing, you should only be concerned if the health check keeps returning `degraded` for a while.
+In that case, looking at the James logs may provide more clues.
+
+=== Mailboxes
+
+`mailboxPath` and `mailbox` tables share common fields like `mailboxId` and mailbox `name`.
+A successful operation of creating/renaming/deleting a mailbox has to succeed at updating both the `mailboxPath` and `mailbox` tables.
+Any failure on creating/updating/deleting records in `mailboxPath` or `mailbox` can produce inconsistencies.
+
+==== How to detect the inconsistencies
+
+Suspicious `MailboxNotFoundException` entries in your logs can indicate this inconsistency.
+Currently there is no dedicated tool for detection; we recommend scheduling the SolveInconsistencies task below for the mailbox object on a regular basis, avoiding peak traffic, in order to address both inconsistency diagnostics and fixes.
+
+==== How to solve
+
+An admin can run the offline webadmin link:manage-webadmin.html#Fixing_mailboxes_inconsistencies[solve Cassandra mailbox object inconsistencies task] in order to sanitize the mailbox denormalization.
+
+In order to ensure being offline, stop the traffic on SMTP, JMAP and IMAP ports, for example via re-configuration or  firewall rules.
+
+=== Mailboxes Counters
+
+James maintains a per-mailbox projection for message count and unseen message count.
+Failures during the denormalization process will lead to incorrect results being returned.
+
+==== How to detect the inconsistencies
+
+Incorrect message count/unseen message count could be seen in the `Mail User Agent` (IMAP or JMAP).
+Invalid values are reported in the logs as warnings by the class `org.apache.james.mailbox.model.MailboxCounters` with the message prefix `Invalid mailbox counters`.
+
+==== How to solve
+
+Execute the link:manage-webadmin.html#Recomputing_mailbox_counters[recompute Mailbox counters task].
+This task is not concurrent-safe.
+Concurrent increments & decrements will be ignored during a single mailbox processing.
+Re-running this task may eventually return the correct result.
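+
+As a sketch, assuming WebAdmin on `localhost:8000`:
+
+----
+# Recompute all mailbox counters; a taskId is returned to follow progress
+curl -XPOST 'http://localhost:8000/mailboxes?task=RecomputeMailboxCounters'
+----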
+
+=== Messages
+
+Messages are denormalized and stored in both `imapUidTable` (source of truth) and `messageIdTable`.
+Failure in the denormalization  process will cause inconsistencies between the two tables.
+
+==== How to detect the inconsistencies
+
+User can see a message in JMAP but not in IMAP, or mark a message as 'SEEN' in JMAP but the message flag is still unchanged in IMAP.
+
+==== How to solve
+
+Execute the link:manage-webadmin.html#Fixing_messages_inconsistencies[solve Cassandra message inconsistencies task].
+This task is not concurrent-safe.
+User actions concurrent to the inconsistency fixing task could result in new inconsistencies being created.
+However, the source of truth `imapUidTable` will not be affected, and thus re-running this task may eventually fix all issues.
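+
+As a sketch, assuming WebAdmin on `localhost:8000`:
+
+----
+# Solve message denormalization inconsistencies, imapUidTable being the source of truth
+curl -XPOST 'http://localhost:8000/messages?task=SolveInconsistencies'
+----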
+
+=== Quotas
+
+Users can monitor the amount of space and the message count they are allowed to use, and what they are effectively using.
+James relies on an event bus and Cassandra to track the quota of a user.
+Upon Cassandra failure, this value can be incorrect.
+
+==== How to detect the inconsistencies
+
+Incorrect quotas could be seen in the `Mail User Agent` (IMAP or JMAP).
+
+==== How to solve
+
+Execute the link:manage-webadmin.html#Recomputing_current_quotas_for_users[recompute Quotas counters task].
+This task is not concurrent-safe.
+Concurrent operations will result in an invalid quota being persisted.
+Re-running this task may eventually return the correct result.
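+
+As a sketch, assuming WebAdmin on `localhost:8000`:
+
+----
+# Recompute current quotas for all users
+curl -XPOST 'http://localhost:8000/quota/users?task=RecomputeCurrentQuotas'
+----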
+
+=== RRT (RecipientRewriteTable) mapping sources
+
+`rrt` and `mappings_sources` tables store information about address mappings.
+The source of truth is `rrt`, and `mappings_sources` is the projection table containing all mapping sources.
+
+==== How to detect the inconsistencies
+
+Right now there is no tool for detecting this; we are proposing a https://issues.apache.org/jira/browse/JAMES-3069[development plan].
+In the meantime, the recommendation is to execute the `SolveInconsistencies` task below on a regular basis.
+
+==== How to solve
+
+Execute the Cassandra mapping `SolveInconsistencies` task described in the link:manage-webadmin.html#Operations_on_mappings_sources[webadmin documentation].
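+
+As a sketch, assuming WebAdmin on `localhost:8000`:
+
+----
+# Rebuild the mappings_sources projection from the rrt source of truth
+curl -XPOST 'http://localhost:8000/mappings?action=SolveInconsistencies'
+----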
+
+== Setting Cassandra user permissions
+
+When a Cassandra cluster is serving more than one James cluster, the keyspaces need isolation.
+It can be achieved by configuring James server with credentials preventing access or modification of other keyspaces.
+
+We recommend not using the initial Cassandra admin user; instead, provide a different user with a subset of permissions for each application.
+
+=== Prerequisites
+
+We are going to use a Cassandra super user to create roles and grant permissions to them.
+To do that, Cassandra requires you to log in via username/password authentication and to enable authorization in the Cassandra configuration file.
+
+For example:
+
+----
+echo -e "\nauthenticator: PasswordAuthenticator" >> /etc/cassandra/cassandra.yaml
+echo -e "\nauthorizer: org.apache.cassandra.auth.CassandraAuthorizer" >> /etc/cassandra/cassandra.yaml
+----
+
+=== Prepare Cassandra roles & keyspaces for James
+
+==== Create a role
+
+Have a look at the http://cassandra.apache.org/doc/3.11.3/cql/security.html[cassandra documentation] section `CREATE ROLE` for more information.
+
+E.g.
+
+----
+CREATE ROLE james_one WITH PASSWORD = 'james_one' AND LOGIN = true;
+----
+
+==== Create a keyspace
+
+Have a look at the http://cassandra.apache.org/doc/3.11.3/cql/ddl.html[cassandra documentation] section `CREATE KEYSPACE` for more information.
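+
+E.g. (the keyspace name, replication strategy and replication factor below are examples to adapt to your topology):
+
+----
+CREATE KEYSPACE james_one_keyspace
+    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};
+----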
+
+==== Grant permissions on created keyspace to the role
+
+The role to be used by James needs to have full rights on the keyspace that James is using.
+Assuming the keyspace name is `james_one_keyspace` and the role is `james_one`:
+
+----
+GRANT CREATE ON KEYSPACE james_one_keyspace TO james_one; // Permission to create tables on the appointed keyspace
+GRANT SELECT ON KEYSPACE james_one_keyspace TO james_one; // Permission to select from tables on the appointed keyspace
+GRANT MODIFY ON KEYSPACE james_one_keyspace TO james_one; // Permission to update data in tables on the appointed keyspace
+----
+
+WARNING: The granted role doesn't have the right to create keyspaces; thus, if you haven't created the keyspace, the James server will fail to start, which is expected.
+
+*Tips*
+
+Since all the Cassandra roles used by different James deployments are supposed to have the same set of permissions, you can reduce the work by creating a base role, like `typical_james_role`, with all the necessary permissions.
+After that, for each James deployment, create a new role and grant `typical_james_role` to the newly created one.
+Note that once a base role is updated (granting or revoking rights), all roles it was granted to are automatically updated.
+
+E.g.
+
+----
+CREATE ROLE james1 WITH PASSWORD = 'james1' AND LOGIN = true;
+GRANT typical_james_role TO james1;
+
+CREATE ROLE james2 WITH PASSWORD = 'james2' AND LOGIN = true;
+GRANT typical_james_role TO james2;
+----
+
+==== Revoke harmful permissions from the created role
+
+We want a specific role that cannot describe or query the information of other keyspaces or tables used by another application.
+By default, Cassandra grants every created role the right to describe any keyspace and table.
+There is no configuration that affects this behavior.
+Consequently, you have to accept that your data models are still exposed to anyone having credentials to Cassandra.
+
+For more information, have a look at http://cassandra.apache.org/doc/3.11.3/cql/security.html[cassandra documentation] section `REVOKE PERMISSION`.
+
+Except for the case above, permissions are not automatically available to a specific role unless they are granted by the `GRANT` command.
+Therefore, if you did not provide more permissions than in the <<Grant_permissions_on_created_keyspace_to_the_role,granting section>>, there is no need to revoke.
+
+== Cassandra table level configuration
+
+While _Distributed James_ is shipped with default table configuration options, these settings should be refined depending on your usage.
+
+These options are:
+
+* The https://cassandra.apache.org/doc/latest/operating/compaction.html[compaction algorithms]
+* The https://cassandra.apache.org/doc/latest/operating/bloom_filters.html[bloom filter sizing]
+* The https://cassandra.apache.org/doc/latest/operating/compression.html?highlight=chunk%20size[chunk size]
+* The https://www.datastax.com/blog/2011/04/maximizing-cache-benefit-cassandra[caching options]
+
+The compaction algorithms allow a tradeoff between background IO upon writes and reads.
+We recommend:
+
+* Using *Leveled Compaction Strategy* on read intensive tables subject to updates.
+This limits the count of SSTables being read at the cost of more background IO.
+High garbage collection can be caused by an inappropriate use of Leveled Compaction Strategy.
+* Otherwise use the default *Size Tiered Compaction Strategy*.
+
+Bloom filters help avoid unnecessary reads on SSTables.
+This probabilistic data structure can assert the absence of an entry in an SSTable, as well as the presence of an entry with an associated probability.
+If a lot of false positives are noticed, the size of the bloom filters can be increased.
+
+As explained in https://thelastpickle.com/blog/2018/08/08/compression_performance.html[this post], the chunk size used upon compression allows a tradeoff between reads and writes.
+A smaller size means weaker compression, thus more data stored on disk, but allows smaller chunks to be read to access data, favoring reads.
+A bigger size means better compression, thus less data written, but it might imply reading bigger chunks.
+
+Cassandra enables a key cache and a row cache.
+The key cache enables skipping the partition index upon reads, thus performing 1 disk read instead of 2.
+Enabling this cache is globally advised.
+The row cache stores the entire row in memory.
+It can be seen as an optimization, but it might actually use memory that is no longer available, for instance, for the file system cache.
+We recommend turning it off on modern SSD hardware.
+
+A review of your usage can be conducted using the https://cassandra.apache.org/doc/latest/tools/nodetool/nodetool.html[nodetool] utility.
+For example, `+nodetool tablestats {keyspace}+` allows reviewing the number of SSTables, the read/write ratios and bloom filter efficiency.
+`+nodetool tablehistograms {keyspace}.{table}+` might give insight about read/write performance.
+
+Table level options can be changed using *ALTER TABLE*, for example with the https://cassandra.apache.org/doc/latest/tools/cqlsh.html[cqlsh] utility.
+A full compaction might be needed in order for the changes to be taken into account.
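+
+For instance, switching a read-intensive table to Leveled Compaction Strategy with a smaller chunk size could look like this sketch (the keyspace and table names are examples to adapt to your deployment):
+
+----
+ALTER TABLE james_keyspace.messagev2
+    WITH compaction = {'class': 'LeveledCompactionStrategy'}
+    AND compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};
+----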
+
+== Mail Queue
+
+An email queue is a mandatory component of SMTP servers.
+It is a system that creates a queue of emails that are waiting to be processed for delivery.
+Email queuing is a form of Message Queuing -- an asynchronous service-to-service communication.
+A message queue is meant to decouple a producing process from a consuming one.
+An email queue decouples email reception from email processing.
+It allows them to communicate without being connected.
+As such, the queued emails wait for processing until the recipient is available to receive them.
+As an email server, James supports a mail queue as well.
+
+=== Why Mail Queue is necessary
+
+You might often need to check the mail queue to make sure all emails are delivered properly.
+First, you need to know why email queues get clogged.
+Here are the two core reasons:
+
+* Exceeded volume of emails
+
+Some mailbox providers enforce email rate limits on IP addresses.
+The limits are based on the sender reputation.
+If you exceeded this rate and queued too many emails, the delivery speed will decrease.
+
+* Spam-related issues
+
+Another common reason is that your email has been caught by spam filters.
+The filters will let the emails gradually pass to analyze how the rest of the recipients react to the message.
+If there is slow progress, it's okay.
+Your email campaign is being observed and assessed.
+If it's stuck, there could be different reasons including the blockage of your IP address.
+
+=== Why combining Cassandra, RabbitMQ and Object storage for MailQueue
+
+* RabbitMQ ensures the messaging function, and avoids polling.
+* Cassandra enables administrative operations such as browsing and deleting, using a time series which might require fine performance tuning (see the http://cassandra.apache.org/doc/latest/operating/index.html[Operating Cassandra documentation]).
+* Object Storage stores potentially large binary payload.
+
+However, the current design does not implement delays.
+Delays define the time a mail has to live in the mail queue before being dequeued, and are used for example for exponential backoff delays upon remote delivery retries, or SMTP traffic rate limiting.
+
+=== Fine tune configuration for RabbitMQ
+
+In order to adapt mail queue settings to the actual traffic load, an administrator needs to perform fine configuration tuning as explained in https://github.com/apache/james-project/blob/master/src/site/xdoc/server/config-rabbitmq.xml[rabbitmq.properties].
+
+Be aware that `MailQueue::getSize` currently performs a browse and is thus expensive.
+Recurring size metric reporting therefore introduces performance issues.
+As such, we advise setting `mailqueue.size.metricsEnabled=false`.
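For reference, the corresponding setting in `rabbitmq.properties` would look like this (a minimal fragment showing only the property named above):

```properties
# Disable recurring size metric reporting, as MailQueue::getSize performs an expensive browse
mailqueue.size.metricsEnabled=false
```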
+
+=== Managing email queues
+
+Managing an email queue is an easy task if you follow this procedure:
+
+* First, link:manage-webadmin.html#Listing_mail_queues[List mail queues] and link:manage-webadmin.html#Getting_a_mail_queue_details[get a mail queue details].
+* And then link:manage-webadmin.html#Listing_the_mails_of_a_mail_queue[List the mails of a mail queue].
+* If all mails in the mail queue need to be delivered, link:manage-webadmin.html#Flushing_mails_from_a_mail_queue[flush mails from a mail queue].
+
+In case you need to clear an email queue because it only contains spam or trash emails, follow this procedure:
+
+* All mails from the given mail queue will be deleted with link:manage-webadmin.html#Clearing_a_mail_queue[Clearing a mail queue].
+
+== Updating Cassandra schema version
+
+A schema version indicates which schema your James server is relying on.
+The schema version number tracks if a migration is required.
+For instance, when the latest schema version is 2, and the current schema version is 1, you might think that you still have data in the deprecated Message table in the database.
+Hence, you need to migrate these messages into the MessageV2 table.
+Once done, you can safely bump the current schema version to 2.
+
+Relying on an outdated schema version prevents you from benefiting from the newest performance and safety improvements.
+Note that there is something unusual in the way we manage the Cassandra schema: new tables are created without asking the admin about it.
+That means your James version always uses the latest tables, but may also take the old ones into account if the migration is not done yet.
+
+=== How to detect when we should update Cassandra schema version
+
+When you see in James logs `org.apache.james.modules.mailbox.CassandraSchemaVersionStartUpCheck` showing a warning like `Recommended version is versionX`, you should perform an update of the Cassandra schema version.
+
+Also, we keep track of changes needed when upgrading to a newer version.
+You can read these https://github.com/apache/james-project/blob/master/upgrade-instructions.md[upgrade instructions].
+
+=== How to update Cassandra schema version
+
+These schema updates can be triggered by webadmin using the Cassandra backend.
+The following steps update the Cassandra schema version:
+
+* First, link:manage-webadmin.html#Retrieving_current_Cassandra_schema_version[retrieve the current Cassandra schema version].
+* Then, link:manage-webadmin.html#Retrieving_latest_available_Cassandra_schema_version[retrieve the latest available Cassandra schema version] to make sure a newer version is available.
+* Finally, update the current schema version to that one by link:manage-webadmin.html#Upgrading_to_the_latest_version[upgrading to the latest version].
+
+If you need to run the migrations to a specific version instead, you can use link:manage-webadmin.html#Upgrading_to_a_specific_version[Upgrading to a specific version].
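The decision implied by these steps can be sketched as follows (illustrative only, not James code; the two version numbers come from the GET endpoints above):

```python
# Illustrative sketch: which schema versions remain to be migrated through,
# given the current version and the latest available one.
def migrations_to_run(current_version, latest_version):
    """Ordered list of schema versions to upgrade to; empty when up to date."""
    return list(range(current_version + 1, latest_version + 1))

print(migrations_to_run(1, 2))  # one migration, to version 2
print(migrations_to_run(2, 2))  # up to date, nothing to run
```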
+
+== Deleted Messages Vault
+
+The Deleted Messages Vault is a feature that allows James users to:
+
+* retain users' deleted messages for some time.
+* restore & export deleted messages by various criteria.
+* permanently delete some retained messages.
+
+If the Deleted Messages Vault is enabled, then when users delete their mails definitively by emptying the trash, James retains these mails in the Deleted Messages Vault before the email or mailbox is deleted.
+Only administrators can interact with this component, via link:manage-webadmin.html#deleted-messages-vault[WebAdmin REST APIs].
+
+However, mails are not retained forever: you have to configure a retention period before using it (one-year retention by default if not defined).
+It is also possible to permanently delete a mail if needed, and we recommend that the administrator <<Cleaning_expired_deleted_messages,run the cleanup>> as a cron job to save storage volume.
+
+=== How to configure deleted messages vault
+
+To set up James with the Deleted Messages Vault, you need to follow these steps:
+
+* Enable the Deleted Messages Vault by configuring Pre Deletion Hooks.
+* Configure the retention time for the Deleted Messages Vault.
+
+==== Enable Deleted Messages Vault by configuring Pre Deletion Hooks
+
+You need to configure this hook in https://github.com/apache/james-project/blob/master/dockerfiles/run/guice/cassandra-rabbitmq/destination/conf/listeners.xml[listeners.xml] configuration file.
+More details about the configuration and an example can be found at http://james.apache.org/server/config-listeners.html[Pre Deletion Hook Configuration].
+
+==== Configuring the retention time for the Deleted Messages Vault
+
+In order to configure the retention time for the Deleted Messages Vault, an administrator needs to perform fine configuration tuning as explained in https://github.com/apache/james-project/blob/master/dockerfiles/run/guice/cassandra/destination/conf/deletedMessageVault.properties[deletedMessageVault.properties].
+The retention period is set with the `retentionPeriod` property (one-year retention by default if not defined).
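The retention rule can be illustrated with a short sketch (not James code; the one-year default mirrors the default retention described above):

```python
from datetime import datetime, timedelta

# Illustrative sketch: is a vault entry older than the retention period?
def is_expired(deletion_date, now, retention=timedelta(days=365)):
    return now - deletion_date > retention

now = datetime(2020, 7, 1)
print(is_expired(datetime(2019, 6, 1), now))  # deleted over a year ago: purgeable
print(is_expired(datetime(2020, 1, 1), now))  # still within retention
```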
+
+=== Restore deleted messages after deletion
+
+After users deleted their mails and emptied the trash, the admin can use link:manage-webadmin.html#deleted-messages-vault[Restore Deleted Messages] to restore all the deleted mails.
+
+=== Cleaning expired deleted messages
+
+You can delete all deleted messages older than the configured `retentionPeriod` by using link:manage-webadmin.html#deleted-messages-vault[Purge Deleted Messages].
+We recommend calling this API as a cron job on the first day of each month.
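Such a cron entry could look like the following sketch (the endpoint path and `scope` parameter are assumptions to be checked against the WebAdmin documentation linked above, and `http://ip:port` is a placeholder):

```
# crontab sketch: purge expired deleted messages at 02:00 on the 1st of each month
0 2 1 * * curl -XDELETE 'http://ip:port/deletedMessages?scope=expired'
```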
diff --git a/docs/modules/migrated/pages/server/manage-webadmin.md.adoc b/docs/modules/migrated/pages/server/manage-webadmin.md.adoc
new file mode 100644
index 0000000..3cb66787
--- /dev/null
+++ b/docs/modules/migrated/pages/server/manage-webadmin.md.adoc
@@ -0,0 +1,3945 @@
+= Web administration for JAMES
+
+The web administration currently supports CRUD operations on domains, users, their mailboxes and their quotas, managing mail repositories, performing Cassandra migrations, and much more, as described in the following sections.
+
+*WARNING*: This API allows authentication only via the use of JWT.
+If not configured with JWT, an administrator should ensure an attacker can not use this API.
+
+Note that some endpoints are not filtered by authentication.
+Those endpoints are not related to data stored in James, for example: Swagger documentation & James health checks.
+
+Please also note *webadmin* is only enabled with *Guice*.
+You can not use it when using James with *Spring*, as the required injections are not implemented.
+
+In case of any error, the system will return a JSON-formatted error message like this:
+
+----
+{
+    statusCode: <error_code>,
+    type: <error_type>,
+    message: <the_error_message>
+    cause: <the_detail_message_from_throwable>
+}
+----
+
+Also be aware that, in case things go wrong, all endpoints might return a 500 internal error (with a JSON body formatted as exposed above).
+To avoid information duplication, this is omitted in the endpoint-specific documentation.
+
+Finally, please note that in case of a malformed URL the 400 bad request response will contain an HTML body.
+
+== Navigation menu
+
+* <<HealthCheck,HealthCheck>>
+* <<Administrating_domains,Administrating domains>>
+* <<Administrating_users,Administrating users>>
+* <<Administrating_mailboxes,Administrating mailboxes>>
+* <<Administrating_messages,Administrating messages>>
+* <<Administrating_user_mailboxes,Administrating user mailboxes>>
+* <<Administrating_quotas_by_users,Administrating quotas by users>>
+* <<Administrating_quotas_by_domains,Administrating quotas by domains>>
+* <<Administrating_global_quotas,Administrating global quotas>>
+* <<Cassandra_Schema_upgrades,Cassandra Schema upgrades>>
+* <<Correcting_ghost_mailbox,Correcting ghost mailbox>>
+* <<Creating_address_aliases,Creating address aliases>>
+* <<Creating_domain_mappings,Creating domain mappings>>
+* <<Creating_address_forwards,Creating address forwards>>
+* <<Creating_address_group,Creating address group>>
+* <<Creating_regex_mapping,Creating regex mapping>>
+* <<Address_Mappings,Address Mappings>>
+* <<User_Mappings,User Mappings>>
+* <<Administrating_mail_repositories,Administrating mail repositories>>
+* <<Administrating_mail_queues,Administrating mail queues>>
+* <<Administrating_DLP_Configuration,Administrating DLP Configuration>>
+* <<Administrating_Sieve_quotas,Administrating Sieve quotas>>
+* <<Deleted_Messages_Vault,Deleted Messages Vault>>
+* <<Task_management,Task management>>
+* <<Cassandra_extra_operations,Cassandra extra operations>>
+* <<Event_Dead_Letter,Event Dead Letter>>
+
+== HealthCheck
+
+* <<Check_all_components,Check all components>>
+* <<Check_single_component,Check single component>>
+* <<List_all_health_checks,List all health checks>>
+
+=== Check all components
+
+This endpoint is simple for now and just returns the HTTP status code corresponding to the state of the checks (see below).
+The user has to check the logs in order to have more information about failing checks.
+
+----
+curl -XGET http://ip:port/healthcheck
+----
+
+Will return a list of healthChecks execution result, with an aggregated result:
+
+----
+{
+  "status": "healthy",
+  "checks": [
+    {
+      "componentName": "Cassandra backend",
+      "escapedComponentName": "Cassandra%20backend",
+      "status": "healthy",
+      "cause": null
+    }
+  ]
+}
+----
+
+*status* field can be:
+
+* *healthy*: Component works normally
+* *degraded*: Component works in degraded mode.
+Some non-critical services may not be working, or latencies are high, for example.
+Cause contains explanations.
+* *unhealthy*: The component is currently not working.
+Cause contains explanations.
+
+Supported health checks include:
+
+* *Cassandra backend*: Cassandra storage.
+Included in Cassandra Guice based products.
+* *ElasticSearch Backend*: ElasticSearch storage.
+Included in Cassandra Guice based products.
+* *EventDeadLettersHealthCheck*: Included in all Guice products.
+* *Guice application lifecycle*: included in all Guice products.
+* *JPA Backend*: JPA storage.
+Included in JPA Guice based products.
+* *MessageFastViewProjection*: included in memory and Cassandra based Guice products.
+Health check of the component storing JMAP properties which are fast to retrieve.
+Those properties are computed in advance from messages and persisted in order to achieve better performance.
+There are some latencies between a source update and its projections updates.
+Incoherency problems arise when reads are performed in this time-window.
+We piggyback the projection update on missed JMAP reads in order to decrease the outdated time window for a given entry.
+The health is determined by the ratio of missed projection reads.
+(lower than 10% causes `degraded`)
+* *RabbitMQ backend*: RabbitMQ messaging.
+Included in Distributed Guice based products.
+
+Response codes:
+
+* 200: All checks have answered with a Healthy or Degraded status.
+James services can still be used.
+* 503: At least one check has answered with an Unhealthy status
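The aggregation rule can be sketched as follows (illustrative only, not James code; the aggregated label shown for the degraded case is an assumption):

```python
# Illustrative sketch: aggregate per-component statuses into the overall
# healthcheck status and the HTTP response code described above.
def aggregate(statuses):
    if "unhealthy" in statuses:
        return "unhealthy", 503   # at least one Unhealthy check
    if "degraded" in statuses:
        return "degraded", 200    # degraded checks still answer 200
    return "healthy", 200

print(aggregate(["healthy", "degraded"]))
print(aggregate(["healthy", "unhealthy"]))
```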
+
+=== Check single component
+
+Performs a health check for the given component.
+The component is referenced by its URL encoded name.
+
+----
+curl -XGET http://ip:port/healthcheck/checks/Cassandra%20backend
+----
+
+Will return the component's name, the component's escaped name, the health status and a cause.
+
+----
+{
+  "componentName": "Cassandra backend",
+  "escapedComponentName": "Cassandra%20backend",
+  "status": "healthy",
+  "cause": null
+}
+----
+
+Response codes:
+
+* 200: The check has answered with a Healthy or Degraded status.
+* 404: A component with the given name was not found.
+* 503: The check has answered with an Unhealthy status.
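The escaped component name is plain percent-encoding of the component name; for instance, from Python's standard library:

```python
from urllib.parse import quote

# "Cassandra backend" as it must appear in the healthcheck URL
print(quote("Cassandra backend"))  # Cassandra%20backend
```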
+
+=== List all health checks
+
+This endpoint lists all the available health checks.
+
+----
+curl -XGET http://ip:port/healthcheck/checks
+----
+
+Will return the list of all available health checks.
+
+----
+[
+    {
+        "componentName": "Cassandra backend",
+        "escapedComponentName": "Cassandra%20backend"
+    }
+]
+----
+
+Response codes:
+
+* 200: List of available health checks
+
+== Administrating domains
+
+* <<Create_a_domain,Create a domain>>
+* <<Delete_a_domain,Delete a domain>>
+* <<Test_if_a_domain_exists,Test if a domain exists>>
+* <<Get_the_list_of_domains,Get the list of domains>>
+* <<Get_the_list_of_aliases_for_a_domain,Get the list of aliases for a domain>>
+* <<Create_an_alias_for_a_domain,Create an alias for a domain>>
+* <<Delete_an_alias_for_a_domain,Delete an alias for a domain>>
+
+=== Create a domain
+
+----
+curl -XPUT http://ip:port/domains/domainToBeCreated
+----
+
+Resource name domainToBeCreated:
+
+* can not be null or empty
+* can not contain '@'
+* can not be more than 255 characters
+* can not contain '/'
+
+Response codes:
+
+* 204: The domain was successfully added
+* 400: The domain name is invalid
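The four rules above can be summed up in a short validation sketch (illustrative only, not James code):

```python
# Illustrative sketch of the constraints on the domainToBeCreated resource name
def is_valid_domain_name(name):
    return (bool(name)                # can not be null or empty
            and '@' not in name
            and '/' not in name
            and len(name) <= 255)

print(is_valid_domain_name("example.com"))       # True
print(is_valid_domain_name("user@example.com"))  # False
```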
+
+=== Delete a domain
+
+----
+curl -XDELETE http://ip:port/domains/{domainToBeDeleted}
+----
+
+NOTE: Deletion of an auto-detected domain, the default domain or an auto-detected IP is not supported.
+We encourage you instead to review your https://james.apache.org/server/config-domainlist.html[domain list configuration].
+
+Response codes:
+
+* 204: The domain was successfully removed
+
+=== Test if a domain exists
+
+----
+curl -XGET http://ip:port/domains/{domainName}
+----
+
+Response codes:
+
+* 204: The domain exists
+* 404: The domain does not exist
+
+=== Get the list of domains
+
+----
+curl -XGET http://ip:port/domains
+----
+
+Possible response:
+
+----
+["domain1", "domain2"]
+----
+
+Response codes:
+
+* 200: The domain list was successfully retrieved
+
+=== Get the list of aliases for a domain
+
+----
+curl -XGET http://ip:port/domains/destination.domain.tld/aliases
+----
+
+Possible response:
+
+----
+[
+  {"source": "source1.domain.tld"},
+  {"source": "source2.domain.tld"}
+]
+----
+
+When sending an email to an email address having `source1.domain.tld` or `source2.domain.tld` as a domain part (example: `user@source1.domain.tld`), then the domain part will be rewritten into `destination.domain.tld` (so into `user@destination.domain.tld`).
+
+Response codes:
+
+* 200: The domain aliases were successfully retrieved
+* 400: destination.domain.tld has an invalid syntax
+* 404: destination.domain.tld is not part of handled domains and does not have local domains as aliases.
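The rewriting described above can be sketched as follows (illustrative only, not James code):

```python
# Illustrative sketch: rewrite the domain part of an address through the
# source -> destination domain alias mapping listed by the endpoint above.
def rewrite_domain(address, aliases):
    local_part, _, domain = address.rpartition('@')
    return local_part + '@' + aliases.get(domain, domain)

aliases = {"source1.domain.tld": "destination.domain.tld",
           "source2.domain.tld": "destination.domain.tld"}
print(rewrite_domain("user@source1.domain.tld", aliases))
```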
+
+=== Create an alias for a domain
+
+To create a domain alias execute the following query:
+
+----
+curl -XPUT http://ip:port/domains/destination.domain.tld/aliases/source.domain.tld
+----
+
+When sending an email to an email address having `source.domain.tld` as a domain part (example: `user@source.domain.tld`), then the domain part will be rewritten into `destination.domain.tld` (so into `user@destination.domain.tld`).
+
+Response codes:
+
+* 204: The redirection now exists
+* 400: `source.domain.tld` or `destination.domain.tld` have an invalid syntax
+* 400: the source domain and destination domain are the same
+* 404: `source.domain.tld` is not part of handled domains.
+
+=== Delete an alias for a domain
+
+To delete a domain alias execute the following query:
+
+----
+curl -XDELETE http://ip:port/domains/destination.domain.tld/aliases/source.domain.tld
+----
+
+When sending an email to an email address having `source.domain.tld` as a domain part (example: `user@source.domain.tld`), then the domain part will be rewritten into `destination.domain.tld` (so into `user@destination.domain.tld`).
+
+Response codes:
+
+* 204: The redirection now no longer exists
+* 400: `source.domain.tld` or `destination.domain.tld` have an invalid syntax
+* 400: the source domain and destination domain are the same
+* 404: `source.domain.tld` is not part of handled domains.
+
+== Administrating users
+
+* <<Create_a_user,Create a user>>
+* <<Testing_a_user_existence,Testing a user existence>>
+* <<Updating_a_user_password,Updating a user password>>
+* <<Deleting_a_user,Deleting a user>>
+* <<Retrieving_the_user_list,Retrieving the user list>>
+* <<Retrieving_the_list_of_allowed_From_headers_for_a_given_user,Retrieving the list of allowed `From` headers for a given user>>
+
+=== Create a user
+
+----
+curl -XPUT http://ip:port/users/usernameToBeUsed \
+  -d '{"password":"passwordToBeUsed"}' \
+  -H "Content-Type: application/json"
+----
+
+Resource name usernameToBeUsed represents a valid user, hence it should match the criteria at link:/server/config-users.html[User Repositories documentation].
+
+Response codes:
+
+* 204: The user was successfully created
+* 400: The user name or the payload is invalid
+
+NOTE: if the user exists already, its password will be updated.
+
+=== Testing a user existence
+
+----
+curl -XHEAD http://ip:port/users/usernameToBeUsed
+----
+
+Resource name "usernameToBeUsed" represents a valid user, hence it should match the criteria at link:/server/config-users.html[User Repositories documentation]
+
+Response codes:
+
+* 200: The user exists
+* 400: The user name is invalid
+* 404: The user does not exist
+
+=== Updating a user password
+
+Same as Create, but the user needs to exist.
+
+If the user does not exist, it will be created.
+
+=== Deleting a user
+
+----
+curl -XDELETE http://ip:port/users/{userToBeDeleted}
+----
+
+Response codes:
+
+* 204: The user was successfully deleted
+
+=== Retrieving the user list
+
+----
+curl -XGET http://ip:port/users
+----
+
+The answer looks like:
+
+----
+[{"username":"username@domain-jmapauthentication.tld"},{"username":"username@domain.tld"}]
+----
+
+Response codes:
+
+* 200: The user name list was successfully retrieved
+
+=== Retrieving the list of allowed `From` headers for a given user
+
+----
+curl -XGET http://ip:port/users/givenUser/allowedFromHeaders
+----
+
+The answer looks like:
+
+----
+["user@domain.tld","alias@domain.tld"]
+----
+
+Response codes:
+
+* 200: The list was successfully retrieved
+* 400: The user is invalid
+* 404: The user is unknown
+
+== Administrating mailboxes
+
+=== All mailboxes
+
+Several actions can be performed on the server mailboxes.
+
+Request pattern is:
+
+----
+curl -XPOST /mailboxes?action={action1},...
+----
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: Success.
+Corresponding task id is returned.
+* 400: Error in the request.
+Details can be found in the reported error.
+
+The kind of task scheduled depends on the action parameter.
+See below for details.
+
+==== Fixing mailboxes inconsistencies
+
+This task is only available on top of Guice Cassandra products.
+
+----
+curl -XPOST /mailboxes?task=SolveInconsistencies
+----
+
+Will schedule a task for fixing inconsistencies for the mailbox deduplicated object stored in Cassandra.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+The `I-KNOW-WHAT-I-M-DOING` header is mandatory (you can read more information about it in the warning section below).
+
+The scheduled task will have the following type `solve-mailbox-inconsistencies` and the following `additionalInformation`:
+
+----
+{
+  "type":"solve-mailbox-inconsistencies",
+  "processedMailboxEntries": 3,
+  "processedMailboxPathEntries": 3,
+  "fixedInconsistencies": 2,
+  "errors": 1,
+  "conflictingEntries":[{
+    "mailboxDaoEntry":{
+      "mailboxPath":"#private:user:mailboxName",
+      "mailboxId":"464765a0-e4e7-11e4-aba4-710c1de3782b"
+    },
+    "mailboxPathDaoEntry":{
+      "mailboxPath":"#private:user:mailboxName2",
+      "mailboxId":"464765a0-e4e7-11e4-aba4-710c1de3782b"
+    }
+  }]
+}
+----
+
+Note that conflicting entry inconsistencies will not be fixed; they require explicitly using the <<correcting-ghost-mailbox,ghost mailbox>> endpoint in order to merge the conflicting mailboxes and prevent any message loss.
+
+*WARNING*: this task can cancel concurrently running legitimate user operations upon dirty read.
+As such, this task should be run offline.
+
+A dirty read is when data is read between the two writes of the denormalization operations (no isolation).
+
+In order to ensure being offline, stop the traffic on SMTP, JMAP and IMAP ports, for example via re-configuration or  firewall rules.
+
+Due to all of those risks, an `I-KNOW-WHAT-I-M-DOING` header should be set to `ALL-SERVICES-ARE-OFFLINE` in order to prevent accidental calls.
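For instance, the call could be prepared like this (a sketch using Python's standard library; the request is only constructed here, not sent, and `http://ip:port` is a placeholder):

```python
import urllib.request

# Sketch: the SolveInconsistencies call with the mandatory safety header.
req = urllib.request.Request(
    "http://ip:port/mailboxes?task=SolveInconsistencies",
    method="POST",
    headers={"I-KNOW-WHAT-I-M-DOING": "ALL-SERVICES-ARE-OFFLINE"},
)
print(req.get_method(), req.full_url)
```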
+
+==== Recomputing mailbox counters
+
+This task is only available on top of Guice Cassandra products.
+
+----
+curl -XPOST /mailboxes?task=RecomputeMailboxCounters
+----
+
+Will recompute counters (unseen & total count) for the mailbox object stored in Cassandra.
+
+Cassandra maintains a per mailbox projection for message count and unseen message count.
+As with any projection, it can  go out of sync, leading to inconsistent results being returned to the client.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+The scheduled task will have the following type `recompute-mailbox-counters` and the following `additionalInformation`:
+
+----
+{
+  "type":"recompute-mailbox-counters",
+  "processedMailboxes": 3,
+  "failedMailboxes": ["464765a0-e4e7-11e4-aba4-710c1de3782b"]
+}
+----
+
+Note that conflicting inconsistency entries will not be fixed; they require explicitly using the <<correcting-ghost-mailbox,ghost mailbox>> endpoint in order to merge the conflicting mailboxes and prevent any message loss.
+
+*WARNING*: this task does not take into account concurrent modifications upon a single mailbox counter recomputation.
+Rerunning the task will _eventually_ provide the consistent result.
+As such, we advise running this task offline.
+
+In order to ensure being offline, stop the traffic on SMTP, JMAP and IMAP ports, for example via re-configuration or  firewall rules.
+
+The `trustMessageProjection` query parameter can be set to `true`.
+The content of the `messageIdTable` table (listing messages by their mailbox context) will then be trusted and not compared against the content of the `imapUidTable` table (listing messages by their mailbox-independent messageId identifier).
+This results in better performance when running the task, at the cost of safety in the face of message denormalization inconsistencies.
+
+Defaults to false, which generates  additional checks.
+You can read  https://github.com/apache/james-project/blob/master/src/adr/0022-cassandra-message-inconsistency.md[this ADR] to  better understand the message projection and how it can become inconsistent.
+
+==== Recomputing Global JMAP fast message view projection
+
+This action is only available for backends supporting JMAP protocol.
+
+The message fast view projection stores message properties that are expected to be fast to fetch but are actually expensive to compute, so that the GetMessages operation is fast to execute for these properties.
+
+These projection items are asynchronously computed on mailbox events.
+
+You can force the full projection recomputation by calling the following endpoint:
+
+----
+curl -XPOST /mailboxes?task=recomputeFastViewProjectionItems
+----
+
+Will schedule a task for recomputing the fast message view projection for all mailboxes.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+An admin can specify the concurrency that should be used when running the task:
+
+* `messagesPerSecond` rate at which messages should be processed, per second.
+Defaults to 10.
+
+This optional parameter must have a strictly positive integer as a value and be passed as a query parameter.
+
+Example:
+
+----
+curl -XPOST '/mailboxes?task=recomputeFastViewProjectionItems&messagesPerSecond=20'
+----
+
+The scheduled task will have the following type `RecomputeAllFastViewProjectionItemsTask` and the following `additionalInformation`:
+
+----
+{
+  "type":"RecomputeAllFastViewProjectionItemsTask",
+  "processedUserCount": 3,
+  "processedMessageCount": 3,
+  "failedUserCount": 2,
+  "failedMessageCount": 1,
+  "runningOptions": {
+    "messagesPerSecond":20
+  }
+}
+----
+
+Response codes:
+
+* 201: Success.
+Corresponding task id is returned.
+* 400: Error in the request.
+Details can be found in the reported error.
+
+==== ReIndexing action
+
+These tasks are only available on top of Guice Cassandra products or Guice JPA products.
+They are not part of Memory Guice product.
+
+Also be aware of the limits of this API:
+
+WARNING: During the re-indexing, the result of search operations might be altered.
+
+WARNING: Canceling this task should be considered unsafe as it will leave the currently reIndexed mailbox as partially indexed.
+
+WARNING: While we have been trying to reduce the inconsistency window to a maximum (by keeping track of ongoing events), concurrent changes done during the reIndexing might be ignored.
+
+The following actions can be performed:
+
+* <<ReIndexing_all_mails,ReIndexing all mails>>
+* <<Fixing_previously_failed_ReIndexing,Fixing previously failed ReIndexing>>
+
+===== ReIndexing all mails
+
+----
+curl -XPOST http://ip:port/mailboxes?task=reIndex
+----
+
+Will schedule a task for reIndexing all the mails stored on this James server.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+An admin can specify the concurrency that should be used when running the task:
+
+* `messagesPerSecond` rate at which messages should be processed per second.
+Default is 50.
+
+This optional parameter must have a strictly positive integer as a value and be passed as query parameter.
+
+An admin can also specify the reindexing mode it wants to use when running the task:
+
+* `mode` the reindexing mode used.
+There are 2 modes for the moment:
+ ** `rebuildAll` allows to rebuild all indexes.
+This is the default mode.
+ ** `fixOutdated` will check for outdated indexed document and reindex only those.
+
+This optional parameter must be passed as query parameter.
+
+Note as well that there is a limitation with the `fixOutdated` mode.
+As we first collect metadata of stored messages to compare them with the ones in the index, a failed `expunged` operation might not be corrected properly (as the message might not exist anymore but still be indexed).
+
+Example:
+
+`+curl -XPOST http://ip:port/mailboxes?task=reIndex&messagesPerSecond=200&mode=rebuildAll+`
+
+The scheduled task will have the following type `full-reindexing` and the following `additionalInformation`:
+
+----
+{
+  "type":"full-reindexing",
+  "runningOptions":{
+    "messagesPerSecond":200,
+    "mode":"REBUILD_ALL"
+  },
+  "successfullyReprocessedMailCount":18,
+  "failedReprocessedMailCount": 3,
+  "mailboxFailures": ["12", "23" ],
+  "messageFailures": [
+   {
+     "mailboxId": "1",
+      "uids": [1, 36]
+   }]
+}
+----
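When scripting such calls, the query string used in the examples of this section can also be built programmatically, which avoids shell quoting issues with `&` (a sketch, not part of James):

```python
from urllib.parse import urlencode

# Build the reIndex query string from the documented parameters
params = {"task": "reIndex", "messagesPerSecond": 200, "mode": "rebuildAll"}
print(urlencode(params))  # task=reIndex&messagesPerSecond=200&mode=rebuildAll
```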
+
+===== Fixing previously failed ReIndexing
+
+Will schedule a task for reIndexing all the mails which had failed to be indexed from the ReIndexingAllMails task.
+
+Given `bbdb69c9-082a-44b0-a85a-6e33e74287a5` being a `taskId` generated for a reIndexing task
+
+----
+curl -XPOST 'http://ip:port/mailboxes?task=reIndex&reIndexFailedMessagesOf=bbdb69c9-082a-44b0-a85a-6e33e74287a5'
+----
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+An admin can specify the concurrency that should be used when running the task:
+
+* `messagesPerSecond` rate at which messages should be processed per second.
+Default is 50.
+
+This optional parameter must have a strictly positive integer as a value and be passed as query parameter.
+
+An admin can also specify the reindexing mode it wants to use when running the task:
+
+* `mode` the reindexing mode used.
+There are 2 modes for the moment:
+ ** `rebuildAll` allows to rebuild all indexes.
+This is the default mode.
+ ** `fixOutdated` will check for outdated indexed document and reindex only those.
+
+This optional parameter must be passed as query parameter.
+
+Note as well that there is a limitation with the `fixOutdated` mode.
+As we first collect metadata of stored messages to compare them with the ones in the index, a failed `expunged` operation might not be corrected properly (as the message might not exist anymore but still be indexed).
+
+Example:
+
+----
+curl -XPOST 'http://ip:port/mailboxes?task=reIndex&reIndexFailedMessagesOf=bbdb69c9-082a-44b0-a85a-6e33e74287a5&messagesPerSecond=200&mode=rebuildAll'
+----
+
+The scheduled task will have the following type `error-recovery-indexation` and the following `additionalInformation`:
+
+----
+{
+  "type":"error-recovery-indexation",
+  "runningOptions":{
+    "messagesPerSecond":200,
+    "mode":"REBUILD_ALL"
+  },
+  "successfullyReprocessedMailCount":18,
+  "failedReprocessedMailCount": 3,
+  "mailboxFailures": ["12", "23" ],
+  "messageFailures": [{
+     "mailboxId": "1",
+      "uids": [1, 36]
+   }]
+}
+----
+
+=== Single mailbox
+
+==== ReIndexing a mailbox mails
+
+This task is only available on top of Guice Cassandra products or Guice JPA products.
+It is not part of Memory Guice product.
+
+----
+curl -XPOST http://ip:port/mailboxes/{mailboxId}?task=reIndex
+----
+
+Will schedule a task for reIndexing all the mails in one mailbox.
+
+Note that the 'mailboxId' path parameter needs to be an (implementation-dependent) valid mailboxId.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+An admin can specify the concurrency that should be used when running the task:
+
+* `messagesPerSecond` rate at which messages should be processed per second.
+Default is 50.
+
+This optional parameter must have a strictly positive integer as a value and be passed as query parameter.
+
+An admin can also specify the reindexing mode it wants to use when running the task:
+
+* `mode` the reindexing mode used.
+There are 2 modes for the moment:
+ ** `rebuildAll` allows to rebuild all indexes.
+This is the default mode.
+ ** `fixOutdated` will check for outdated indexed document and reindex only those.
+
+This optional parameter must be passed as query parameter.
+
+Note as well that there is a limitation with the `fixOutdated` mode.
+As we first collect metadata of stored messages to compare them with the ones in the index, a failed `expunged` operation might not be corrected properly (as the message might not exist anymore but still be indexed).
+
+Example:
+
+----
+curl -XPOST 'http://ip:port/mailboxes/{mailboxId}?task=reIndex&messagesPerSecond=200&mode=fixOutdated'
+----
+
+Response codes:
+
+* 201: Success.
+Corresponding task id is returned.
+* 400: Error in the request.
+Details can be found in the reported error.
+
+The scheduled task will have the following type `mailbox-reindexing` and the following `additionalInformation`:
+
+----
+{
+  "type":"mailbox-reindexing",
+  "runningOptions":{
+    "messagesPerSecond":200,
+    "mode":"FIX_OUTDATED"
+  },
+  "mailboxId":"{mailboxId}",
+  "successfullyReprocessedMailCount":18,
+  "failedReprocessedMailCount": 3,
+  "mailboxFailures": ["12"],
+  "messageFailures": [
+   {
+     "mailboxId": "1",
+      "uids": [1, 36]
+   }]
+}
+----
+
+WARNING: During the re-indexing, the result of search operations might be altered.
+
+WARNING: Canceling this task should be considered unsafe as it will leave the currently reIndexed mailbox as partially indexed.
+
+WARNING: While we have been trying to reduce the inconsistency window to a maximum (by keeping track of ongoing events), concurrent changes done during the reIndexing might be ignored.
+
+==== ReIndexing a single mail
+
+This task is only available on top of Guice Cassandra products or Guice JPA products.
+It is not part of the Memory Guice product.
+
+----
+curl -XPOST http://ip:port/mailboxes/{mailboxId}/uid/{uid}?task=reIndex
+----
+
+Will schedule a task for reIndexing a single email.
+
+Note that the `mailboxId` path parameter needs to be a valid (implementation-dependent) mailboxId.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: Success.
+Corresponding task id is returned.
+* 400: Error in the request.
+Details can be found in the reported error.
+
+The scheduled task will have the following type `message-reindexing` and the following `additionalInformation`:
+
+----
+{
+  "mailboxId":"{mailboxId}",
+  "uid":18
+}
+----
+
+WARNING: During the re-indexing, the result of search operations might be altered.
+
+WARNING: Canceling this task should be considered unsafe as it will leave the currently reIndexed mailbox as partially indexed.
+
+== Administrating Messages
+
+=== ReIndexing a single mail by messageId
+
+This task is only available on top of Guice Cassandra products or Guice JPA products.
+It is not part of the Memory Guice product.
+
+----
+curl -XPOST http://ip:port/messages/{messageId}?task=reIndex
+----
+
+Will schedule a task for reIndexing a single email in all the mailboxes containing it.
+
+Note that the `messageId` path parameter needs to be a valid (implementation-dependent) messageId.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: Success.
+Corresponding task id is returned.
+* 400: Error in the request.
+Details can be found in the reported error.
+
+The scheduled task will have the following type `messageId-reindexing` and the following `additionalInformation`:
+
+----
+{
+  "messageId":"18"
+}
+----
+
+WARNING: During the re-indexing, the result of search operations might be altered.
+
+=== Fixing message inconsistencies
+
+This task is only available on top of Guice Cassandra products.
+
+----
+curl -XPOST /messages?task=SolveInconsistencies
+----
+
+Will schedule a task for fixing message inconsistencies created by the message denormalization process.
+
+Messages are denormalized and stored in separate data tables in Cassandra, so they can be accessed either by their unique identifier or by their mailbox identifier & local mailbox identifier, through different protocols.
+
+Failure in the denormalization process will lead to inconsistencies, for example:
+
+----
+BOB receives a message
+The denormalization process fails
+BOB can read the message via JMAP
+BOB cannot read the message via IMAP
+
+BOB marks a message as SEEN
+The denormalization process fails
+The message is SEEN via JMAP
+The message is UNSEEN via IMAP
+----
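
Conceptually, the task diffs the two denormalized views and repairs the differences. The sketch below illustrates that principle with `comm(1)` over two hypothetical id lists, one per view; it is an illustration only, not the actual implementation:

```shell
# Two hypothetical denormalized views of the same mailbox content.
# In a consistent store both sorted lists would be identical.
printf 'msg-1\nmsg-2\nmsg-3\n' > imap_uid_view.txt
printf 'msg-1\nmsg-3\nmsg-4\n' > message_id_view.txt

# comm -3 drops the lines common to both files, leaving only the
# entries present in a single view, i.e. the inconsistencies to fix.
inconsistencies=$(comm -3 imap_uid_view.txt message_id_view.txt)
echo "$inconsistencies"

rm imap_uid_view.txt message_id_view.txt
```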
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+An admin can specify the concurrency that should be used when running the task:
+
+* `messagesPerSecond`: the rate of messages to be processed per second.
+Defaults to 100.
+
+This optional parameter must have a strictly positive integer as a value and be passed as a query parameter.
+
+An admin can also specify the reindexing mode to use when running the task:
+
+* `mode`: the reindexing mode used.
+There are 2 modes for the moment:
+ ** `rebuildAll` rebuilds all indexes.
+This is the default mode.
+ ** `fixOutdated` checks for outdated indexed documents and reindexes only those.
+
+This optional parameter must be passed as a query parameter.
+
+Note that the `fixOutdated` mode has a limitation: as we first collect the metadata of stored messages to compare them with the ones in the index, a failed `expunged` operation might not be well corrected (the message might not exist anymore but still be indexed).
+
+Example:
+
+----
+curl -XPOST /messages?task=SolveInconsistencies&messagesPerSecond=200&mode=rebuildAll
+----
+
+Response codes:
+
+* 201: Success.
+Corresponding task id is returned.
+* 400: Error in the request.
+Details can be found in the reported error.
+
+The scheduled task will have the following type `solve-message-inconsistencies` and the following `additionalInformation`:
+
+----
+{
+  "type":"solve-message-inconsistencies",
+  "timestamp":"2007-12-03T10:15:30Z",
+  "processedImapUidEntries": 2,
+  "processedMessageIdEntries": 1,
+  "addedMessageIdEntries": 1,
+  "updatedMessageIdEntries": 0,
+  "removedMessageIdEntries": 1,
+  "runningOptions":{
+    "messagesPerSecond": 200,
+    "mode":"REBUILD_ALL"
+  },
+  "fixedInconsistencies": [
+    {
+      "mailboxId": "551f0580-82fb-11ea-970e-f9c83d4cf8c2",
+      "messageId": "d2bee791-7e63-11ea-883c-95b84008f979",
+      "uid": 1
+    },
+    {
+      "mailboxId": "551f0580-82fb-11ea-970e-f9c83d4cf8c2",
+      "messageId": "d2bee792-7e63-11ea-883c-95b84008f979",
+      "uid": 2
+    }
+  ],
+  "errors": [
+    {
+      "mailboxId": "551f0580-82fb-11ea-970e-f9c83d4cf8c2",
+      "messageId": "ffffffff-7e63-11ea-883c-95b84008f979",
+      "uid": 3
+    }
+  ]
+}
+----
+
+User actions concurrent with the inconsistency fixing task could result in concurrency issues.
+New inconsistencies could be created.
+
+However, the source of truth will not be impacted, hence rerunning the task will eventually fix all issues.
+
+This task can be run safely online and can be scheduled on a recurring basis outside of peak traffic by an admin to ensure Cassandra message consistency.
+
+== Administrating user mailboxes
+
+* <<Creating_a_mailbox,Creating a mailbox>>
+* <<Deleting_a_mailbox_and_its_children,Deleting a mailbox and its children>>
+* <<Testing_existence_of_a_mailbox,Testing existence of a mailbox>>
+* <<Listing_user_mailboxes,Listing user mailboxes>>
+* <<Deleting_user_mailboxes,Deleting user mailboxes>>
+* <<Exporting_user_mailboxes,Exporting user mailboxes>>
+* <<ReIndexing_a_user_mails,ReIndexing a user mails>>
+* <<Recomputing_User_JMAP_fast_message_view_projection,Recomputing User JMAP fast message view projection>>
+
+=== Creating a mailbox
+
+----
+curl -XPUT http://ip:port/users/{usernameToBeUsed}/mailboxes/{mailboxNameToBeCreated}
+----
+
+Resource name `usernameToBeUsed` should be an existing user.
+Resource name `mailboxNameToBeCreated` should not be empty, nor contain `#`, `&`, `%` or `*` characters.
+
+Response codes:
+
+* 204: The mailbox now exists on the server
+* 400: Invalid mailbox name
+* 404: The user name does not exist
+
+To create nested mailboxes, for instance a `work` mailbox inside the `INBOX` mailbox, use the `.` separator.
+The sample query is:
+
+----
+curl -XPUT http://ip:port/users/{usernameToBeUsed}/mailboxes/INBOX.work
+----
+
+=== Deleting a mailbox and its children
+
+----
+curl -XDELETE http://ip:port/users/{usernameToBeUsed}/mailboxes/{mailboxNameToBeDeleted}
+----
+
+Resource name `usernameToBeUsed` should be an existing user.
+Resource name `mailboxNameToBeDeleted` should not be empty.
+
+Response codes:
+
+* 204: The mailbox now does not exist on the server
+* 400: Invalid mailbox name
+* 404: The user name does not exist
+
+=== Testing existence of a mailbox
+
+----
+curl -XGET http://ip:port/users/{usernameToBeUsed}/mailboxes/{mailboxNameToBeTested}
+----
+
+Resource name `usernameToBeUsed` should be an existing user.
+Resource name `mailboxNameToBeTested` should not be empty.
+
+Response codes:
+
+* 204: The mailbox exists
+* 400: Invalid mailbox name
+* 404: The user name does not exist, the mailbox does not exist
+
+=== Listing user mailboxes
+
+----
+curl -XGET http://ip:port/users/{usernameToBeUsed}/mailboxes
+----
+
+The answer looks like:
+
+----
+[{"mailboxName":"INBOX"},{"mailboxName":"outbox"}]
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+Response codes:
+
+* 200: The mailboxes list was successfully retrieved
+* 404: The user name does not exist
+
+=== Deleting user mailboxes
+
+----
+curl -XDELETE http://ip:port/users/{usernameToBeUsed}/mailboxes
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+Response codes:
+
+* 204: The user does not have mailboxes anymore
+* 404: The user name does not exist
+
+=== Exporting user mailboxes
+
+----
+curl -XPOST http://ip:port/users/{usernameToBeUsed}/mailboxes?action=export
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+Response codes:
+
+* 201: Success.
+Corresponding task id is returned
+* 404: The user name does not exist
+
+The scheduled task will have the following type `MailboxesExportTask` and the following `additionalInformation`:
+
+----
+{
+  "type":"MailboxesExportTask",
+  "timestamp":"2007-12-03T10:15:30Z",
+  "username": "user",
+  "stage": "STARTING"
+}
+----
+
+=== ReIndexing a user mails
+
+----
+curl -XPOST http://ip:port/users/{usernameToBeUsed}/mailboxes?task=reIndex
+----
+
+Will schedule a task for reIndexing all the mails in the mailboxes of `usernameToBeUsed` (for instance `user@domain.com`, URL-encoded in the request path).
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+An admin can specify the concurrency that should be used when running the task:
+
+* `messagesPerSecond`: the rate at which messages should be processed, per second.
+Defaults to 50.
+
+This optional parameter must have a strictly positive integer as a value and be passed as a query parameter.
+
+An admin can also specify the reindexing mode to use when running the task:
+
+* `mode`: the reindexing mode used.
+There are 2 modes for the moment:
+ ** `rebuildAll` rebuilds all indexes.
+This is the default mode.
+ ** `fixOutdated` checks for outdated indexed documents and reindexes only those.
+
+This optional parameter must be passed as a query parameter.
+
+Note that the `fixOutdated` mode has a limitation: as we first collect the metadata of stored messages to compare them with the ones in the index, a failed `expunged` operation might not be well corrected (the message might not exist anymore but still be indexed).
+
+Example:
+
+----
+curl -XPOST http://ip:port/users/{usernameToBeUsed}/mailboxes?task=reIndex&messagesPerSecond=200&mode=fixOutdated
+----
+
+Response codes:
+
+* 201: Success.
+Corresponding task id is returned.
+* 400: Error in the request.
+Details can be found in the reported error.
+
+The scheduled task will have the following type `user-reindexing` and the following `additionalInformation`:
+
+----
+{
+  "type":"user-reindexing",
+  "runningOptions":{
+    "messagesPerSecond":200,
+    "mode":"FIX_OUTDATED"
+  },
+  "user":"user@domain.com",
+  "successfullyReprocessedMailCount":18,
+  "failedReprocessedMailCount": 3,
+  "mailboxFailures": ["12", "23" ],
+  "messageFailures": [
+   {
+     "mailboxId": "1",
+      "uids": [1, 36]
+   }]
+}
+----
+
+WARNING: During the re-indexing, the result of search operations might be altered.
+
+WARNING: Canceling this task should be considered unsafe as it will leave the currently reIndexed mailbox as partially indexed.
+
+WARNING: While we have been trying to reduce the inconsistency window to a maximum (by keeping track of ongoing events), concurrent changes done during the reIndexing might be ignored.
+
+=== Recomputing User JMAP fast message view projection
+
+This action is only available for backends supporting JMAP protocol.
+
+Message fast view projection stores message properties that are expected to be fast to fetch but are actually expensive to compute, so that the GetMessages operation is fast to execute for these properties.
+
+These projection items are asynchronously computed on mailbox events.
+
+You can force the full projection recomputation by calling the following endpoint:
+
+----
+curl -XPOST /users/{usernameToBeUsed}/mailboxes?task=recomputeFastViewProjectionItems
+----
+
+Will schedule a task for recomputing the fast message view projection for all mailboxes of `usernameToBeUsed`.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+An admin can specify the concurrency that should be used when running the task:
+
+* `messagesPerSecond`: the rate at which messages should be processed, per second.
+Defaults to 10.
+
+This optional parameter must have a strictly positive integer as a value and be passed as a query parameter.
+
+Example:
+
+----
+curl -XPOST /users/{usernameToBeUsed}/mailboxes?task=recomputeFastViewProjectionItems&messagesPerSecond=20
+----
+
+The scheduled task will have the following type `RecomputeUserFastViewProjectionItemsTask` and the following `additionalInformation`:
+
+----
+{
+  "type":"RecomputeUserFastViewProjectionItemsTask",
+  "username": "{usernameToBeUsed}",
+  "processedMessageCount": 3,
+  "failedMessageCount": 1,
+  "runningOptions": {
+    "messagesPerSecond":20
+  }
+}
+----
+
+Response codes:
+
+* 201: Success.
+Corresponding task id is returned.
+* 400: Error in the request.
+Details can be found in the reported error.
+* 404: User not found.
+
+== Administrating quotas by users
+
+* <<Getting_the_quota_for_a_user,Getting the quota for a user>>
+* <<Updating_the_quota_for_a_user,Updating the quota for a user>>
+* <<Getting_the_quota_count_for_a_user,Getting the quota count for a user>>
+* <<Updating_the_quota_count_for_a_user,Updating the quota count for a user>>
+* <<Deleting_the_quota_count_for_a_user,Deleting the quota count for a user>>
+* <<Getting_the_quota_size_for_a_user,Getting the quota size for a user>>
+* <<Updating_the_quota_size_for_a_user,Updating the quota size for a user>>
+* <<Deleting_the_quota_size_for_a_user,Deleting the quota size for a user>>
+* <<Searching_user_by_quota_ratio,Searching user by quota ratio>>
+* <<Recomputing_current_quotas_for_users,Recomputing current quotas for users>>
+
+=== Getting the quota for a user
+
+----
+curl -XGET http://ip:port/quota/users/{usernameToBeUsed}
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+The answer is the details of the quota of that user.
+
+----
+{
+  "global": {
+    "count":252,
+    "size":242
+  },
+  "domain": {
+    "count":152,
+    "size":142
+  },
+  "user": {
+    "count":52,
+    "size":42
+  },
+  "computed": {
+    "count":52,
+    "size":42
+  },
+  "occupation": {
+    "size":13,
+    "count":21,
+    "ratio": {
+      "size":0.25,
+      "count":0.5,
+      "max":0.5
+    }
+  }
+}
+----
+
+* The `global` entry represents the quota limit allowed on this James server.
+* The `domain` entry represents the quota limit allowed for the users of that domain.
+* The `user` entry represents the quota limit allowed for this specific user.
+* The `computed` entry represents the quota limit applied for this user, resolved from the upper values.
+* The `occupation` entry represents the occupation of the quota for this user.
+This includes used count and size, as well as occupation ratios (used / limit).
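
The way `computed` and the occupation ratios can be read off these fields is sketched below. This is an illustrative interpretation of the "resolved from the upper values" rule (most specific non-null limit wins, user > domain > global), not James code, and the sample values are hypothetical:

```shell
# resolve_limit: pick the most specific non-null limit,
# in order user > domain > global (an interpretation of the
# "resolved from the upper values" rule above).
resolve_limit() {
  for limit in "$@"; do
    if [ "$limit" != "null" ]; then
      echo "$limit"
      return
    fi
  done
  echo "null"
}

# ratio: occupation ratio = used / limit, printed with two decimals.
ratio() {
  awk -v used="$1" -v limit="$2" 'BEGIN { printf "%.2f\n", used / limit }'
}

resolve_limit null 152 252   # user limit is null, falls back to the domain limit
ratio 13 52                  # e.g. 13 items used out of a limit of 52
```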
+
+Note that a `quota` object can contain a fixed value, an empty value (null) or an unlimited value (-1):
+
+----
+{"count":52,"size":42}
+
+{"count":null,"size":null}
+
+{"count":52,"size":-1}
+----
+
+Response codes:
+
+* 200: The user's quota was successfully retrieved
+* 404: The user does not exist
+
+=== Updating the quota for a user
+
+----
+curl -XPUT http://ip:port/quota/users/{usernameToBeUsed}
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+The body can contain a fixed value, an empty value (null) or an unlimited value (-1):
+
+----
+{"count":52,"size":42}
+
+{"count":null,"size":null}
+
+{"count":52,"size":-1}
+----
+
+Response codes:
+
+* 204: The quota has been updated
+* 400: The body is neither a positive integer nor an unlimited value (-1).
+* 404: The user does not exist
+
+=== Getting the quota count for a user
+
+----
+curl -XGET http://ip:port/quota/users/{usernameToBeUsed}/count
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+The answer looks like:
+
+----
+52
+----
+
+Response codes:
+
+* 200: The user's quota was successfully retrieved
+* 204: No quota count limit is defined at the user level for this user
+* 404: The user does not exist
+
+=== Updating the quota count for a user
+
+----
+curl -XPUT http://ip:port/quota/users/{usernameToBeUsed}/count
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+The body can contain a fixed value or an unlimited value (-1):
+
+----
+52
+----
+
+Response codes:
+
+* 204: The quota has been updated
+* 400: The body is neither a positive integer nor an unlimited value (-1).
+* 404: The user does not exist
+
+=== Deleting the quota count for a user
+
+----
+curl -XDELETE http://ip:port/quota/users/{usernameToBeUsed}/count
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+Response codes:
+
+* 204: The quota has been updated to unlimited value.
+* 404: The user does not exist
+
+=== Getting the quota size for a user
+
+----
+curl -XGET http://ip:port/quota/users/{usernameToBeUsed}/size
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+The answer looks like:
+
+----
+52
+----
+
+Response codes:
+
+* 200: The user's quota was successfully retrieved
+* 204: No quota size limit is defined at the user level for this user
+* 404: The user does not exist
+
+=== Updating the quota size for a user
+
+----
+curl -XPUT http://ip:port/quota/users/{usernameToBeUsed}/size
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+The body can contain a fixed value or an unlimited value (-1):
+
+----
+52
+----
+
+Response codes:
+
+* 204: The quota has been updated
+* 400: The body is neither a positive integer nor an unlimited value (-1).
+* 404: The user does not exist
+
+=== Deleting the quota size for a user
+
+----
+curl -XDELETE http://ip:port/quota/users/{usernameToBeUsed}/size
+----
+
+Resource name `usernameToBeUsed` should be an existing user
+
+Response codes:
+
+* 204: The quota has been updated to unlimited value.
+* 404: The user does not exist
+
+=== Searching user by quota ratio
+
+----
+curl -XGET 'http://ip:port/quota/users?minOccupationRatio=0.8&maxOccupationRatio=0.99&limit=100&offset=200&domain=domain.com'
+----
+
+Will return:
+
+----
+[
+  {
+    "username":"user@domain.com",
+    "detail": {
+      "global": {
+        "count":252,
+        "size":242
+      },
+      "domain": {
+        "count":152,
+        "size":142
+      },
+      "user": {
+        "count":52,
+        "size":42
+      },
+      "computed": {
+        "count":52,
+        "size":42
+      },
+      "occupation": {
+        "size":48,
+        "count":21,
+        "ratio": {
+          "size":0.9230,
+          "count":0.5,
+          "max":0.9230
+        }
+      }
+    }
+  },
+  ...
+]
+----
+
+Where:
+
+* *minOccupationRatio* is a query parameter determining the minimum occupation ratio of users to be returned.
+* *maxOccupationRatio* is a query parameter determining the maximum occupation ratio of users to be returned.
+* *domain* is a query parameter determining the domain of users to be returned.
+* *limit* is a query parameter determining the maximum number of users to be returned.
+* *offset* is a query parameter determining the number of users to skip.
+
+Please note that users are alphabetically ordered on username.
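The filtering this endpoint applies can be pictured as keeping only the users whose maximum occupation ratio falls within the requested bounds, then ordering them alphabetically. The sketch below works over hypothetical data and is not the server implementation:

```shell
# Hypothetical username / max occupation ratio pairs.
cat > user_ratios.txt <<'EOF'
bob@domain.com 0.50
alice@domain.com 0.92
carol@domain.com 0.85
EOF

# Keep users with min <= ratio <= max, alphabetically ordered on username,
# mirroring the minOccupationRatio / maxOccupationRatio query parameters.
matches=$(awk -v min=0.8 -v max=0.99 '$2+0 >= min+0 && $2+0 <= max+0 { print $1 }' user_ratios.txt | sort)
echo "$matches"

rm user_ratios.txt
```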
+
+The response is a list of usernames, with attached quota details as defined <<Getting_the_quota_for_a_user,here>>.
+
+Response codes:
+
+* 200: List of users had successfully been returned.
+* 400: Validation issues with parameters
+
+=== Recomputing current quotas for users
+
+This task is available on top of Cassandra & JPA products.
+
+----
+curl -XPOST /quota/users?task=RecomputeCurrentQuotas
+----
+
+Will recompute current quotas (count and size) for all users stored in James.
+
+James maintains, per quota, a projection of the current quota count and size.
+As with any projection, it can go out of sync, leading to inconsistent results being returned to the client.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+An admin can specify the concurrency that should be used when running the task:
+
+* `usersPerSecond`: the rate at which users' quotas should be reprocessed, per second.
+Defaults to 1.
+
+This optional parameter must have a strictly positive integer as a value and be passed as a query parameter.
+
+Example:
+
+----
+curl -XPOST /quota/users?task=RecomputeCurrentQuotas&usersPerSecond=20
+----
+
+The scheduled task will have the following type `recompute-current-quotas` and the following `additionalInformation`:
+
+----
+{
+  "type":"recompute-current-quotas",
+  "processedQuotaRoots": 3,
+  "failedQuotaRoots": ["#private&bob@localhost"],
+  "runningOptions": {
+    "usersPerSecond":20
+  }
+}
+----
+
+WARNING: This task does not take into account concurrent modifications during a single current quota recomputation.
+Rerunning the task will _eventually_ provide the consistent result.
+
+== Administrating quotas by domains
+
+* <<Getting_the_quota_for_a_domain,Getting the quota for a domain>>
+* <<Updating_the_quota_for_a_domain,Updating the quota for a domain>>
+* <<Getting_the_quota_count_for_a_domain,Getting the quota count for a domain>>
+* <<Updating_the_quota_count_for_a_domain,Updating the quota count for a domain>>
+* <<Deleting_the_quota_count_for_a_domain,Deleting the quota count for a domain>>
+* <<Getting_the_quota_size_for_a_domain,Getting the quota size for a domain>>
+* <<Updating_the_quota_size_for_a_domain,Updating the quota size for a domain>>
+* <<Deleting_the_quota_size_for_a_domain,Deleting the quota size for a domain>>
+
+=== Getting the quota for a domain
+
+----
+curl -XGET http://ip:port/quota/domains/{domainToBeUsed}
+----
+
+Resource name `domainToBeUsed` should be an existing domain.
+For example:
+
+----
+curl -XGET http://ip:port/quota/domains/james.org
+----
+
+The answer will detail the default quota applied to users belonging to that domain:
+
+----
+{
+  "global": {
+    "count":252,
+    "size":null
+  },
+  "domain": {
+    "count":null,
+    "size":142
+  },
+  "computed": {
+    "count":252,
+    "size":142
+  }
+}
+----
+
+* The `global` entry represents the default quota limit defined on this James server.
+* The `domain` entry represents the default quota limit allowed for the users of that domain.
+* The `computed` entry represents the quota limit applied by default for the users of that domain, resolved from the upper values.
+
+Note that a `quota` object can contain a fixed value, an empty value (null) or an unlimited value (-1):
+
+----
+{"count":52,"size":42}
+
+{"count":null,"size":null}
+
+{"count":52,"size":-1}
+----
+
+Response codes:
+
+* 200: The domain's quota was successfully retrieved
+* 404: The domain does not exist
+* 405: Domain Quota configuration not supported when virtual hosting is deactivated.
+
+=== Updating the quota for a domain
+
+----
+curl -XPUT http://ip:port/quota/domains/{domainToBeUsed}
+----
+
+Resource name `domainToBeUsed` should be an existing domain.
+
+The body can contain a fixed value, an empty value (null) or an unlimited value (-1):
+
+----
+{"count":52,"size":42}
+
+{"count":null,"size":null}
+
+{"count":52,"size":-1}
+----
+
+Response codes:
+
+* 204: The quota has been updated
+* 400: The body is neither a positive integer nor an unlimited value (-1).
+* 404: The domain does not exist
+* 405: Domain Quota configuration not supported when virtual hosting is deactivated.
+
+=== Getting the quota count for a domain
+
+----
+curl -XGET http://ip:port/quota/domains/{domainToBeUsed}/count
+----
+
+Resource name `domainToBeUsed` should be an existing domain.
+
+The answer looks like:
+
+----
+52
+----
+
+Response codes:
+
+* 200: The domain's quota was successfully retrieved
+* 204: No quota count limit is defined at the domain level for this domain
+* 404: The domain does not exist
+* 405: Domain Quota configuration not supported when virtual hosting is deactivated.
+
+=== Updating the quota count for a domain
+
+----
+curl -XPUT http://ip:port/quota/domains/{domainToBeUsed}/count
+----
+
+Resource name `domainToBeUsed` should be an existing domain.
+
+The body can contain a fixed value or an unlimited value (-1):
+
+----
+52
+----
+
+Response codes:
+
+* 204: The quota has been updated
+* 400: The body is neither a positive integer nor an unlimited value (-1).
+* 404: The domain does not exist
+* 405: Domain Quota configuration not supported when virtual hosting is deactivated.
+
+=== Deleting the quota count for a domain
+
+----
+curl -XDELETE http://ip:port/quota/domains/{domainToBeUsed}/count
+----
+
+Resource name `domainToBeUsed` should be an existing domain.
+
+Response codes:
+
+* 204: The quota has been updated to unlimited value.
+* 404: The domain does not exist
+* 405: Domain Quota configuration not supported when virtual hosting is deactivated.
+
+=== Getting the quota size for a domain
+
+----
+curl -XGET http://ip:port/quota/domains/{domainToBeUsed}/size
+----
+
+Resource name `domainToBeUsed` should be an existing domain.
+
+The answer looks like:
+
+----
+52
+----
+
+Response codes:
+
+* 200: The domain's quota was successfully retrieved
+* 204: No quota size limit is defined at the domain level for this domain
+* 404: The domain does not exist
+* 405: Domain Quota configuration not supported when virtual hosting is deactivated.
+
+=== Updating the quota size for a domain
+
+----
+curl -XPUT http://ip:port/quota/domains/{domainToBeUsed}/size
+----
+
+Resource name `domainToBeUsed` should be an existing domain.
+
+The body can contain a fixed value or an unlimited value (-1):
+
+----
+52
+----
+
+Response codes:
+
+* 204: The quota has been updated
+* 400: The body is neither a positive integer nor an unlimited value (-1).
+* 404: The domain does not exist
+* 405: Domain Quota configuration not supported when virtual hosting is deactivated.
+
+=== Deleting the quota size for a domain
+
+----
+curl -XDELETE http://ip:port/quota/domains/{domainToBeUsed}/size
+----
+
+Resource name `domainToBeUsed` should be an existing domain.
+
+Response codes:
+
+* 204: The quota has been updated to unlimited value.
+* 404: The domain does not exist
+
+== Administrating global quotas
+
+* <<Getting_the_global_quota,Getting the global quota>>
+* <<Updating_global_quota,Updating global quota>>
+* <<Getting_the_global_quota_count,Getting the global quota count>>
+* <<Updating_the_global_quota_count,Updating the global quota count>>
+* <<Deleting_the_global_quota_count,Deleting the global quota count>>
+* <<Getting_the_global_quota_size,Getting the global quota size>>
+* <<Updating_the_global_quota_size,Updating the global quota size>>
+* <<Deleting_the_global_quota_size,Deleting the global quota size>>
+
+=== Getting the global quota
+
+----
+curl -XGET http://ip:port/quota
+----
+
+The answer is the details of the global quota.
+
+----
+{
+  "count":252,
+  "size":242
+}
+----
+
+Note that a `quota` object can contain a fixed value, an empty value (null) or an unlimited value (-1):
+
+----
+{"count":52,"size":42}
+
+{"count":null,"size":null}
+
+{"count":52,"size":-1}
+----
+
+Response codes:
+
+* 200: The quota was successfully retrieved
+
+=== Updating global quota
+
+----
+curl -XPUT http://ip:port/quota
+----
+
+The body can contain a fixed value, an empty value (null) or an unlimited value (-1):
+
+----
+{"count":52,"size":42}
+
+{"count":null,"size":null}
+
+{"count":52,"size":-1}
+----
+
+Response codes:
+
+* 204: The quota has been updated
+* 400: The body is neither a positive integer nor an unlimited value (-1).
+
+=== Getting the global quota count
+
+----
+curl -XGET http://ip:port/quota/count
+----
+
+The answer looks like:
+
+----
+52
+----
+
+Response codes:
+
+* 200: The quota was successfully retrieved
+* 204: No quota count limit is defined at the global level
+
+=== Updating the global quota count
+
+----
+curl -XPUT http://ip:port/quota/count
+----
+
+The body can contain a fixed value or an unlimited value (-1):
+
+----
+52
+----
+
+Response codes:
+
+* 204: The quota has been updated
+* 400: The body is neither a positive integer nor an unlimited value (-1).
+
+=== Deleting the global quota count
+
+----
+curl -XDELETE http://ip:port/quota/count
+----
+
+Response codes:
+
+* 204: The quota has been updated to unlimited value.
+
+=== Getting the global quota size
+
+----
+curl -XGET http://ip:port/quota/size
+----
+
+The answer looks like:
+
+----
+52
+----
+
+Response codes:
+
+* 200: The quota was successfully retrieved
+* 204: No quota size limit is defined at the global level
+
+=== Updating the global quota size
+
+----
+curl -XPUT http://ip:port/quota/size
+----
+
+The body can contain a fixed value or an unlimited value (-1):
+
+----
+52
+----
+
+Response codes:
+
+* 204: The quota has been updated
+* 400: The body is neither a positive integer nor an unlimited value (-1).
+
+=== Deleting the global quota size
+
+----
+curl -XDELETE http://ip:port/quota/size
+----
+
+Response codes:
+
+* 204: The quota has been updated to unlimited value.
+
+== Cassandra Schema upgrades
+
+Cassandra upgrades imply the creation of new tables.
+Thus restarting James is needed, as new tables are created on restart.
+
+Once done, we ship code that tries to read from the new tables and, if that is not possible, falls back to the old tables.
+You can thus safely run without running additional migrations.
+
+On the fly migration can be enabled.
+However, one might want to force the migration in a controlled fashion, and automatically update the current schema version used (asserting in the database that old versions are no longer used, as the corresponding tables are empty).
+Note that this process is safe: we ensure the service is not running concurrently on this James instance, that it does not bump the version upon partial failures, and that version upgrades remain idempotent under race conditions.
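
The controlled upgrade flow can be sketched as: read the current version, read the latest available one, and trigger the upgrade only when the former is behind. The helper below only illustrates the version comparison over the `{"version": N}` payloads documented in the following subsections; the commented `curl` calls show where those endpoints would slot in:

```shell
# extract_version: pull the integer out of a {"version": N} payload,
# the shape returned by the version endpoints documented below.
extract_version() {
  echo "$1" | sed 's/[^0-9]//g'
}

# needs_upgrade CURRENT_JSON LATEST_JSON: true when current < latest.
needs_upgrade() {
  [ "$(extract_version "$1")" -lt "$(extract_version "$2")" ]
}

# current=$(curl -s -XGET http://ip:port/cassandra/version)
# latest=$(curl -s -XGET http://ip:port/cassandra/version/latest)
if needs_upgrade '{"version": 2}' '{"version": 3}'; then
  echo "upgrade needed"
  # curl -XPOST http://ip:port/cassandra/version/upgrade/latest
fi
```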
+
+These schema updates can be triggered by webadmin using the Cassandra backend.
+
+Note that currently the progress can be tracked by logs.
+
+* <<Retrieving_current_Cassandra_schema_version,Retrieving current Cassandra schema version>>
+* <<Retrieving_latest_available_Cassandra_schema_version,Retrieving latest available Cassandra schema version>>
+* <<Upgrading_to_a_specific_version,Upgrading to a specific version>>
+* <<Upgrading_to_the_latest_version,Upgrading to the latest version>>
+
+=== Retrieving current Cassandra schema version
+
+----
+curl -XGET http://ip:port/cassandra/version
+----
+
+Will return:
+
+----
+{"version": 2}
+----
+
+Where the number corresponds to the current schema version of the database you are using.
+
+Response codes:
+
+* 200: Success
+
+=== Retrieving latest available Cassandra schema version
+
+----
+curl -XGET http://ip:port/cassandra/version/latest
+----
+
+Will return:
+
+----
+{"version": 3}
+----
+
+Where the number corresponds to the latest available schema version of the database you are using.
+This means you can migrate to this schema version.
+
+Response codes:
+
+* 200: Success
+
+=== Upgrading to a specific version
+
+----
+curl -XPOST http://ip:port/cassandra/version/upgrade -d '3'
+----
+
+Will schedule the run of the migrations you need to reach schema version 3.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 200: Success.
+The scheduled task `taskId` is returned.
+* 400: The version is invalid.
+The version should be a strictly positive number.
+* 410: Error while planning this migration.
+This resource has gone away.
+The reason is mentioned in the body.
+
+Note that several calls to this endpoint will be run in a sequential pattern.
+
+If the server restarts during the migration, the migration is silently aborted.
+
+The scheduled task will have the following type `cassandra-migration` and the following `additionalInformation`:
+
+----
+{"targetVersion":3}
+----
+
+=== Upgrading to the latest version
+
+----
+curl -XPOST http://ip:port/cassandra/version/upgrade/latest
+----
+
+Will schedule the run of the migrations you need to reach the latest schema version.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 200: Success.
+The scheduled task `taskId` is returned.
+* 410: Error while planning this migration.
+This resource has gone away.
+The reason is mentioned in the body.
+
+Note that several calls to this endpoint will be run sequentially.
+
+If the server restarts during the migration, the migration is silently aborted.
+
+The scheduled task will have the following type `cassandra-migration` and the following `additionalInformation`:
+
+----
+{"toVersion":2}
+----
+
+== Correcting ghost mailbox
+
+This is a temporary workaround for the *Ghost mailbox* bug encountered using the Cassandra backend, as described in MAILBOX-322.
+
+You can use the mailbox merging feature in order to merge the old "ghosted" mailbox with the new one.
+
+----
+curl -XPOST http://ip:port/cassandra/mailbox/merging \
+  -d '{"mergeOrigin":"{id1}", "mergeDestination":"{id2}"}' \
+  -H "Content-Type: application/json"
+----
+
+Will schedule a task for:
+
+* Delete references to `id1` mailbox
+* Move its messages into `id2` mailbox
+* Union the rights of both mailboxes
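The \{id1} and \{id2} placeholders stand for real mailbox ids. A minimal sketch of building the JSON body from shell variables (the ids are illustrative, taken from the example output below):

```shell
# Illustrative mailbox ids; substitute the real "ghosted" and destination ids.
id1="5641376-02ed-47bd-bcc7-76ff6262d92a"
id2="4555159-52ae-895f-ccb7-586a4412fb50"
# Build the JSON body expected by the merging endpoint.
body=$(printf '{"mergeOrigin":"%s", "mergeDestination":"%s"}' "$id1" "$id2")
echo "$body"
```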
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: Task generation succeeded.
+Corresponding task id is returned.
+* 400: Unable to parse the body.
+
+The scheduled task will have the following type `mailbox-merging` and the following `additionalInformation`:
+
+----
+{
+  "oldMailboxId":"5641376-02ed-47bd-bcc7-76ff6262d92a",
+  "newMailboxId":"4555159-52ae-895f-ccb7-586a4412fb50",
+  "totalMessageCount": 1,
+  "messageMovedCount": 1,
+  "messageFailedCount": 0
+}
+----
+
+== Creating address group
+
+You can use *webadmin* to define address groups.
+
+When a specific email is sent to the group mail address, every group member will receive it.
+
+Note that the group mail address is virtual: it does not correspond to an existing user.
+
+This feature uses link:/server/config-recipientrewritetable.html[Recipients rewrite table] and requires the https://github.com/apache/james-project/blob/master/server/mailet/mailets/src/main/java/org/apache/james/transport/mailets/RecipientRewriteTable.java[RecipientRewriteTable mailet] to be configured.
+
+Note that email addresses are restricted to the ASCII character set.
+Mail addresses not matching this criterion will be rejected.
+
+* <<Listing_groups,Listing groups>>
+* <<Listing_members_of_a_group,Listing members of a group>>
+* <<Adding_a_group_member,Adding a group member>>
+* <<Removing_a_group_member,Removing a group member>>
+
+=== Listing groups
+
+----
+curl -XGET http://ip:port/address/groups
+----
+
+Will return the groups as a list of JSON Strings representing mail addresses.
+For instance:
+
+----
+["group1@domain.com", "group2@domain.com"]
+----
+
+Response codes:
+
+* 200: Success
+
+=== Listing members of a group
+
+----
+curl -XGET http://ip:port/address/groups/group@domain.com
+----
+
+Will return the group members as a list of JSON Strings representing mail addresses.
+For instance:
+
+----
+["member1@domain.com", "member2@domain.com"]
+----
+
+Response codes:
+
+* 200: Success
+* 400: Group structure is not valid
+* 404: The group does not exist
+
+=== Adding a group member
+
+----
+curl -XPUT http://ip:port/address/groups/group@domain.com/member@domain.com
+----
+
+Will add member@domain.com to group@domain.com, creating the group if needed.
+
+Response codes:
+
+* 204: Success
+* 400: Group structure or member is not valid
+* 400: Domain in the source is not managed by the DomainList
+* 409: Requested group address is already used for another purpose
+
+=== Removing a group member
+
+----
+curl -XDELETE http://ip:port/address/groups/group@domain.com/member@domain.com
+----
+
+Will remove member@domain.com from group@domain.com, removing the group if it is empty after deletion.
+
+Response codes:
+
+* 204: Success
+* 400: Group structure or member is not valid
+
+== Creating address forwards
+
+You can use *webadmin* to define address forwards.
+
+When a specific email is sent to the base mail address, every forward destination address will receive it.
+
+Note that the base address can optionally be part of the forward destinations.
+In that case, the base recipient also receives a copy of the mail.
+Otherwise it is omitted.
+
+Forwards can be defined for existing users.
+This is how they differ from "groups".
+
+This feature uses link:/server/config-recipientrewritetable.html[Recipients rewrite table] and requires the https://github.com/apache/james-project/blob/master/server/mailet/mailets/src/main/java/org/apache/james/transport/mailets/RecipientRewriteTable.java[RecipientRewriteTable mailet] to be configured.
+
+Note that email addresses are restricted to the ASCII character set.
+Mail addresses not matching this criterion will be rejected.
+
+* <<Listing_Forwards,Listing Forwards>>
+* <<Listing_destinations_in_a_forward,Listing destinations in a forward>>
+* <<Adding_a_new_destination_to_a_forward,Adding a new destination to a forward>>
+* <<Removing_a_destination_of_a_forward,Removing a destination of a forward>>
+
+=== Listing Forwards
+
+----
+curl -XGET http://ip:port/address/forwards
+----
+
+Will return the users having forwards configured as a list of JSON Strings representing mail addresses.
+For instance:
+
+----
+["user1@domain.com", "user2@domain.com"]
+----
+
+Response codes:
+
+* 200: Success
+
+=== Listing destinations in a forward
+
+----
+curl -XGET http://ip:port/address/forwards/user@domain.com
+----
+
+Will return the destination addresses of this forward as a list of JSON Strings representing mail addresses.
+For instance:
+
+----
+[
+  {"mailAddress":"destination1@domain.com"},
+  {"mailAddress":"destination2@domain.com"}
+]
+----
+
+Response codes:
+
+* 200: Success
+* 400: Forward structure is not valid
+* 404: The given user has no forwards or does not exist
+
+=== Adding a new destination to a forward
+
+----
+curl -XPUT http://ip:port/address/forwards/user@domain.com/targets/destination@domain.com
+----
+
+Will add destination@domain.com to user@domain.com, creating the forward if needed.
+
+Response codes:
+
+* 204: Success
+* 400: Forward structure or member is not valid
+* 400: Domain in the source is not managed by the DomainList
+* 404: Requested forward address does not match an existing user
+
+=== Removing a destination of a forward
+
+----
+curl -XDELETE http://ip:port/address/forwards/user@domain.com/targets/destination@domain.com
+----
+
+Will remove destination@domain.com from user@domain.com, removing the forward if it is empty after deletion.
+
+Response codes:
+
+* 204: Success
+* 400: Forward structure or member is not valid
+
+== Creating address aliases
+
+You can use *webadmin* to define aliases for a user.
+
+When a specific email is sent to the alias address, the destination address of the alias will receive it.
+
+Aliases can be defined for existing users.
+
+This feature uses link:/server/config-recipientrewritetable.html[Recipients rewrite table] and requires the https://github.com/apache/james-project/blob/master/server/mailet/mailets/src/main/java/org/apache/james/transport/mailets/RecipientRewriteTable.java[RecipientRewriteTable mailet] to be configured.
+
+Note that email addresses are restricted to the ASCII character set.
+Mail addresses not matching this criterion will be rejected.
+
+* <<Listing_users_with_aliases,Listing users with aliases>>
+* <<Listing_alias_sources_of_an_user,Listing alias sources of an user>>
+* <<Adding_a_new_alias_to_an_user,Adding a new alias to an user>>
+* <<Removing_an_alias_of_an_user,Removing an alias of an user>>
+
+=== Listing users with aliases
+
+----
+curl -XGET http://ip:port/address/aliases
+----
+
+Will return the users having aliases configured as a list of JSON Strings representing mail addresses.
+For instance:
+
+----
+["user1@domain.com", "user2@domain.com"]
+----
+
+Response codes:
+
+* 200: Success
+
+=== Listing alias sources of an user
+
+----
+curl -XGET http://ip:port/address/aliases/user@domain.com
+----
+
+Will return the aliases of this user as a list of JSON Strings representing mail addresses.
+For instance:
+
+----
+[
+  {"source":"alias1@domain.com"},
+  {"source":"alias2@domain.com"}
+]
+----
+
+Response codes:
+
+* 200: Success
+* 400: Alias structure is not valid
+
+=== Adding a new alias to an user
+
+----
+curl -XPUT http://ip:port/address/aliases/user@domain.com/sources/alias@domain.com
+----
+
+Will add alias@domain.com to user@domain.com, creating the alias if needed.
+
+Response codes:
+
+* 204: OK
+* 400: Alias structure or member is not valid
+* 400: The alias source already exists as a user
+* 400: Source and destination can't be the same!
+* 400: Domain in the destination or source is not managed by the DomainList
+
+=== Removing an alias of an user
+
+----
+curl -XDELETE http://ip:port/address/aliases/user@domain.com/sources/alias@domain.com
+----
+
+Will remove alias@domain.com from user@domain.com, removing the alias if needed.
+
+Response codes:
+
+* 204: OK
+* 400: Alias structure or member is not valid
+
+== Creating domain mappings
+
+You can use *webadmin* to define domain mappings.
+
+Given a configured source (from) domain and a destination (to) domain, when an email is sent to an address belonging to the source domain, the domain part of this address is rewritten to the destination domain.
+A source (from) domain can have many destination (to) domains.
+
+For example: if the source domain `james.apache.org` maps to the two destination domains `james.org` and `apache-james.org`, then a mail sent to `admin@james.apache.org` will be routed to both `admin@james.org` and `admin@apache-james.org`.
+
+This feature uses link:/server/config-recipientrewritetable.html[Recipients rewrite table] and requires the https://github.com/apache/james-project/blob/master/server/mailet/mailets/src/main/java/org/apache/james/transport/mailets/RecipientRewriteTable.java[RecipientRewriteTable mailet] to be configured.
+
+Note that email addresses are restricted to the ASCII character set.
+Mail addresses not matching this criterion will be rejected.
+
+* <<Listing_all_domain_mappings,Listing all domain mappings>>
+* <<Listing_all_destination_domains_for_a_source_domain,Listing all destination domains for a source domain>>
+* <<Adding_a_domain_mapping,Adding a domain mapping>>
+* <<Removing_a_domain_mapping,Removing a domain mapping>>
+
+=== Listing all domain mappings
+
+----
+curl -XGET http://ip:port/domainMappings
+----
+
+Will return all configured domain mappings:
+
+----
+{
+  "firstSource.org" : ["firstDestination.com", "secondDestination.net"],
+  "secondSource.com" : ["thirdDestination.com", "fourthDestination.net"]
+}
+----
+
+Response codes:
+
+* 200: OK
+
+=== Listing all destination domains for a source domain
+
+----
+curl -XGET http://ip:port/domainMappings/sourceDomain.tld
+----
+
+With `sourceDomain.tld` as the value passed to the `fromDomain` resource name, the API will return all destination domains configured for that domain
+
+----
+["firstDestination.com", "secondDestination.com"]
+----
+
+Response codes:
+
+* 200: OK
+* 400: The `fromDomain` resource name is invalid
+* 404: The `fromDomain` resource name is not found
+
+=== Adding a domain mapping
+
+----
+curl -XPUT http://ip:port/domainMappings/sourceDomain.tld
+----
+
+Body:
+
+----
+destination.tld
+----
+
+With `sourceDomain.tld` as the value passed to the `fromDomain` resource name, the API will add the destination domain specified in the body to that domain
+
+Response codes:
+
+* 204: OK
+* 400: The `fromDomain` resource name is invalid
+* 400: The destination domain specified in the body is invalid
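+
+For instance, passing the body inline (domain names are illustrative):
+
+----
+curl -XPUT http://ip:port/domainMappings/sourceDomain.tld -d 'destination.tld'
+----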
+
+=== Removing a domain mapping
+
+----
+curl -XDELETE http://ip:port/domainMappings/sourceDomain.tld
+----
+
+Body:
+
+----
+destination.tld
+----
+
+With `sourceDomain.tld` as the value passed to the `fromDomain` resource name, the API will remove the destination domain specified in the body from the mappings of that domain
+
+Response codes:
+
+* 204: OK
+* 400: The `fromDomain` resource name is invalid
+* 400: The destination domain specified in the body is invalid
+
+== Creating regex mapping
+
+You can use *webadmin* to create regex mappings.
+
+A regex mapping contains a mapping source and a Java Regular Expression (regex) String as the mapping value.
+Whenever a mail has a recipient matching the mapping source, that mail is re-routed to a new recipient address rewritten by the regex.
+
+This feature uses link:/server/config-recipientrewritetable.html[Recipients rewrite table] and requires the https://github.com/apache/james-project/blob/master/server/mailet/mailets/src/main/java/org/apache/james/transport/mailets/RecipientRewriteTable.java[RecipientRewriteTable API] to be configured.
+
+* <<Adding_a_regex_mapping,Adding a regex mapping>>
+* <<Removing_a_regex_mapping,Removing a regex mapping>>
+
+=== Adding a regex mapping
+
+----
+POST /mappings/regex/mappingSource/targets/regex
+----
+
+Where:
+
+* the `mappingSource` is the path parameter representing the Regex Mapping mapping source
+* the `regex` is the path parameter representing the Regex Mapping regex
+
+The route will add a regex mapping made from `mappingSource` and `regex` to RecipientRewriteTable.
+
+Example:
+
+----
+curl -XPOST http://ip:port/mappings/regex/james@domain.tld/targets/james@.*:james-intern@james.org
+----
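The mapping value `james@.*:james-intern@james.org` can be read as a `pattern:replacement` pair. As an illustrative sketch only (not James's actual implementation; the examples in this section use a literal replacement address), the rewrite behaves like:

```shell
# Illustrative only: split the mapping value on its first colon.
mapping='james@.*:james-intern@james.org'
pattern=${mapping%%:*}        # james@.*
replacement=${mapping#*:}     # james-intern@james.org
recipient='james@domain.tld'
# If the recipient matches the pattern, it is rewritten to the replacement.
if printf '%s' "$recipient" | grep -Eq "^${pattern}$"; then
  recipient="$replacement"
fi
echo "$recipient"   # james-intern@james.org
```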
+
+Response codes:
+
+* 204: Mapping added successfully.
+* 400: Invalid `mappingSource` path parameter.
+* 400: Invalid `regex` path parameter.
+
+=== Removing a regex mapping
+
+----
+DELETE /mappings/regex/{mappingSource}/targets/{regex}
+----
+
+Where:
+
+* the `mappingSource` is the path parameter representing the Regex Mapping mapping source
+* the `regex` is the path parameter representing the Regex Mapping regex
+
+The route will remove the regex mapping made from `regex` for the mapping source `mappingSource` from the RecipientRewriteTable.
+
+Example:
+
+----
+curl -XDELETE http://ip:port/mappings/regex/james@domain.tld/targets/[O_O]:james-intern@james.org
+----
+
+Response codes:
+
+* 204: Mapping deleted successfully.
+* 400: Invalid `mappingSource` path parameter.
+* 400: Invalid `regex` path parameter.
+
+== Address Mappings
+
+You can use *webadmin* to define address mappings.
+
+When a specific email is sent to the base mail address, every destination address will receive it.
+
+This feature uses link:/server/config-recipientrewritetable.html[Recipients rewrite table] and requires the https://github.com/apache/james-project/blob/master/server/mailet/mailets/src/main/java/org/apache/james/transport/mailets/RecipientRewriteTable.java[RecipientRewriteTable mailet] to be configured.
+
+Note that email addresses are restricted to the ASCII character set.
+Mail addresses not matching this criterion will be rejected.
+
+Please use address mappings with caution, as they are not typed.
+If you know the type of your address (forward, alias, domain, group, etc.), prefer using the routes corresponding to those types.
+
+Here are the following actions available on address mappings:
+
+* <<List_all_address_mappings,List all address mappings>>
+* <<Add_an_address_mapping,Add an address mapping>>
+* <<Remove_an_address_mapping,Remove an address mapping>>
+
+=== List all address mappings
+
+----
+curl -XGET http://ip:port/mappings
+----
+
+Get all mappings from the link:/server/config-recipientrewritetable.html[Recipients rewrite table]. Supported mapping types are the following:
+
+* <<Creating_address_aliases,Alias>>
+* <<Address_Mappings,Address>>
+* <<Creating_domain_mappings,Domain>>
+* Error
+* <<Creating_address_forwards,Forward>>
+* <<Creating_address_group,Group>>
+* Regex
+
+Response body:
+
+----
+{
+  "alias@domain.tld": [
+    {
+      "type": "Alias",
+      "mapping": "user@domain.tld"
+    },
+    {
+      "type": "Group",
+      "mapping": "group-user@domain.tld"
+    }
+  ],
+  "aliasdomain.tld": [
+    {
+      "type": "Domain",
+      "mapping": "realdomain.tld"
+    }
+  ],
+  "group@domain.tld": [
+    {
+      "type": "Address",
+      "mapping": "user@domain.tld"
+    }
+  ]
+}
+----
+
+Response code:
+
+* 200: OK
+
+=== Add an address mapping
+
+----
+curl -XPOST http://ip:port/mappings/address/{mappingSource}/targets/{destinationAddress}
+----
+
+Add an address mapping to the link:/server/config-recipientrewritetable.html[Recipients rewrite table].
+Mapping source is the value of \{mappingSource}, mapping destination is the value of \{destinationAddress}, and the type of mapping destination is Address.
+
+Response codes:
+
+* 204: Action successfully performed
+* 400: Invalid parameters
+
+=== Remove an address mapping
+
+----
+curl -XDELETE http://ip:port/mappings/address/{mappingSource}/targets/{destinationAddress}
+----
+
+* Remove an address mapping from the link:/server/config-recipientrewritetable.html[Recipients rewrite table]
+* Mapping source is the value of `mappingSource`
+* Mapping destination is the value of `destinationAddress`
+* Type of mapping destination is Address
+
+Response codes:
+
+* 204: Action successfully performed
+* 400: Invalid parameters
+
+== User Mappings
+
+* <<Listing_User_Mappings,Listing User Mappings>>
+
+=== Listing User Mappings
+
+This endpoint allows retrieving all mappings of a given user.
+
+----
+curl -XGET http://ip:port/mappings/user/{userAddress}
+----
+
+Return all mappings of a user where:
+
+* `userAddress`: the selected user
+
+Response body:
+
+----
+[
+  {
+    "type": "Address",
+    "mapping": "user123@domain.tld"
+  },
+  {
+    "type": "Alias",
+    "mapping": "aliasuser123@domain.tld"
+  },
+  {
+    "type": "Group",
+    "mapping": "group123@domain.tld"
+  }
+]
+----
+
+Response codes:
+
+* 200: OK
+* 400: Invalid parameter value
+
+== Administrating mail repositories
+
+* <<Create_a_mail_repository,Create a mail repository>>
+* <<Listing_mail_repositories,Listing mail repositories>>
+* <<Getting_additional_information_for_a_mail_repository,Getting additional information for a mail repository>>
+* <<Listing_mails_contained_in_a_mail_repository,Listing mails contained in a mail repository>>
+* <<Reading.2Fdownloading_a_mail_details,Reading/downloading a mail details>>
+* <<Removing_a_mail_from_a_mail_repository,Removing a mail from a mail repository>>
+* <<Removing_all_mails_from_a_mail_repository,Removing all mails from a mail repository>>
+* <<Reprocessing_mails_from_a_mail_repository,Reprocessing mails from a mail repository>>
+* <<Reprocessing_a_specific_mail_from_a_mail_repository,Reprocessing a specific mail from a mail repository>>
+
+=== Create a mail repository
+
+----
+curl -XPUT http://ip:port/mailRepositories/{encodedPathOfTheRepository}?protocol={someProtocol}
+----
+
+Resource name `encodedPathOfTheRepository` should be the resource path of the created mail repository.
+Example:
+
+----
+curl -XPUT http://ip:port/mailRepositories/mailRepo?protocol=file
+----
+
+Response codes:
+
+* 204: The repository is created
+
+=== Listing mail repositories
+
+----
+curl -XGET http://ip:port/mailRepositories
+----
+
+The answer looks like:
+
+----
+[
+    {
+        "repository": "var/mail/error/",
+        "path": "var%2Fmail%2Ferror%2F"
+    },
+    {
+        "repository": "var/mail/relay-denied/",
+        "path": "var%2Fmail%2Frelay-denied%2F"
+    },
+    {
+        "repository": "var/mail/spam/",
+        "path": "var%2Fmail%2Fspam%2F"
+    },
+    {
+        "repository": "var/mail/address-error/",
+        "path": "var%2Fmail%2Faddress-error%2F"
+    }
+]
+----
+
+You can use `path`, the encoded path of the repository, to access it in later requests.
+
+Response codes:
+
+* 200: The list of mail repositories
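The `path` values above are percent-encoded repository paths. A minimal sketch of producing such an encoded path yourself (assuming `python3` is available):

```shell
# Percent-encode a repository path, including the "/" separators.
encoded=$(python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "var/mail/error/")
echo "$encoded"   # var%2Fmail%2Ferror%2F
```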
+
+=== Getting additional information for a mail repository
+
+----
+curl -XGET http://ip:port/mailRepositories/{encodedPathOfTheRepository}
+----
+
+Resource name `encodedPathOfTheRepository` should be the resource path of an existing mail repository.
+Example:
+
+----
+curl -XGET http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F
+----
+
+The answer looks like:
+
+----
+{
+   "repository": "var/mail/error/",
+   "path": "mail%2Ferror%2F",
+   "size": 243
+}
+----
+
+Response codes:
+
+* 200: Additional information for that repository
+* 404: This repository can not be found
+
+=== Listing mails contained in a mail repository
+
+----
+curl -XGET http://ip:port/mailRepositories/{encodedPathOfTheRepository}/mails
+----
+
+Resource name `encodedPathOfTheRepository` should be the resource path of an existing mail repository.
+Example:
+
+----
+curl -XGET http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F/mails
+----
+
+The answer will contain all the mail keys stored in that repository.
+
+----
+[
+    "mail-key-1",
+    "mail-key-2",
+    "mail-key-3"
+]
+----
+
+Note that this can be used to read mail details.
+
+You can pass additional URL parameters to this call in order to limit the output:
+
+* A limit: no more elements than the specified limit will be returned.
+This needs to be strictly positive.
+If no value is specified, no limit will be applied.
+* An offset: allows skipping elements.
+This needs to be positive.
+Default value is zero.
+
+Example:
+
+----
+curl -XGET 'http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F/mails?limit=100&offset=500'
+----
+
+Response codes:
+
+* 200: The list of mail keys contained in that mail repository
+* 400: Invalid parameters
+* 404: This repository can not be found
+
+=== Reading/downloading a mail details
+
+----
+curl -XGET http://ip:port/mailRepositories/{encodedPathOfTheRepository}/mails/mailKey
+----
+
+Resource name `encodedPathOfTheRepository` should be the resource path of an existing mail repository.
+Resource name `mailKey` should be the key of a mail stored in that repository.
+Example:
+
+----
+curl -XGET http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F/mails/mail-key-1
+----
+
+If the Accept header in the request is "application/json", then the response looks like:
+
+----
+{
+    "name": "mail-key-1",
+    "sender": "sender@domain.com",
+    "recipients": ["recipient1@domain.com", "recipient2@domain.com"],
+    "state": "address-error",
+    "error": "A small message explaining what happened to that mail...",
+    "remoteHost": "111.222.333.444",
+    "remoteAddr": "127.0.0.1",
+    "lastUpdated": null
+}
+----
+
+If the Accept header in the request is "message/rfc822", then the response will be the _eml_ file itself.
+
+The additional query parameter `additionalFields` adds the following information to the response for the supported values (only works with the "application/json" Accept header):
+
+* attributes
+* headers
+* textBody
+* htmlBody
+* messageSize
+* perRecipientsHeaders
+
+----
+curl -XGET http://ip:port/mailRepositories/file%3A%2F%2Fvar%2Fmail%2Ferror%2F/mails/mail-key-1?additionalFields=attributes,headers,textBody,htmlBody,messageSize,perRecipientsHeaders
+----
+
+Gives the following kind of response:
+
+----
+{
+    "name": "mail-key-1",
+    "sender": "sender@domain.com",
+    "recipients": ["recipient1@domain.com", "recipient2@domain.com"],
+    "state": "address-error",
+    "error": "A small message explaining what happened to that mail...",
+    "remoteHost": "111.222.333.444",
+    "remoteAddr": "127.0.0.1",
+    "lastUpdated": null,
+    "attributes": {
+      "name2": "value2",
+      "name1": "value1"
+    },
+    "perRecipientsHeaders": {
+      "third@party": {
+        "headerName1": [
+          "value1",
+          "value2"
+        ],
+        "headerName2": [
+          "value3",
+          "value4"
+        ]
+      }
+    },
+    "headers": {
+      "headerName4": [
+        "value6",
+        "value7"
+      ],
+      "headerName3": [
+        "value5",
+        "value8"
+      ]
+    },
+    "textBody": "My body!!",
+    "htmlBody": "My <em>body</em>!!",
+    "messageSize": 42424242
+}
+----
+
+Response codes:
+
+* 200: Details of the mail
+* 404: This repository or mail can not be found
+
+=== Removing a mail from a mail repository
+
+----
+curl -XDELETE http://ip:port/mailRepositories/{encodedPathOfTheRepository}/mails/mailKey
+----
+
+Resource name `encodedPathOfTheRepository` should be the resource path of an existing mail repository.
+Resource name `mailKey` should be the key of a mail stored in that repository.
+Example:
+
+----
+curl -XDELETE http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F/mails/mail-key-1
+----
+
+Response codes:
+
+* 204: This mail no longer exists in this repository
+* 404: This repository can not be found
+
+=== Removing all mails from a mail repository
+
+----
+curl -XDELETE http://ip:port/mailRepositories/{encodedPathOfTheRepository}/mails
+----
+
+Resource name `encodedPathOfTheRepository` should be the resource path of an existing mail repository.
+Example:
+
+----
+curl -XDELETE http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F/mails
+----
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: Task generation succeeded.
+Corresponding task id is returned.
+* 404: Could not find that mail repository
+
+The scheduled task will have the following type `clear-mail-repository` and the following `additionalInformation`:
+
+----
+{
+  "mailRepositoryPath":"var/mail/error/",
+  "initialCount": 243,
+  "remainingCount": 17
+}
+----
+
+=== Reprocessing mails from a mail repository
+
+Sometimes, you want to re-process emails stored in a mail repository.
+For instance, you can make a configuration error, or there can be a James bug that makes the processing of some mails fail.
+Those mails will be stored in a mail repository.
+Once you have solved the problem, you can reprocess them.
+
+To reprocess mails from a repository:
+
+----
+curl -XPATCH http://ip:port/mailRepositories/{encodedPathOfTheRepository}/mails?action=reprocess
+----
+
+Resource name `encodedPathOfTheRepository` should be the resource path of an existing mail repository.
+
+For instance:
+
+----
+curl -XPATCH http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F/mails?action=reprocess
+----
+
+Additional query parameters are supported:
+
+* `queue` allows you to target the mail queue you want to enqueue the mails in.
+Defaults to `spool`.
+* `processor` allows you to override the state of the reprocessed mails, and thus select the processor they will start their processing in.
+Defaults to the `state` field of each processed email.
+
+For instance:
+
+----
+curl -XPATCH 'http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F/mails?action=reprocess&processor=transport&queue=spool'
+----
+
+Note that the `action` query parameter is compulsory and can only take the value `reprocess`.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: Task generation succeeded.
+Corresponding task id is returned.
+* 404: Could not find that mail repository
+
+The scheduled task will have the following type `reprocessing-all` and the following `additionalInformation`:
+
+----
+{
+  "mailRepositoryPath":"var/mail/error/",
+  "targetQueue":"spool",
+  "targetProcessor":"transport",
+  "initialCount": 243,
+  "remainingCount": 17
+}
+----
+
+=== Reprocessing a specific mail from a mail repository
+
+To reprocess a specific mail from a mail repository:
+
+----
+curl -XPATCH http://ip:port/mailRepositories/{encodedPathOfTheRepository}/mails/mailKey?action=reprocess
+----
+
+Resource name `encodedPathOfTheRepository` should be the resource path of an existing mail repository.
+Resource name `mailKey` should be the key of a mail stored in that repository.
+
+For instance:
+
+----
+curl -XPATCH http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F/mails/name1?action=reprocess
+----
+
+Additional query parameters are supported:
+
+* `queue` allows you to target the mail queue you want to enqueue the mails in.
+Defaults to `spool`.
+* `processor` allows you to override the state of the reprocessed mails, and thus select the processor they will start their processing in.
+Defaults to the `state` field of each processed email.
+
+While `processor` is an optional parameter, not specifying it will result in reprocessing the mails in their current state (https://james.apache.org/server/feature-mailetcontainer.html#Processors[see documentation about processors and state]).
+Consequently, only a few cases will give a different result, definitively storing them out of the mail repository.
+
+For instance:
+
+----
+curl -XPATCH 'http://ip:port/mailRepositories/var%2Fmail%2Ferror%2F/mails/name1?action=reprocess&processor=transport&queue=spool'
+----
+
+Note that the `action` query parameter is compulsory and can only take the value `reprocess`.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: Task generation succeeded.
+Corresponding task id is returned.
+* 404: Could not find that mail repository
+
+The scheduled task will have the following type `reprocessing-one` and the following `additionalInformation`:
+
+----
+{
+  "mailRepositoryPath":"var/mail/error/",
+  "targetQueue":"spool",
+  "targetProcessor":"transport",
+  "mailKey":"name1"
+}
+----
+
+== Administrating mail queues
+
+* <<Listing_mail_queues,Listing mail queues>>
+* <<Getting_a_mail_queue_details,Getting a mail queue details>>
+* <<Listing_the_mails_of_a_mail_queue,Listing the mails of a mail queue>>
+* <<Deleting_mails_from_a_mail_queue,Deleting mails from a mail queue>>
+* <<Clearing_a_mail_queue,Clearing a mail queue>>
+* <<Flushing_mails_from_a_mail_queue,Flushing mails from a mail queue>>
+
+=== Listing mail queues
+
+----
+curl -XGET http://ip:port/mailQueues
+----
+
+The answer looks like:
+
+----
+["outgoing","spool"]
+----
+
+Response codes:
+
+* 200: The list of mail queues
+
+=== Getting a mail queue details
+
+----
+curl -XGET http://ip:port/mailQueues/{mailQueueName}
+----
+
+Resource name `mailQueueName` is the name of a mail queue; this command will return the details of the given mail queue.
+For instance:
+
+----
+{"name":"outgoing","size":0}
+----
+
+Response codes:
+
+* 200: Success
+* 400: Mail queue is not valid
+* 404: The mail queue does not exist
+
+=== Listing the mails of a mail queue
+
+----
+curl -XGET http://ip:port/mailQueues/{mailQueueName}/mails
+----
+
+Additional URL query parameters:
+
+* `limit`: Maximum number of mails returned in a single call.
+Only strictly positive integer values are accepted.
+Example:
+
+----
+curl -XGET http://ip:port/mailQueues/{mailQueueName}/mails?limit=100
+----
+
+The answer looks like:
+
+----
+[{
+  "name": "Mail1516976156284-8b3093b9-eebf-4c40-9c26-1450f4fcdc3c-to-test.com",
+  "sender": "user@james.linagora.com",
+  "recipients": ["someone@test.com"],
+  "nextDelivery": "1969-12-31T23:59:59.999Z"
+}]
+----
+
+Response codes:
+
+* 200: Success
+* 400: Mail queue is not valid or limit is invalid
+* 404: The mail queue does not exist
+
+=== Deleting mails from a mail queue
+
+----
+curl -XDELETE http://ip:port/mailQueues/{mailQueueName}/mails?sender=senderMailAddress
+----
+
+This request should have exactly one query parameter from the following list:
+
+* `sender`: a mail address (e.g. sender@james.org)
+* `name`: a string
+* `recipient`: a mail address (e.g. recipient@james.org)
+
+The mails from the given mail queue matching the query parameter will be deleted.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: Task generation succeeded.
+Corresponding task id is returned.
+* 400: Invalid request
+* 404: The mail queue does not exist
+
+The scheduled task will have the following type `delete-mails-from-mail-queue` and the following `additionalInformation`:
+
+----
+{
+  "queue":"outgoing",
+  "initialCount":10,
+  "remainingCount": 5,
+  "sender": "sender@james.org",
+  "name": "Java Developer",
+  "recipient": "recipient@james.org"
+}
+----
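+
+The two counters in `additionalInformation` let you derive the task's progress.
+A quick sketch, using the values from the example above:
+
```shell
#!/bin/sh
# initialCount/remainingCount come from the task's additionalInformation.
initial=10
remaining=5
progress=$(( (initial - remaining) * 100 / initial ))
echo "${progress}%"   # prints 50%
```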
+
+=== Clearing a mail queue
+
+----
+curl -XDELETE http://ip:port/mailQueues/{mailQueueName}/mails
+----
+
+All mails from the given mail queue will be deleted.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: Task generation succeeded.
+Corresponding task id is returned.
+* 400: Invalid request
+* 404: The mail queue does not exist
+
+The scheduled task will have the following type `clear-mail-queue` and the following `additionalInformation`:
+
+----
+{
+  "queue":"outgoing",
+  "initialCount":10,
+  "remainingCount": 0
+}
+----
+
+=== Flushing mails from a mail queue
+
+----
+curl -XPATCH http://ip:port/mailQueues/{mailQueueName}?delayed=true \
+  -d '{"delayed": false}' \
+  -H "Content-Type: application/json"
+----
+
+This request should have the query parameter _delayed_ set to _true_, in order to indicate only delayed mails are affected.
+The payload should set the `delayed` field to `false` in order to remove the delay.
+This is the only supported combination, and it performs a flush.
+
+The mails delayed in the given mail queue will be flushed.
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 204: Success (No content)
+* 400: Invalid request
+* 404: The mail queue does not exist
+
+== Administrating DLP Configuration
+
+DLP (stands for Data Leak Prevention) is supported by James.
+A DLP matcher will, on incoming emails, execute regular expressions on email sender, recipients or content, in order to report suspicious emails to an administrator.
+WebAdmin can be used to manage these DLP rules on a per `senderDomain` basis.
+
+`senderDomain` is the domain of the sender of incoming emails, for example `apache.org` or `james.org`.
+Each `senderDomain` corresponds to a distinct DLP configuration.
+
+* <<List_DLP_configuration_by_sender_domain,List DLP configuration by sender domain>>
+* <<Store_DLP_configuration_by_sender_domain,Store DLP configuration by sender domain>>
+* <<Remove_DLP_configuration_by_sender_domain,Remove DLP configuration by sender domain>>
+* <<Fetch_a_DLP_configuration_item_by_sender_domain_and_rule_id,Fetch a DLP configuration item by sender domain and rule id>>
+
+=== List DLP configuration by sender domain
+
+Retrieve the DLP configuration for the corresponding `senderDomain`. A configuration contains a list of configuration items.
+
+----
+curl -XGET http://ip:port/dlp/rules/{senderDomain}
+----
+
+Response codes:
+
+* 200: A list of dlp configuration items is returned
+* 400: Invalid `senderDomain` or payload in request
+* 404: The domain does not exist.
+
+This is an example of returned body.
+The rules field is a list of rules as described below.
+
+----
+{"rules": [
+  {
+    "id": "1",
+    "expression": "james.org",
+    "explanation": "Find senders or recipients containing james[any char]org",
+    "targetsSender": true,
+    "targetsRecipients": true,
+    "targetsContent": false
+  },
+  {
+    "id": "2",
+    "expression": "apache.org",
+    "explanation": "Find senders containing apache[any char]org",
+    "targetsSender": true,
+    "targetsRecipients": false,
+    "targetsContent": false
+  }
+]}
+----
+
+=== Store DLP configuration by sender domain
+
+Store a DLP configuration for the corresponding `senderDomain`. If any configuration item in the request was already stored before, it will not be stored again.
+
+----
+curl -XPUT http://ip:port/dlp/rules/{senderDomain}
+----
+
+The body can contain a list of DLP configuration items formed by those fields:
+
+* `id`(String) is mandatory, unique identifier of the configuration item
+* `expression`(String) is mandatory, regular expression to match contents of targets
+* `explanation`(String) is optional, description of the configuration item
+* `targetsSender`(boolean) is optional and defaults to false.
+If true, `expression` will be applied to Sender and to From headers of the mail
+* `targetsContent`(boolean) is optional and defaults to false.
+If true, `expression` will be applied to Subject headers and textual bodies (text/plain and text/html) of the mail
+* `targetsRecipients`(boolean) is optional and defaults to false.
+If true, `expression` will be applied to recipients of the mail
+
+This is an example of a request body.
+The `rules` field is a list of rules as described above.
+
+----
+{"rules": [
+  {
+    "id": "1",
+    "expression": "james.org",
+    "explanation": "Find senders or recipients containing james[any char]org",
+    "targetsSender": true,
+    "targetsRecipients": true,
+    "targetsContent": false
+  },
+  {
+    "id": "2",
+    "expression": "apache.org",
+    "explanation": "Find senders containing apache[any char]org",
+    "targetsSender": true,
+    "targetsRecipients": false,
+    "targetsContent": false
+  }
+]}
+----
+
+Response codes:
+
+* 204: List of dlp configuration items is stored
+* 400: Invalid `senderDomain` or payload in request
+* 404: The domain does not exist.
+
+=== Remove DLP configuration by sender domain
+
+Remove the DLP configuration for the corresponding `senderDomain`
+
+----
+curl -XDELETE http://ip:port/dlp/rules/{senderDomain}
+----
+
+Response codes:
+
+* 204: DLP configuration is removed
+* 400: Invalid `senderDomain` or payload in request
+* 404: The domain does not exist.
+
+=== Fetch a DLP configuration item by sender domain and rule id
+
+Retrieve a DLP configuration rule for the corresponding `senderDomain` and `ruleId`
+
+----
+curl -XGET http://ip:port/dlp/rules/{senderDomain}/rules/{ruleId}
+----
+
+Response codes:
+
+* 200: A dlp configuration item is returned
+* 400: Invalid `senderDomain` or payload in request
+* 404: The domain and/or the rule does not exist.
+
+This is an example of returned body.
+
+----
+{
+  "id": "1",
+  "expression": "james.org",
+  "explanation": "Find senders or recipients containing james[any char]org",
+  "targetsSender": true,
+  "targetsRecipients": true,
+  "targetsContent": false
+}
+----
+
+== Administrating Sieve quotas
+
+Some limitations on the space a user's Sieve scripts can occupy can be configured by default, and overridden per user.
+
+* <<Retrieving_global_sieve_quota,Retrieving global sieve quota>>
+* <<Updating_global_sieve_quota,Updating global sieve quota>>
+* <<Removing_global_sieve_quota,Removing global sieve quota>>
+* <<Retrieving_user_sieve_quota,Retrieving user sieve quota>>
+* <<Updating_user_sieve_quota,Updating user sieve quota>>
+* <<Removing_user_sieve_quota,Removing user sieve quota>>
+
+=== Retrieving global sieve quota
+
+This endpoint allows retrieving the global Sieve quota, which is used as the default for users:
+
+----
+curl -XGET http://ip:port/sieve/quota/default
+----
+
+Will return the default byte count allowed per user on this server.
+
+----
+102400
+----
+
+Response codes:
+
+* 200: Request is a success and the value is returned
+* 204: No default quota is being configured
+
+=== Updating global sieve quota
+
+This endpoint allows updating the global Sieve quota, which is used as the default for users:
+
+----
+curl -XPUT http://ip:port/sieve/quota/default
+----
+
+With the body being the default byte count allowed per user on this server.
+
+----
+102400
+----
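+
+The body is a plain byte count; shell arithmetic can compute it, for example for a 100 KiB default:
+
```shell
#!/bin/sh
# 100 KiB expressed in bytes, usable as the PUT body.
bytes=$((100 * 1024))
echo "$bytes"   # prints 102400
# e.g. (host and port are placeholders):
#   curl -XPUT http://ip:port/sieve/quota/default -d "$bytes"
```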
+
+Response codes:
+
+* 204: Operation succeeded
+* 400: Invalid payload
+
+=== Removing global sieve quota
+
+This endpoint allows removing the global Sieve quota.
+Users will no longer have a default quota:
+
+----
+curl -XDELETE http://ip:port/sieve/quota/default
+----
+
+Response codes:
+
+* 204: Operation succeeded
+
+=== Retrieving user sieve quota
+
+This endpoint allows retrieving the Sieve quota of a user:
+
+----
+curl -XGET http://ip:port/sieve/quota/users/user@domain.com
+----
+
+Will return the byte count allowed for this user.
+
+----
+102400
+----
+
+Response codes:
+
+* 200: Request is a success and the value is returned
+* 204: No quota is being configured for this user
+
+=== Updating user sieve quota
+
+This endpoint allows updating the Sieve quota of a user:
+
+----
+curl -XPUT http://ip:port/sieve/quota/users/user@domain.com
+----
+
+With the body being the byte count allowed for this user on this server.
+
+----
+102400
+----
+
+Response codes:
+
+* 204: Operation succeeded
+* 400: Invalid payload
+
+=== Removing user sieve quota
+
+This endpoint allows removing the Sieve quota of a user.
+There will no longer be a quota for this user:
+
+----
+curl -XDELETE http://ip:port/sieve/quota/users/user@domain.com
+----
+
+Response codes:
+
+* 204: Operation succeeded
+
+== Event Dead Letter
+
+The EventBus allows registering 'group listeners' that are called in a (potentially) distributed fashion.
+These group listeners enable the implementation of advanced mailbox manager features such as indexing, spam reporting and quota management.
+
+Upon exceptions, a bounded number of retries are performed (with exponential backoff delays).
+If after those retries the listener is still failing, then the event will be stored in the "Event Dead Letter".
+This API allows diagnosing issues, as well as performing event replay (not implemented yet).
+
+* <<Event_Dead_Letter,Event Dead Letter>>
+* <<Listing_mailbox_listener_groups,Listing mailbox listener groups>>
+* <<Listing_failed_events,Listing failed events>>
+* <<Getting_event_details,Getting event details>>
+* <<Deleting_an_event,Deleting an event>>
+* <<Redeliver_all_events,Redeliver all events>>
+* <<Redeliver_group_events,Redeliver group events>>
+* <<Redeliver_a_single_event,Redeliver a single event>>
+* <<Rescheduling_group_execution,Rescheduling group execution>>
+
+=== Listing mailbox listener groups
+
+This endpoint allows discovering the list of mailbox listener groups.
+
+----
+curl -XGET http://ip:port/events/deadLetter/groups
+----
+
+Will return a list of group names that can be further used to interact with the dead letter API:
+
+----
+["org.apache.james.mailbox.events.EventBusTestFixture$GroupA", "org.apache.james.mailbox.events.GenericGroup-abc"]
+----
+
+Response codes:
+
+* 200: Success.
+A list of group names is returned.
+
+=== Listing failed events
+
+This endpoint allows listing failed events for a given group:
+
+----
+curl -XGET http://ip:port/events/deadLetter/groups/org.apache.james.mailbox.events.EventBusTestFixture$GroupA
+----
+
+Will return a list of insertionIds:
+
+----
+["6e0dd59d-660e-4d9b-b22f-0354479f47b4", "58a8f59d-660e-4d9b-b22f-0354486322a2"]
+----
+
+Response codes:
+
+* 200: Success.
+A list of insertion ids is returned.
+* 400: Invalid group name
+
+=== Getting event details
+
+----
+curl -XGET http://ip:port/events/deadLetter/groups/org.apache.james.mailbox.events.EventBusTestFixture$GroupA/6e0dd59d-660e-4d9b-b22f-0354479f47b4
+----
+
+Will return the full JSON associated with this event.
+
+Response codes:
+
+* 200: Success.
+A JSON representing this event is returned.
+* 400: Invalid group name or `insertionId`
+* 404: No event with this `insertionId`
+
+=== Deleting an event
+
+----
+curl -XDELETE http://ip:port/events/deadLetter/groups/org.apache.james.mailbox.events.EventBusTestFixture$GroupA/6e0dd59d-660e-4d9b-b22f-0354479f47b4
+----
+
+Will delete this event.
+
+Response codes:
+
+* 204: Success
+* 400: Invalid group name or `insertionId`
+
+=== Redeliver all events
+
+----
+curl -XPOST http://ip:port/events/deadLetter?action=redeliver
+----
+
+Will create a task that will attempt to redeliver all events stored in "Event Dead Letter".
+If successful, redelivered events will then be removed from "Dead Letter".
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: the taskId of the created task
+* 400: Invalid action argument
+
+=== Redeliver group events
+
+----
+curl -XPOST http://ip:port/events/deadLetter/groups/org.apache.james.mailbox.events.EventBusTestFixture$GroupA
+----
+
+Will create a task that will attempt to redeliver all events of a particular group stored in "Event Dead Letter".
+If successful, redelivered events will then be removed from "Dead Letter".
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: the taskId of the created task
+* 400: Invalid group name or action argument
+
+=== Redeliver a single event
+
+----
+curl -XPOST http://ip:port/events/deadLetter/groups/org.apache.james.mailbox.events.EventBusTestFixture$GroupA/6e0dd59d-660e-4d9b-b22f-0354479f47b4?action=reDeliver
+----
+
+Will create a task that will attempt to redeliver a single event of a particular group stored in "Event Dead Letter".
+If successful, redelivered event will then be removed from "Dead Letter".
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: the taskId of the created task
+* 400: Invalid group name, insertion id or action argument
+* 404: No event with this insertionId
+
+=== Rescheduling group execution
+
+Not implemented yet.
+
+== Deleted Messages Vault
+
+The 'Deleted Message Vault plugin' allows you to keep users' deleted messages during a given retention time.
+This set of routes allows you to _restore_ users' deleted messages or export them in an archive.
+
+To move deleted messages in the vault, you need to specifically configure the DeletedMessageVault PreDeletionHook.
+
+Here are the actions available on the 'Deleted Messages Vault':
+
+* <<Restore_Deleted_Messages,Restore Deleted Messages>>
+* <<Export_Deleted_Messages,Export Deleted Messages>>
+* <<Purge_Deleted_Messages,Purge Deleted Messages>>
+* <<Permanently_Remove_Deleted_Message,Permanently Remove Deleted Message>>
+
+Note that the 'Deleted Messages Vault' feature is supported on top of all available Guice products.
+
+=== Restore Deleted Messages
+
+Deleted messages of a specific user can be restored by calling the following endpoint:
+
+----
+curl -XPOST http://ip:port/deletedMessages/users/userToRestore@domain.ext?action=restore
+
+{
+  "combinator": "and",
+  "criteria": [
+    {
+      "fieldName": "subject",
+      "operator": "containsIgnoreCase",
+      "value": "Apache James"
+    },
+    {
+      "fieldName": "deliveryDate",
+      "operator": "beforeOrEquals",
+      "value": "2014-10-30T14:12:00Z"
+    },
+    {
+      "fieldName": "deletionDate",
+      "operator": "afterOrEquals",
+      "value": "2015-10-20T09:08:00Z"
+    },
+    {
+      "fieldName": "recipients",
+      "operator": "contains",
+      "value": "recipient@james.org"
+    },
+    {
+      "fieldName": "hasAttachment",
+      "operator": "equals",
+      "value": "false"
+    },
+    {
+      "fieldName": "sender",
+      "operator": "equals",
+      "value": "sender@apache.org"
+    },
+    {
+      "fieldName": "originMailboxes",
+      "operator": "contains",
+      "value":  "02874f7c-d10e-102f-acda-0015176f7922"
+    }
+  ]
+}
+----
+
+The request JSON body is made of a list of criterion objects with the following structure:
+
+----
+{
+  "fieldName": "supportedFieldName",
+  "operator": "supportedOperator",
+  "value": "A plain string representing the matching value of the corresponding field"
+}
+----
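+
+As a sketch, such a criterion object can be assembled from shell variables (the values here are illustrative, and `value` must not contain unescaped quotes):
+
```shell
#!/bin/sh
# Assemble a single-criterion object for the restore query.
fieldName="subject"
operator="containsIgnoreCase"
value="Apache James"
body=$(printf '{"fieldName": "%s", "operator": "%s", "value": "%s"}' \
  "$fieldName" "$operator" "$value")
echo "$body"
# It can then be posted (host, port and user are placeholders):
#   curl -XPOST "http://ip:port/deletedMessages/users/user@domain.ext?action=restore" \
#     -H "Content-Type: application/json" -d "$body"
```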
+
+Deleted messages matching *all* the criteria in the query body will be restored.
+Here is the list of supported fieldNames for restoring:
+
+* subject: matches the deleted message's `subject` field.
+Supports the following string operators:
+ ** contains
+ ** containsIgnoreCase
+ ** equals
+ ** equalsIgnoreCase
+* deliveryDate: matches the deleted message's `deliveryDate` field.
+The tested value should follow the date-time-with-zone-offset format (ISO-8601), like `2008-09-15T15:53:00+05:00` or `2008-09-15T15:53:00Z`.
+Supports the following date-time operators:
+ ** beforeOrEquals: is the deleted message's `deliveryDate` before or equal to the tested value
+ ** afterOrEquals: is the deleted message's `deliveryDate` after or equal to the tested value
+* deletionDate: matches the deleted message's `deletionDate` field.
+Tested value & supported operators: same as `deliveryDate`
+* sender: matches the deleted message's `sender` field.
+The tested value should be a valid mail address.
+Supports the mail address operator:
+ ** equals: is the tested sender equal to the sender of the deleted message?
+* recipients: matches the deleted message's `recipients` field.
+The tested value should be a valid mail address.
+Supports the mail address list operator:
+ ** contains: do the deleted message's recipients contain the tested recipient?
+* hasAttachment: matches the deleted message's `hasAttachment` field.
+The tested value can be `false` or `true`.
+Supports the boolean operator:
+ ** equals: is the deleted message's hasAttachment property equal to the tested value?
+* originMailboxes: matches the deleted message's `originMailboxes` field.
+The tested value is a serialized mailbox id.
+Supports the mailbox id list operator:
+ ** contains: do the deleted message's origin mailbox ids contain the tested mailbox id?
+
+Messages in the Deleted Messages Vault of the specified user that match the query JSON object in the body will be appended to their 'Restored-Messages' mailbox, which will be created if needed.
+
+*Note*:
+
+* Query parameter `action` is required and should have the value `restore` to represent the restoring feature.
+Otherwise, a bad request response will be returned
+* Query parameter `action` is case sensitive
+* fieldName & operator passed to the routes are case sensitive
+* Currently, only the `and` query combinator is supported; requests using other values will be rejected
+* To restore by a single criterion, the JSON body can be simplified to just that criterion:
+
+----
+{
+  "fieldName": "subject",
+  "operator": "containsIgnoreCase",
+  "value": "Apache James"
+}
+----
+
+* To restore all deleted messages, pass a query JSON with an empty criteria list, which represents `matching all deleted messages`:
+
+----
+{
+  "combinator": "and",
+  "criteria": []
+}
+----
+
+WARNING: The current web-admin uses the `US` locale as the default.
+Therefore, there might be some conflicts when applying the String `containsIgnoreCase` comparator to String data of other special locales stored in the Vault.
+More details at https://issues.apache.org/jira/browse/MAILBOX-384[JIRA]
+
+Response code:
+
+* 201: Task for restoring deleted messages has been created
+* 400: Bad request:
+ ** action query param is not present
+ ** action query param is not a valid action
+ ** user parameter is invalid
+ ** cannot parse the JSON body
+ ** Json query object contains unsupported operator, fieldName
+ ** Json query object values violate parsing rules
+* 404: User not found
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+The scheduled task will have the following type `deleted-messages-restore` and the following `additionalInformation`:
+
+----
+{
+  "successfulRestoreCount": 47,
+  "errorRestoreCount": 0,
+  "user": "userToRestore@domain.ext"
+}
+----
+
+where:
+
+* successfulRestoreCount: number of restored messages
+* errorRestoreCount: number of messages that failed to be restored
+* user: owner of the deleted messages to restore
+
+=== Export Deleted Messages
+
+Retrieve deleted messages of a user matching the requested query, then share the content to a targeted mail address (exportTo)
+
+----
+curl -XPOST 'http://ip:port/deletedMessages/users/userExportFrom@domain.ext?action=export&exportTo=userReceiving@domain.ext'
+
+BODY: a JSON query with the same structure as in the Restore Deleted Messages section
+----
+
+NOTE: Json query passing into the body follows the same rules & restrictions like in <<Restore_deleted_messages,Restore Deleted Messages>>
+
+Response code:
+
+* 201: Task for exporting has been created
+* 400: Bad request:
+ ** exportTo query param is not present
+ ** exportTo query param is not a valid mail address
+ ** action query param is not present
+ ** action query param is not a valid action
+ ** user parameter is invalid
+ ** cannot parse the JSON body
+ ** Json query object contains unsupported operator, fieldName
+ ** Json query object values violate parsing rules
+* 404: User not found
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+The scheduled task will have the following type `deleted-messages-export` and the following `additionalInformation`:
+
+----
+{
+  "userExportFrom": "userToRestore@domain.ext",
+  "exportTo": "userReceiving@domain.ext",
+  "totalExportedMessages": 1432
+}
+----
+
+where:
+
+* userExportFrom: the user whose deleted messages are exported
+* exportTo: the mail address the deleted message contents are shared to
+* totalExportedMessages: number of deleted messages matching the JSON query that were shared
+
+=== Purge Deleted Messages
+
+Purge all deleted messages older than the configured 'retentionPeriod'.
+
+You can overwrite the 'retentionPeriod' in the 'deletedMessageVault' configuration file, or rely on the default value of 1 year.
+
+----
+curl -XDELETE http://ip:port/deletedMessages?scope=expired
+----
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response code:
+
+* 201: Task for purging has been created
+* 400: Bad request:
+ ** action query param is not present
+ ** action query param is not a valid action
+
+You may want to call this endpoint on a regular basis.
+
+=== Permanently Remove Deleted Message
+
+Delete a deleted message by its `MessageId`
+
+----
+curl -XDELETE http://ip:port/deletedMessages/users/user@domain.ext/messages/3294a976-ce63-491e-bd52-1b6f465ed7a2
+----
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response code:
+
+* 201: Task for deleting message has been created
+* 400: Bad request:
+ ** user parameter is invalid
+ ** messageId parameter is invalid
+* 404: User not found
+
+The scheduled task will have the following type `deleted-messages-delete` and the following `additionalInformation`:
+
+----
+ {
+   "userName": "user@domain.ext",
+   "messageId": "3294a976-ce63-491e-bd52-1b6f465ed7a2"
+ }
+----
+
+where:
+
+* userName: the user whose deleted message will be deleted
+* messageId: the id of the deleted message to delete
+
+== Task management
+
+Some webadmin features schedule tasks.
+The task management API allows monitoring and managing the execution of the following tasks.
+
+Note that the `taskId` used in the following APIs is returned by other WebAdmin APIs scheduling tasks.
+
+* <<Getting_a_task_details,Getting a task details>>
+* <<Awaiting_a_task,Awaiting a task>>
+* <<Cancelling_a_task,Cancelling a task>>
+* <<Listing_tasks,Listing tasks>>
+* <<Endpoints_returning_a_task,Endpoints returning a task>>
+
+=== Getting a task details
+
+----
+curl -XGET http://ip:port/tasks/3294a976-ce63-491e-bd52-1b6f465ed7a2
+----
+
+An Execution Report will be returned:
+
+----
+{
+    "submitDate": "2017-12-27T15:15:24.805+0700",
+    "startedDate": "2017-12-27T15:15:24.809+0700",
+    "completedDate": "2017-12-27T15:15:24.815+0700",
+    "cancelledDate": null,
+    "failedDate": null,
+    "taskId": "3294a976-ce63-491e-bd52-1b6f465ed7a2",
+    "additionalInformation": {},
+    "status": "completed",
+    "type": "type-of-the-task"
+}
+----
+
+Note that:
+
+* `status` can have the value:
+ ** `waiting`: The task is scheduled but its execution did not start yet
+ ** `inProgress`: The task is currently executed
+ ** `cancelled`: The task had been cancelled
+ ** `completed`: The task execution is finished, and this execution is a success
+ ** `failed`: The task execution is finished, and this execution is a failure
+* `additionalInformation` is a task specific object giving additional information and context about that task.
+The structure of this `additionalInformation` field is provided along the specific task submission endpoint.
+
+Response codes:
+
+* 200: The specific task was found and the execution report exposed above is returned
+* 400: Invalid task ID
+* 404: Task ID was not found
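+
+The `status` field can be extracted from the report without a JSON tool; a minimal sketch using `sed` (the report here is canned and trimmed — a live check would curl `/tasks/{taskId}` instead):
+
```shell
#!/bin/sh
# Pull the "status" value out of an execution report with sed.
get_status() {
  printf '%s' "$1" | sed -n 's/.*"status": *"\([^"]*\)".*/\1/p'
}
report='{"taskId": "3294a976-ce63-491e-bd52-1b6f465ed7a2", "status": "completed"}'
status=$(get_status "$report")
echo "$status"   # prints completed
```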
+
+=== Awaiting a task
+
+One can await the end of a task, then receive its final execution report.
+
+That feature is especially useful for testing purposes, but can also serve real-life scenarios.
+
+----
+curl -XGET http://ip:port/tasks/3294a976-ce63-491e-bd52-1b6f465ed7a2/await?timeout=duration
+----
+
+An Execution Report will be returned.
+
+`timeout` is optional.
+By default it is set to 365 days (the maximum value).
+The expected value is expressed in the following format: `Nunit`.
+`N` should be strictly positive.
+`unit` could be either in the short form (`s`, `m`, `h`, etc.), or in the long form (`day`, `week`, `month`, etc.).
+
+Examples:
+
+* `30s`
+* `5m`
+* `7d`
+* `1y`
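+
+A client-side sanity check for the `Nunit` shape can be sketched as follows (the exact set of unit strings accepted by James is an assumption here — verify against the server):
+
```shell
#!/bin/sh
# Validate a timeout value: strictly positive integer + unit (short or long form).
# The accepted unit list below is an assumption, not taken from the James source.
is_valid_timeout() {
  printf '%s' "$1" | grep -Eq '^[1-9][0-9]*(s|m|h|d|w|y|second|minute|hour|day|week|month|year)s?$'
}
is_valid_timeout "30s" && echo "30s: valid"
is_valid_timeout "0s"  || echo "0s: invalid"
```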
+
+Response codes:
+
+* 200: The specific task was found and the execution report exposed above is returned
+* 400: Invalid task ID or invalid timeout
+* 404: Task ID was not found
+* 408: The timeout has been reached
+
+=== Cancelling a task
+
+You can cancel a task by calling:
+
+----
+curl -XDELETE http://ip:port/tasks/3294a976-ce63-491e-bd52-1b6f465ed7a2
+----
+
+Response codes:
+
+* 204: Task had been cancelled
+* 400: Invalid task ID
+
+=== Listing tasks
+
+A list of all tasks can be retrieved:
+
+----
+curl -XGET http://ip:port/tasks
+----
+
+Will return a list of Execution reports
+
+One can filter the above results by status.
+For example:
+
+----
+curl -XGET http://ip:port/tasks?status=inProgress
+----
+
+Will return a list of Execution reports that are currently in progress.
+
+Response codes:
+
+* 200: A list of corresponding tasks is returned
+* 400: Invalid status value
+
+=== Endpoints returning a task
+
+Many endpoints generate a task.
+
+Example:
+
+----
+curl -XPOST /endpoint?action={action}
+----
+
+The response to these requests will be the scheduled `taskId`:
+
+----
+{"taskId":"5641376-02ed-47bd-bcc7-76ff6262d92a"}
+----
+
+Positioned headers:
+
+* Location header indicates the location of the resource associated with the scheduled task.
+Example:
+
+----
+Location: /tasks/3294a976-ce63-491e-bd52-1b6f465ed7a2
+----
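+
+The returned `taskId` can be captured and fed straight back to the task routes; a minimal sketch (the response here is canned, `sed` is used to avoid a `jq` dependency, and the host/endpoint are placeholders):
+
```shell
#!/bin/sh
# In a live setup the response would come from the task-generating endpoint:
#   response=$(curl -s -XPOST "http://ip:port/endpoint?action=someAction")
response='{"taskId":"5641376-02ed-47bd-bcc7-76ff6262d92a"}'

# Extract the taskId field.
task_id=$(printf '%s' "$response" | sed -n 's/.*"taskId": *"\([^"]*\)".*/\1/p')
echo "$task_id"

# Await its completion:
#   curl -XGET "http://ip:port/tasks/$task_id/await?timeout=5m"
```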
+
+Response codes:
+
+* 201: Task generation succeeded.
+Corresponding task id is returned.
+* Other response codes might be returned depending on the endpoint
+
+The additional information returned depends on the scheduled task type and is documented in the endpoint documentation.
+
+== Cassandra extra operations
+
+Webadmin provides some features to manage extra operations on Cassandra tables, like solving inconsistencies on projection tables.
+Such inconsistencies can for example be created by a failure of the DAO to add a mapping into `mappings_sources`, while it was successful regarding the `rrt` table.
+
+* <<Operations_on_mappings_sources,Operations on mappings sources>>
+
+=== Operations on mappings sources
+
+You can perform a series of actions on the `mappings_sources` projection table:
+
+----
+curl -XPOST /cassandra/mappings?action={action}
+----
+
+Will return the taskId corresponding to the related task.
+Actions supported so far are:
+
+* SolveInconsistencies: cleans up all the mappings in the `mappings_sources` index first, then repopulates it correctly.
+In the meantime, listing sources of a mapping might show temporary inconsistencies.
+
+For example:
+
+----
+curl -XPOST /cassandra/mappings?action=SolveInconsistencies
+----
+
+<<Endpoints_returning_a_task,More details about endpoints returning a task>>.
+
+Response codes:
+
+* 201: the taskId of the created task
+* 400: Invalid action argument for performing operation on mappings data
diff --git a/migrate-markdown.sh b/migrate-markdown.sh
new file mode 100755
index 0000000..19bab55
--- /dev/null
+++ b/migrate-markdown.sh
@@ -0,0 +1,15 @@
+#!/bin/bash
+#
+# https://matthewsetter.com/technical-documentation/asciidoc/convert-markdown-to-asciidoc-with-kramdoc/
+#
+
+echo "Migrate site and mailet"
+
+mkdir -p docs/modules/migrated/pages
+find ./src/site/markdown -name "*.md" -type f -exec sh -c \
+    'echo "Convert {}" ; kramdoc --format=GFM --wrap=ventilate --output=./docs/modules/migrated/pages/{}.adoc {}' \;
+
+echo "Migrate ADR"
+mkdir -p docs/modules/development/pages/adr
+find ./src/adr -name "*.md" -type f -exec sh -c \
+    'echo "Convert {}" ; kramdoc --format=GFM --wrap=ventilate --output=./docs/modules/development/pages/adr/{}.adoc {}' \;
diff --git a/src/adr/0009-disable-elasticsearch-dynamic-mapping.md b/src/adr/0009-disable-elasticsearch-dynamic-mapping.md
index 5d75a43..a2008f5 100644
--- a/src/adr/0009-disable-elasticsearch-dynamic-mapping.md
+++ b/src/adr/0009-disable-elasticsearch-dynamic-mapping.md
@@ -11,9 +11,9 @@ Accepted (lazy consensus)
 We rely on dynamic mappings to expose our mail headers as a JSON map. Dynamic mapping is enabled for adding not yet encountered headers in the mapping.
 
 This causes a serie of functional issues:
- - Maximum field count can easily be exceeded
- - Field type 'guess' can be wrong, leading to subsequent headers omissions [1]
- - Document indexation needs to be paused at the index level during mapping changes to avoid concurrent changes, impacting negatively performance.
+* Maximum field count can easily be exceeded
+* Field type 'guess' can be wrong, leading to subsequent headers omissions (see JAMES-2078)
+* Document indexation needs to be paused at the index level during mapping changes to avoid concurrent changes, impacting negatively performance.
 
 ## Decision
 
@@ -23,14 +23,14 @@ Rely on nested objects to represent mail headers within a mapping
 
 The index needs to be re-created. Document reIndexation is needed.
 
-This solves the aforementionned bugs [1].
+This solves the aforementioned bugs (see JAMES-2078).
 
 Regarding performance:
- - Default message list performance is unimpacted
- - We noticed a 4% performance improvment upon indexing throughput
- - We noticed a 7% increase regarding space per message
+* Default message list performance is unimpacted
+* We noticed a 4% performance improvement upon indexing throughput
+* We noticed a 7% increase regarding space per message
 
 ## References
 
- - [1]: https://github.com/linagora/james-project/pull/2726 JAMES-2078 Add an integration test to prove that dynamic mapping can lead to ignored header fields
+ - [JAMES-2078](https://github.com/linagora/james-project/pull/2726) JAMES-2078 Add an integration test to prove that dynamic mapping can lead to ignored header fields
  - [JIRA](https://issues.apache.org/jira/browse/JAMES-2078)

