Posted to commits@couchdb.apache.org by wo...@apache.org on 2020/11/02 19:15:37 UTC

[couchdb-documentation] 01/01: Remove content from master for clarity

This is an automated email from the ASF dual-hosted git repository.

wohali pushed a commit to branch goodbye-master
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git

commit 313c9b1b79341fd3841d3513c2f643b1c09e965d
Author: Joan Touzet <jo...@atypical.net>
AuthorDate: Mon Nov 2 14:15:20 2020 -0500

    Remove content from master for clarity
---
 .github/ISSUE_TEMPLATE.md               |   32 -
 .github/PULL_REQUEST_TEMPLATE.md        |   40 -
 .travis.yml                             |   25 -
 CONTRIBUTING.md                         |    4 -
 Jenkinsfile                             |   67 +-
 LICENSE                                 |  345 -----
 Makefile                                |   75 -
 NOTICE                                  |   31 -
 README.md                               |   32 +-
 ext/configdomain.py                     |  113 --
 ext/github.py                           |   46 -
 ext/httpdomain.py                       |  712 ----------
 ext/linter.py                           |  293 ----
 images/23379351593_0c480537de_q.jpg     |  Bin 15822 -> 0 bytes
 images/epub-icon.png                    |  Bin 19185 -> 0 bytes
 images/favicon.ico                      |  Bin 15086 -> 0 bytes
 images/futon-createdb.png               |  Bin 76194 -> 0 bytes
 images/futon-editdoc.png                |  Bin 52733 -> 0 bytes
 images/futon-editeddoc.png              |  Bin 56005 -> 0 bytes
 images/futon-overview.png               |  Bin 50815 -> 0 bytes
 images/futon-replform.png               |  Bin 52223 -> 0 bytes
 images/gf-gnome-rainbows.png            |  Bin 73847 -> 0 bytes
 images/intro-consistency-01.png         |  Bin 38005 -> 0 bytes
 images/intro-consistency-02.png         |  Bin 48145 -> 0 bytes
 images/intro-consistency-03.png         |  Bin 17333 -> 0 bytes
 images/intro-consistency-04.png         |  Bin 13744 -> 0 bytes
 images/intro-consistency-05.png         |  Bin 25592 -> 0 bytes
 images/intro-consistency-06.png         |  Bin 56651 -> 0 bytes
 images/intro-consistency-07.png         |  Bin 53634 -> 0 bytes
 images/intro-tour-01.png                |  Bin 50815 -> 0 bytes
 images/intro-tour-03.png                |  Bin 53795 -> 0 bytes
 images/intro-tour-04.png                |  Bin 56470 -> 0 bytes
 images/intro-tour-05.png                |  Bin 53778 -> 0 bytes
 images/intro-tour-06.png                |  Bin 61296 -> 0 bytes
 images/intro-tour-07.png                |  Bin 67121 -> 0 bytes
 images/intro-tour-08.png                |  Bin 83494 -> 0 bytes
 images/intro-tour-09.png                |  Bin 84701 -> 0 bytes
 images/intro-tour-10.png                |  Bin 61631 -> 0 bytes
 images/intro-why-01.png                 |  Bin 25755 -> 0 bytes
 images/intro-why-02.png                 |  Bin 5937 -> 0 bytes
 images/intro-why-03.png                 |  Bin 5134 -> 0 bytes
 images/logo.png                         |  Bin 14092 -> 0 bytes
 images/purge-checkpoint-docs.png        |  Bin 77925 -> 0 bytes
 images/replication-state-diagram.svg    |  419 ------
 images/rev-tree1.png                    |  Bin 13910 -> 0 bytes
 images/rev-tree2.png                    |  Bin 19104 -> 0 bytes
 images/rev-tree3.png                    |  Bin 10439 -> 0 bytes
 images/views-intro-01.png               |  Bin 1026767 -> 0 bytes
 images/views-intro-02.png               |  Bin 9758 -> 0 bytes
 images/views-intro-03.png               |  Bin 12650 -> 0 bytes
 images/views-intro-04.png               |  Bin 14537 -> 0 bytes
 make.bat                                |  253 ----
 rebar.config                            |   16 -
 requirements.txt                        |    2 -
 rfcs/001-fdb-revision-metadata-model.md |  215 ---
 rfcs/002-shard-splitting.md             |  373 -----
 rfcs/003-fdb-seq-index.md               |  244 ----
 rfcs/004-document-storage.md            |  251 ----
 rfcs/005-all-docs-index.md              |  207 ---
 rfcs/006-mango-fdb.md                   |  149 --
 rfcs/007-background-jobs.md             |  347 -----
 rfcs/008-map-indexes.md                 |  243 ----
 rfcs/009-exunit.md                      |  122 --
 rfcs/011-opentracing.md                 |  236 ----
 rfcs/012-fdb-reduce.md                  | 1096 ---------------
 rfcs/013-node-types.md                  |  143 --
 rfcs/015-background-index-building.md   |  131 --
 rfcs/016-fdb-replicator.md              |  384 -----
 rfcs/images/SkExample1.png              |  Bin 17085 -> 0 bytes
 rfcs/images/SkExample2.png              |  Bin 44835 -> 0 bytes
 rfcs/images/SkExample3.png              |  Bin 47555 -> 0 bytes
 rfcs/template.md                        |   85 --
 src/about.rst                           |   24 -
 src/api/basics.rst                      |  589 --------
 src/api/database/bulk-api.rst           | 1009 --------------
 src/api/database/changes.rst            |  750 ----------
 src/api/database/common.rst             |  468 -------
 src/api/database/compact.rst            |  246 ----
 src/api/database/find.rst               | 1387 ------------------
 src/api/database/index.rst              |   47 -
 src/api/database/misc.rst               |  504 -------
 src/api/database/security.rst           |  186 ---
 src/api/database/shard.rst              |  223 ---
 src/api/ddoc/common.rst                 |  221 ---
 src/api/ddoc/index.rst                  |   35 -
 src/api/ddoc/render.rst                 |  413 ------
 src/api/ddoc/rewrites.rst               |  192 ---
 src/api/ddoc/search.rst                 |  168 ---
 src/api/ddoc/views.rst                  |  918 ------------
 src/api/document/attachments.rst        |  315 -----
 src/api/document/common.rst             | 1214 ----------------
 src/api/document/index.rst              |   23 -
 src/api/index.rst                       |   42 -
 src/api/local.rst                       |  255 ----
 src/api/partitioned-dbs.rst             |  231 ---
 src/api/server/authn.rst                |  464 ------
 src/api/server/common.rst               | 2327 -------------------------------
 src/api/server/configuration.rst        |  327 -----
 src/api/server/index.rst                |   26 -
 src/best-practices/documents.rst        |  349 -----
 src/best-practices/forms.rst            |  143 --
 src/best-practices/index.rst            |   32 -
 src/best-practices/iso-date.rst         |   64 -
 src/best-practices/jsdevel.rst          |   42 -
 src/best-practices/reverse-proxies.rst  |  314 -----
 src/best-practices/views.rst            |   57 -
 src/cluster/databases.rst               |   85 --
 src/cluster/index.rst                   |   33 -
 src/cluster/nodes.rst                   |   89 --
 src/cluster/purging.rst                 |  185 ---
 src/cluster/sharding.rst                |  883 ------------
 src/cluster/theory.rst                  |   85 --
 src/conf.py                             |  121 --
 src/config/auth.rst                     |  314 -----
 src/config/cluster.rst                  |  123 --
 src/config/compaction.rst               |  167 ---
 src/config/couch-peruser.rst            |   46 -
 src/config/couchdb.rst                  |  232 ---
 src/config/http.rst                     |  643 ---------
 src/config/index.rst                    |   35 -
 src/config/indexbuilds.rst              |   67 -
 src/config/intro.rst                    |  172 ---
 src/config/ioq.rst                      |  109 --
 src/config/logging.rst                  |  139 --
 src/config/misc.rst                     |  271 ----
 src/config/query-servers.rst            |  272 ----
 src/config/replicator.rst               |  251 ----
 src/config/resharding.rst               |  105 --
 src/contributing.rst                    |  218 ---
 src/cve/2010-0009.rst                   |   53 -
 src/cve/2010-2234.rst                   |   61 -
 src/cve/2010-3854.rst                   |   55 -
 src/cve/2012-5641.rst                   |   75 -
 src/cve/2012-5649.rst                   |   49 -
 src/cve/2012-5650.rst                   |   68 -
 src/cve/2014-2668.rst                   |   52 -
 src/cve/2017-12635.rst                  |   67 -
 src/cve/2017-12636.rst                  |   54 -
 src/cve/2018-11769.rst                  |   60 -
 src/cve/2018-17188.rst                  |   67 -
 src/cve/2018-8007.rst                   |   58 -
 src/cve/2020-1955.rst                   |   60 -
 src/cve/index.rst                       |   73 -
 src/ddocs/ddocs.rst                     |  877 ------------
 src/ddocs/index.rst                     |   47 -
 src/ddocs/search.rst                    | 1054 --------------
 src/ddocs/views/collation.rst           |  264 ----
 src/ddocs/views/index.rst               |   29 -
 src/ddocs/views/intro.rst               |  728 ----------
 src/ddocs/views/joins.rst               |  431 ------
 src/ddocs/views/nosql.rst               |  529 -------
 src/ddocs/views/pagination.rst          |  267 ----
 src/docs.app.src                        |   18 -
 src/experimental.rst                    |   40 -
 src/fauxton/index.rst                   |   21 -
 src/fauxton/install.rst                 |   83 --
 src/index.rst                           |   59 -
 src/install/docker.rst                  |   43 -
 src/install/freebsd.rst                 |   86 --
 src/install/index.rst                   |   31 -
 src/install/kubernetes.rst              |   35 -
 src/install/mac.rst                     |   77 -
 src/install/search.rst                  |  110 --
 src/install/snap.rst                    |   47 -
 src/install/troubleshooting.rst         |  358 -----
 src/install/unix.rst                    |  465 ------
 src/install/upgrading.rst               |   83 --
 src/install/windows.rst                 |  104 --
 src/intro/api.rst                       |  747 ----------
 src/intro/consistency.rst               |  443 ------
 src/intro/curl.rst                      |  145 --
 src/intro/index.rst                     |   52 -
 src/intro/overview.rst                  |  361 -----
 src/intro/security.rst                  |  566 --------
 src/intro/tour.rst                      |  409 ------
 src/intro/why.rst                       |  300 ----
 src/json-structure.rst                  |  687 ---------
 src/maintenance/backups.rst             |   90 --
 src/maintenance/compaction.rst          |  335 -----
 src/maintenance/index.rst               |   21 -
 src/maintenance/performance.rst         |  326 -----
 src/partitioned-dbs/index.rst           |  390 ------
 src/query-server/erlang.rst             |  136 --
 src/query-server/index.rst              |   37 -
 src/query-server/javascript.rst         |  278 ----
 src/query-server/protocol.rst           | 1060 --------------
 src/replication/conflicts.rst           |  787 -----------
 src/replication/index.rst               |   37 -
 src/replication/intro.rst               |  140 --
 src/replication/protocol.rst            | 1898 -------------------------
 src/replication/replicator.rst          |  653 ---------
 src/setup/cluster.rst                   |  365 -----
 src/setup/index.rst                     |   27 -
 src/setup/single-node.rst               |   57 -
 src/whatsnew/0.10.rst                   |  143 --
 src/whatsnew/0.11.rst                   |  349 -----
 src/whatsnew/0.8.rst                    |  175 ---
 src/whatsnew/0.9.rst                    |  263 ----
 src/whatsnew/1.0.rst                    |  269 ----
 src/whatsnew/1.1.rst                    |  170 ---
 src/whatsnew/1.2.rst                    |  235 ----
 src/whatsnew/1.3.rst                    |  258 ----
 src/whatsnew/1.4.rst                    |   62 -
 src/whatsnew/1.5.rst                    |   60 -
 src/whatsnew/1.6.rst                    |   72 -
 src/whatsnew/1.7.rst                    |  117 --
 src/whatsnew/2.0.rst                    |  151 --
 src/whatsnew/2.1.rst                    |  454 ------
 src/whatsnew/2.2.rst                    |  343 -----
 src/whatsnew/2.3.rst                    |  316 -----
 src/whatsnew/3.0.rst                    |  706 ----------
 src/whatsnew/3.1.rst                    |  137 --
 src/whatsnew/index.rst                  |   40 -
 static/css/rtd_theme.css                |   60 -
 templates/layout.html                   |   55 -
 templates/pages/download.html           |   48 -
 templates/pages/index.html              |  195 ---
 217 files changed, 14 insertions(+), 46730 deletions(-)

diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md
deleted file mode 100644
index be4f816..0000000
--- a/.github/ISSUE_TEMPLATE.md
+++ /dev/null
@@ -1,32 +0,0 @@
-<!--- Provide a general summary of the issue in the Title above -->
-
-## Expected Behavior
-<!--- If you're describing a bug, tell us what should happen -->
-<!--- If you're suggesting a change/improvement, tell us how it should work -->
-
-## Current Behavior
-<!--- If describing a bug, tell us what happens instead of the expected behavior -->
-<!--- If suggesting a change/improvement, explain the difference from current behavior -->
-
-## Possible Solution
-<!--- Not obligatory, but suggest a fix/reason for the bug, -->
-<!--- or ideas on how to implement the addition or change -->
-
-## Steps to Reproduce (for bugs)
-<!--- Provide a link to a live example, or an unambiguous set of steps to -->
-<!--- reproduce this bug. Include code to reproduce, if relevant -->
-1.
-2.
-3.
-4.
-
-## Context
-<!--- How has this issue affected you? What are you trying to accomplish? -->
-<!--- Providing context helps us come up with a solution that is most useful in the real world -->
-
-## Your Environment
-<!--- Include as many relevant details as possible about the environment in which you experienced the bug -->
-* Version used:
-* Browser Name and version:
-* Operating System and version (desktop or mobile):
-* Link to your project:
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
deleted file mode 100644
index 7a14b07..0000000
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ /dev/null
@@ -1,40 +0,0 @@
-<!-- Thank you for your contribution!
-
-     Please fill in this form by replacing the Markdown comments
-     with your text. If a section needs no action, remove it.
-
-     Also remember that CouchDB uses the Review-Then-Commit (RTC) model
-     of code collaboration. Positive feedback is represented by a +1 from
-     committers and negative feedback by a -1. A -1 also means a veto and
-     must be addressed before proceeding. Once there are no objections,
-     the PR can be merged by a
-     CouchDB committer.
-
-     See: http://couchdb.apache.org/bylaws.html#decisions for more info. -->
-
-## Overview
-
-<!-- Please give a short summary of the pull request:
-     what problem it solves or how it makes things better. -->
-
-## Testing recommendations
-
-<!-- Describe how we can test your changes.
-     Does it provide any behaviour that the end users
-     could notice? -->
-
-## GitHub issue number
-
-<!-- If this is a significant change, please file a separate issue at:
-     https://github.com/apache/couchdb-documentation/issues
-     and include the number here and in commit message(s) using
-     syntax like "Fixes #472" or "Fixes apache/couchdb#472".  -->
-
-## Related Pull Requests
-
-<!-- If your changes affect multiple components in different
-     repositories, please put links to those pull requests here.  -->
-
-## Checklist
-
-- [ ] Update [rebar.config.script](https://github.com/apache/couchdb/blob/master/rebar.config.script) with the commit hash once this PR is rebased and merged
-<!-- Before opening the PR, consider running `make check` locally for a faster turnaround time -->
diff --git a/.travis.yml b/.travis.yml
deleted file mode 100644
index 7bfff38..0000000
--- a/.travis.yml
+++ /dev/null
@@ -1,25 +0,0 @@
-language: python
-python:
-  - 3.6
-
-# Start push builds on master and release branches; PR builds run on every branch
-# Avoids double builds on PRs (see https://github.com/travis-ci/travis-ci/issues/1147)
-branches:
-  only:
-    - master
-    - /^\d+\.x\.x$/
-    - /^\d+\.\d+\.x$/
-
-install:
-  - pip install -r requirements.txt
-
-script:
-  - make ${TARGET}
-
-env:
-  matrix:
-    - TARGET=html
-    - TARGET=man
-    - TARGET=check
-
-cache: apt
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
deleted file mode 100644
index 55c41d6..0000000
--- a/CONTRIBUTING.md
+++ /dev/null
@@ -1,4 +0,0 @@
-This repository follows the same contribution guidelines as the
-main Apache CouchDB project:
-
-https://github.com/apache/couchdb/blob/master/CONTRIBUTING.md
diff --git a/Jenkinsfile b/Jenkinsfile
index 9dae139..82a4d1b 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -1,59 +1,16 @@
+#!groovy
+//
+// This is here to allow the build to pass.
+//
 pipeline {
 
-  agent none
-
-  environment {
-    GIT_COMMITTER_NAME = 'Jenkins User'
-    GIT_COMMITTER_EMAIL = 'couchdb@apache.org'
-    DOCKER_IMAGE = 'couchdbdev/debian-buster-erlang-all:latest'
-    DOCKER_ARGS = '-e npm_config_cache=npm-cache -e HOME=. -v=/etc/passwd:/etc/passwd -v /etc/group:/etc/group'
-  }
-
-  options {
-    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '10'))
-    // This fails the build immediately if any parallel step fails
-    parallelsAlwaysFailFast()
-    preserveStashes(buildCount: 10)
-    timeout(time: 30, unit: 'MINUTES')
-    timestamps()
-  }
+  agent any
 
   stages {
-    stage('Test') {
-      matrix {
-        axes {
-          axis {
-            name 'TARGET'
-            values "html", "man", "check"
-          }
-        }
-        stages {
-          stage('Test') {
-            agent {
-              docker {
-                image "${DOCKER_IMAGE}"
-                label 'docker'
-                args "${DOCKER_ARGS}"
-                alwaysPull true
-              }
-            }
-            options {
-              timeout(time: 90, unit: 'MINUTES')
-            }
-            steps {
-              sh '''
-                make ${TARGET}
-              '''
-            }
-            post {
-              cleanup {
-                // UGH see https://issues.jenkins-ci.org/browse/JENKINS-41894
-                sh 'rm -rf ${WORKSPACE}/*'
-              }
-            }
-          } // stage
-        } // stages
-      } // matrix
-    } // stage "Test"
-  } // stages
-} // pipeline
+    stage('Pass') {
+      steps {
+        echo "Passing..."
+      }
+    }
+  }
+}
diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index ee1813e..0000000
--- a/LICENSE
+++ /dev/null
@@ -1,345 +0,0 @@
-
-                                Apache License
-                          Version 2.0, January 2004
-                       http://www.apache.org/licenses/
-
-  TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-  1. Definitions.
-
-     "License" shall mean the terms and conditions for use, reproduction,
-     and distribution as defined by Sections 1 through 9 of this document.
-
-     "Licensor" shall mean the copyright owner or entity authorized by
-     the copyright owner that is granting the License.
-
-     "Legal Entity" shall mean the union of the acting entity and all
-     other entities that control, are controlled by, or are under common
-     control with that entity. For the purposes of this definition,
-     "control" means (i) the power, direct or indirect, to cause the
-     direction or management of such entity, whether by contract or
-     otherwise, or (ii) ownership of fifty percent (50%) or more of the
-     outstanding shares, or (iii) beneficial ownership of such entity.
-
-     "You" (or "Your") shall mean an individual or Legal Entity
-     exercising permissions granted by this License.
-
-     "Source" form shall mean the preferred form for making modifications,
-     including but not limited to software source code, documentation
-     source, and configuration files.
-
-     "Object" form shall mean any form resulting from mechanical
-     transformation or translation of a Source form, including but
-     not limited to compiled object code, generated documentation,
-     and conversions to other media types.
-
-     "Work" shall mean the work of authorship, whether in Source or
-     Object form, made available under the License, as indicated by a
-     copyright notice that is included in or attached to the work
-     (an example is provided in the Appendix below).
-
-     "Derivative Works" shall mean any work, whether in Source or Object
-     form, that is based on (or derived from) the Work and for which the
-     editorial revisions, annotations, elaborations, or other modifications
-     represent, as a whole, an original work of authorship. For the purposes
-     of this License, Derivative Works shall not include works that remain
-     separable from, or merely link (or bind by name) to the interfaces of,
-     the Work and Derivative Works thereof.
-
-     "Contribution" shall mean any work of authorship, including
-     the original version of the Work and any modifications or additions
-     to that Work or Derivative Works thereof, that is intentionally
-     submitted to Licensor for inclusion in the Work by the copyright owner
-     or by an individual or Legal Entity authorized to submit on behalf of
-     the copyright owner. For the purposes of this definition, "submitted"
-     means any form of electronic, verbal, or written communication sent
-     to the Licensor or its representatives, including but not limited to
-     communication on electronic mailing lists, source code control systems,
-     and issue tracking systems that are managed by, or on behalf of, the
-     Licensor for the purpose of discussing and improving the Work, but
-     excluding communication that is conspicuously marked or otherwise
-     designated in writing by the copyright owner as "Not a Contribution."
-
-     "Contributor" shall mean Licensor and any individual or Legal Entity
-     on behalf of whom a Contribution has been received by Licensor and
-     subsequently incorporated within the Work.
-
-  2. Grant of Copyright License. Subject to the terms and conditions of
-     this License, each Contributor hereby grants to You a perpetual,
-     worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-     copyright license to reproduce, prepare Derivative Works of,
-     publicly display, publicly perform, sublicense, and distribute the
-     Work and such Derivative Works in Source or Object form.
-
-  3. Grant of Patent License. Subject to the terms and conditions of
-     this License, each Contributor hereby grants to You a perpetual,
-     worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-     (except as stated in this section) patent license to make, have made,
-     use, offer to sell, sell, import, and otherwise transfer the Work,
-     where such license applies only to those patent claims licensable
-     by such Contributor that are necessarily infringed by their
-     Contribution(s) alone or by combination of their Contribution(s)
-     with the Work to which such Contribution(s) was submitted. If You
-     institute patent litigation against any entity (including a
-     cross-claim or counterclaim in a lawsuit) alleging that the Work
-     or a Contribution incorporated within the Work constitutes direct
-     or contributory patent infringement, then any patent licenses
-     granted to You under this License for that Work shall terminate
-     as of the date such litigation is filed.
-
-  4. Redistribution. You may reproduce and distribute copies of the
-     Work or Derivative Works thereof in any medium, with or without
-     modifications, and in Source or Object form, provided that You
-     meet the following conditions:
-
-     (a) You must give any other recipients of the Work or
-         Derivative Works a copy of this License; and
-
-     (b) You must cause any modified files to carry prominent notices
-         stating that You changed the files; and
-
-     (c) You must retain, in the Source form of any Derivative Works
-         that You distribute, all copyright, patent, trademark, and
-         attribution notices from the Source form of the Work,
-         excluding those notices that do not pertain to any part of
-         the Derivative Works; and
-
-     (d) If the Work includes a "NOTICE" text file as part of its
-         distribution, then any Derivative Works that You distribute must
-         include a readable copy of the attribution notices contained
-         within such NOTICE file, excluding those notices that do not
-         pertain to any part of the Derivative Works, in at least one
-         of the following places: within a NOTICE text file distributed
-         as part of the Derivative Works; within the Source form or
-         documentation, if provided along with the Derivative Works; or,
-         within a display generated by the Derivative Works, if and
-         wherever such third-party notices normally appear. The contents
-         of the NOTICE file are for informational purposes only and
-         do not modify the License. You may add Your own attribution
-         notices within Derivative Works that You distribute, alongside
-         or as an addendum to the NOTICE text from the Work, provided
-         that such additional attribution notices cannot be construed
-         as modifying the License.
-
-     You may add Your own copyright statement to Your modifications and
-     may provide additional or different license terms and conditions
-     for use, reproduction, or distribution of Your modifications, or
-     for any such Derivative Works as a whole, provided Your use,
-     reproduction, and distribution of the Work otherwise complies with
-     the conditions stated in this License.
-
-  5. Submission of Contributions. Unless You explicitly state otherwise,
-     any Contribution intentionally submitted for inclusion in the Work
-     by You to the Licensor shall be under the terms and conditions of
-     this License, without any additional terms or conditions.
-     Notwithstanding the above, nothing herein shall supersede or modify
-     the terms of any separate license agreement you may have executed
-     with Licensor regarding such Contributions.
-
-  6. Trademarks. This License does not grant permission to use the trade
-     names, trademarks, service marks, or product names of the Licensor,
-     except as required for reasonable and customary use in describing the
-     origin of the Work and reproducing the content of the NOTICE file.
-
-  7. Disclaimer of Warranty. Unless required by applicable law or
-     agreed to in writing, Licensor provides the Work (and each
-     Contributor provides its Contributions) on an "AS IS" BASIS,
-     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-     implied, including, without limitation, any warranties or conditions
-     of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-     PARTICULAR PURPOSE. You are solely responsible for determining the
-     appropriateness of using or redistributing the Work and assume any
-     risks associated with Your exercise of permissions under this License.
-
-  8. Limitation of Liability. In no event and under no legal theory,
-     whether in tort (including negligence), contract, or otherwise,
-     unless required by applicable law (such as deliberate and grossly
-     negligent acts) or agreed to in writing, shall any Contributor be
-     liable to You for damages, including any direct, indirect, special,
-     incidental, or consequential damages of any character arising as a
-     result of this License or out of the use or inability to use the
-     Work (including but not limited to damages for loss of goodwill,
-     work stoppage, computer failure or malfunction, or any and all
-     other commercial damages or losses), even if such Contributor
-     has been advised of the possibility of such damages.
-
-  9. Accepting Warranty or Additional Liability. While redistributing
-     the Work or Derivative Works thereof, You may choose to offer,
-     and charge a fee for, acceptance of support, warranty, indemnity,
-     or other liability obligations and/or rights consistent with this
-     License. However, in accepting such obligations, You may act only
-     on Your own behalf and on Your sole responsibility, not on behalf
-     of any other Contributor, and only if You agree to indemnify,
-     defend, and hold each Contributor harmless for any liability
-     incurred by, or claims asserted against, such Contributor by reason
-     of your accepting any such warranty or additional liability.
-
-  END OF TERMS AND CONDITIONS
-
-  APPENDIX: How to apply the Apache License to your work.
-
-     To apply the Apache License to your work, attach the following
-     boilerplate notice, with the fields enclosed by brackets "[]"
-     replaced with your own identifying information. (Don't include
-     the brackets!)  The text should be enclosed in the appropriate
-     comment syntax for the file format. We also recommend that a
-     file or class name and description of purpose be included on the
-     same "printed page" as the copyright notice for easier
-     identification within third-party archives.
-
-  Copyright [yyyy] [name of copyright owner]
-
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-
-Apache CouchDB Subcomponents
-
-The Apache CouchDB project includes a number of subcomponents with separate
-copyright notices and license terms. Your use of the code for these
-subcomponents is subject to the terms and conditions of the following licenses.
-
-For the build/html/_static components:
-
-  Copyright (c) 2007-2011 by the Sphinx team (see AUTHORS file).
-  All rights reserved.
-
-  Redistribution and use in source and binary forms, with or without
-  modification, are permitted provided that the following conditions are
-  met:
-
-  * Redistributions of source code must retain the above copyright
-    notice, this list of conditions and the following disclaimer.
-
-  * Redistributions in binary form must reproduce the above copyright
-    notice, this list of conditions and the following disclaimer in the
-    documentation and/or other materials provided with the distribution.
-
-  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-For the build/html/_static/jquery.js component:
-
-  Copyright 2010, John Resig
-
-  Copyright 2010, The Dojo Foundation
-
-  Copyright 2012 jQuery Foundation and other contributors
-  http://jquery.com/
-
-  Permission is hereby granted, free of charge, to any person obtaining
-  a copy of this software and associated documentation files (the
-  "Software"), to deal in the Software without restriction, including
-  without limitation the rights to use, copy, modify, merge, publish,
-  distribute, sublicense, and/or sell copies of the Software, and to
-  permit persons to whom the Software is furnished to do so, subject to
-  the following conditions:
-
-  The above copyright notice and this permission notice shall be
-  included in all copies or substantial portions of the Software.
-
-  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-  EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-  MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-  NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-  LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-  OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-  WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-For the build/html/_static/underscore.js component:
-
-  Copyright (c) 2009-2012 Jeremy Ashkenas, DocumentCloud
-
-  Permission is hereby granted, free of charge, to any person
-  obtaining a copy of this software and associated documentation
-  files (the "Software"), to deal in the Software without
-  restriction, including without limitation the rights to use,
-  copy, modify, merge, publish, distribute, sublicense, and/or sell
-  copies of the Software, and to permit persons to whom the
-  Software is furnished to do so, subject to the following
-  conditions:
-
-  The above copyright notice and this permission notice shall be
-  included in all copies or substantial portions of the Software.
-
-  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-  EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
-  OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-  NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
-  HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
-  WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
-  FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
-  OTHER DEALINGS IN THE SOFTWARE.
-
-For the static/rtd.css component:
-
-  Copyright (c) 2007-2011 by the Sphinx team (see AUTHORS file).
-  All rights reserved.
-
-  Redistribution and use in source and binary forms, with or without
-  modification, are permitted provided that the following conditions are
-  met:
-
-  * Redistributions of source code must retain the above copyright
-    notice, this list of conditions and the following disclaimer.
-
-  * Redistributions in binary form must reproduce the above copyright
-    notice, this list of conditions and the following disclaimer in the
-    documentation and/or other materials provided with the distribution.
-
-  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-For ext/httpdomain.py
-
-Copyright (c) 2010 by the contributors Hong Minhee <mi...@dahlia.kr>.
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-
-* Redistributions of source code must retain the above copyright
-  notice, this list of conditions and the following disclaimer.
-
-* Redistributions in binary form must reproduce the above copyright
-  notice, this list of conditions and the following disclaimer in the
-  documentation and/or other materials provided with the distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/Makefile b/Makefile
deleted file mode 100644
index d9b157a..0000000
--- a/Makefile
+++ /dev/null
@@ -1,75 +0,0 @@
-## Licensed under the Apache License, Version 2.0 (the "License"); you may not
-## use this file except in compliance with the License. You may obtain a copy of
-## the License at
-##
-##   http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-## WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-## License for the specific language governing permissions and limitations under
-## the License.
-
-SPHINXBUILD  := sphinx-build
-TEX          := tex
-PDFLATEX     := pdflatex
-MAKEINFO     := makeinfo
-
-BUILDDIR     := build
-SOURCE       := src/
-PAPERSIZE    := -D latex_elements.papersize=a4
-SPHINXFLAGS  := -a -W -n -A local=1 $(PAPERSIZE) -d $(BUILDDIR)/doctree
-SPHINXOPTS   := $(SPHINXFLAGS) $(SOURCE)
-
-ENSURECMD=which $(1) > /dev/null 2>&1 || (echo "*** Make sure that $(1) is installed and on your path" && exit 1)
-
-all: html man
-
-clean:
-	rm -rf $(BUILDDIR)
-
-html: $(SPHINXBUILD)
-	$(SPHINXBUILD) -b $@ $(SPHINXOPTS) $(BUILDDIR)/$@
-
-latex: $(TEX)
-	$(SPHINXBUILD) -b $@ $(SPHINXOPTS) $(BUILDDIR)/$@
-
-pdf: latex $(PDFLATEX)
-	$(MAKE) LATEXOPTS=' -interaction=batchmode ' -C $(BUILDDIR)/latex all-pdf
-
-info: $(SPHINXBUILD) $(MAKEINFO)
-	$(SPHINXBUILD) -b texinfo $(SPHINXOPTS) $(BUILDDIR)/texinfo
-	make -C $(BUILDDIR)/texinfo info
-
-man: $(SPHINXBUILD)
-	$(SPHINXBUILD) -b $@ $(SPHINXOPTS) $(BUILDDIR)/$@
-
-check:
-	python3 ext/linter.py $(SOURCE)
-
-install-html:
-install-pdf:
-install-info:
-install-man:
-
-install: install-html install-pdf install-info install-man
-	# copy-files
-
-distclean: clean
-	# delete-installed-files
-
-
-$(SPHINXBUILD):
-	@$(call ENSURECMD,$@)
-
-$(TEX):
-	@$(call ENSURECMD,$@)
-
-$(PDFLATEX):
-	@$(call ENSURECMD,$@)
-
-$(MAKEINFO):
-	@$(call ENSURECMD,$@)
-
-$(PYTHON):
-	@$(call ENSURECMD,$@)
diff --git a/NOTICE b/NOTICE
deleted file mode 100644
index f093282..0000000
--- a/NOTICE
+++ /dev/null
@@ -1,31 +0,0 @@
-Apache CouchDB
-Copyright 2009-2014 The Apache Software Foundation
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/).
-
-This product also includes the following third-party components:
-
- * Sphinx (http://sphinx-doc.org/)
-
-   Copyright 2011, the Sphinx team
-
- * httpdomain.py (https://bitbucket.org/birkenfeld/sphinx-contrib/src/6a3a8ca714cfce957530890d0431d9a7b88c930f/httpdomain/sphinxcontrib/httpdomain.py?at=httpdomain-1.1.9)
-
-   Copyright (c) 2010, Hong Minhee <mi...@dahlia.kr>
-
- * src/externals.rst (http://davispj.com/2010/09/26/new-couchdb-externals-api.html)
-
-   Copyright 2008-2010, Paul Joseph Davis <pa...@gmail.com>
-
- * src/ddocs/views/intro.rst src/ddocs/views/nosql.rst src/ddocs/views/pagination.rst
-
-   Copyright 2013, Creative Commons Attribution license
-
- * src/ddocs/views/joins.rst (Using View Collation)
-
-   Copyright 2007, Christopher Lenz <cm...@gmail.com>
-
- * templates/couchdb/domainindex.html
-
-   Copyright 2007-2011 by the Sphinx team
diff --git a/README.md b/README.md
index fd4efc4..b1a2053 100644
--- a/README.md
+++ b/README.md
@@ -1,31 +1,3 @@
-# CouchDB Documentation [![Build Status](https://travis-ci.org/apache/couchdb-documentation.svg?branch=master)](https://travis-ci.org/apache/couchdb-documentation)
-
-This repository contains the Sphinx source for Apache CouchDB's documentation.
-You can view the latest rendered build of this content at:
-
-    http://docs.couchdb.org/en/latest
-
-# Building this repo
-
-Install Python3 and pip. Then:
-
-```sh
-$ python3 -m venv .venv
-$ source .venv/bin/activate
-$ pip install -r requirements.txt
-$ make html # builds the docs
-$ make check # syntax checks the docs
-```
-
-# Feedback, Issues, Contributing
-
-General feedback is welcome at our [user][1] or [developer][2] mailing lists.
-
-Apache CouchDB has a [CONTRIBUTING][3] file with details on how to get started
-with issue reporting or contributing to the upkeep of this project.
-
-[1]: http://mail-archives.apache.org/mod_mbox/couchdb-user/
-[2]: http://mail-archives.apache.org/mod_mbox/couchdb-dev/
-[3]: https://github.com/apache/couchdb/blob/master/CONTRIBUTING.md
-
+# CouchDB Documentation
 
+You are on the wrong branch. You want the main branch instead.
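
For anyone with a clone of this repository who lands here: switching to
the branch that now carries the content is just (assuming the remote is
named "origin"):

    git fetch origin
    git checkout main
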
diff --git a/ext/configdomain.py b/ext/configdomain.py
deleted file mode 100644
index 66ed532..0000000
--- a/ext/configdomain.py
+++ /dev/null
@@ -1,113 +0,0 @@
-## Licensed under the Apache License, Version 2.0 (the "License"); you may not
-## use this file except in compliance with the License. You may obtain a copy of
-## the License at
-##
-##   http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-## WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-## License for the specific language governing permissions and limitations under
-## the License.
-
-from sphinx import addnodes
-from sphinx.roles import XRefRole
-from sphinx.domains import Domain, ObjType, Index
-from sphinx.directives import ObjectDescription
-from sphinx.util.nodes import make_refnode
-
-
-class ConfigObject(ObjectDescription):
-    def handle_signature(self, sig, signode):
-        if "::" in sig:
-            name, descr = map(lambda i: i.strip(), sig.split("::"))
-        else:
-            name, descr = sig.strip(), ""
-
-        signode["name"] = name
-        signode["descr"] = descr
-
-        domain, objtype = self.name.split(":")
-        if objtype == "section":
-            self.env.temp_data["section"] = signode["name"]
-            name = "[%s]" % signode["name"]
-
-        signode += addnodes.desc_name(name, name)
-
-        return signode["name"]
-
-    def needs_arglist(self):
-        return False
-
-    def add_target_and_index(self, name, sig, signode):
-        section = self.env.temp_data["section"]
-        domain, objtype = self.name.split(":")
-        data = self.env.domaindata[domain][objtype]
-        if objtype == "section":
-            data[name] = (self.env.docname, signode["descr"])
-            signode["ids"].append(signode["name"])
-        elif objtype == "option":
-            idx = "%s/%s" % (section, signode["name"])
-            data[idx] = (self.env.docname, signode["descr"])
-            signode["ids"].append(idx)
-        else:
-            assert "unknown object type %r" % objtype
-
-
-class ConfigIndex(Index):
-
-    name = "ref"
-    localname = "Configuration Quick Reference"
-    shortname = "Config Quick Reference"
-
-    def generate(self, docnames=None):
-        content = dict(
-            (name, [(name, 1, info[0], name, "", "", info[1])])
-            for name, info in self.domain.data["section"].items()
-        )
-
-        options = self.domain.data["option"]
-        for idx, info in sorted(options.items()):
-            path, descr = info
-            section, name = idx.split("/", 1)
-            content[section].append(
-                (name, 2, path, "%s/%s" % (section, name), "", "", descr)
-            )
-
-        return (sorted(content.items()), False)
-
-
-class ConfigDomain(Domain):
-
-    name = "config"
-    label = "CONFIG"
-
-    object_types = {
-        "section": ObjType("section", "section", "obj"),
-        "option": ObjType("option", "option", "obj"),
-    }
-
-    directives = {"section": ConfigObject, "option": ConfigObject}
-
-    roles = {"section": XRefRole(), "option": XRefRole()}
-
-    initial_data = {"section": {}, "option": {}}
-
-    indices = [ConfigIndex]
-
-    def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
-        if typ == "section":
-            info = self.data[typ][target]
-            title = "[%s]" % target
-        elif typ == "option":
-            assert "/" in target, "option without section: %r" % target
-            section, option = target.split("/", 1)
-            info = self.data[typ][target]
-            title = option
-        else:
-            assert "unknown role %r for target %r" % (typ, target)
-        return make_refnode(builder, fromdocname, info[0], target, contnode, title)
-
-
-def setup(app):
-    app.add_domain(ConfigDomain)
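
For context, the extension removed above defined a small Sphinx domain
named "config". In the documentation sources its directives were written
roughly as follows; this is a sketch, with the section and option names
invented for illustration:

    .. config:section:: log :: Logging Options

    .. config:option:: level :: Logging verbosity

Cross-references such as :config:option:`log/level` then resolve through
resolve_xref above, which indexes options in "section/option" form.
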
diff --git a/ext/github.py b/ext/github.py
deleted file mode 100644
index f812d9e..0000000
--- a/ext/github.py
+++ /dev/null
@@ -1,46 +0,0 @@
-## Licensed under the Apache License, Version 2.0 (the "License"); you may not
-## use this file except in compliance with the License. You may obtain a copy of
-## the License at
-##
-##   http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-## WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-## License for the specific language governing permissions and limitations under
-## the License.
-
-import os
-
-
-def get_github_url(app, view, path):
-    return "https://github.com/{project}/{view}/{branch}/{path}".format(
-        project=app.config.github_project,
-        view=view,
-        branch=app.config.github_branch,
-        path=path,
-    )
-
-
-def html_page_context(app, pagename, templatename, context, doctree):
-    # base template for common sphinx pages like search or genindex
-    # there is no need to provide github show/edit links for them
-    if templatename != "page.html":
-        return
-
-    # ok, I'm aware that this is the wrong way to concat url segments
-    # but this one is the most portable between 2.x and 3.x versions,
-    # plus it fits our current requirements. But still, patches are welcome (:
-    path = os.path.join(
-        app.config.github_docs_path,
-        os.path.relpath(doctree.get("source"), app.builder.srcdir),
-    )
-    context["github_show_url"] = get_github_url(app, "blob", path)
-    context["github_edit_url"] = get_github_url(app, "edit", path)
-
-
-def setup(app):
-    app.add_config_value("github_project", "", True)
-    app.add_config_value("github_branch", "master", True)
-    app.add_config_value("github_docs_path", "", True)
-    app.connect("html-page-context", html_page_context)
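
The comment in the removed helper concedes that os.path.join is the wrong
tool for URL path segments, since it uses the host path separator. A
minimal sketch of a more portable join (docs_url_path is a hypothetical
name, not part of this repository):

    import os
    import posixpath

    # Join URL path segments with forward slashes regardless of the
    # host OS path separator; posixpath.join always uses "/".
    def docs_url_path(docs_path, source_file, srcdir):
        rel = os.path.relpath(source_file, srcdir).replace(os.sep, "/")
        return posixpath.join(docs_path, rel)
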
diff --git a/ext/httpdomain.py b/ext/httpdomain.py
deleted file mode 100644
index 5e8803d..0000000
--- a/ext/httpdomain.py
+++ /dev/null
@@ -1,712 +0,0 @@
-"""
-    sphinxcontrib.httpdomain
-    ~~~~~~~~~~~~~~~~~~~~~~~~
-
-    The HTTP domain for documenting RESTful HTTP APIs.
-
-    :copyright: Copyright 2011 by Hong Minhee
-    :license: BSD, see LICENSE for details.
-
-"""
-
-import re
-
-from docutils import nodes
-from docutils.parsers.rst.roles import set_classes
-
-from pygments.lexer import RegexLexer, bygroups
-from pygments.lexers import get_lexer_by_name
-from pygments.token import Literal, Text, Operator, Keyword, Name, Number
-from pygments.util import ClassNotFound
-
-from sphinx import addnodes
-from sphinx.roles import XRefRole
-from sphinx.domains import Domain, ObjType, Index
-from sphinx.directives import ObjectDescription, directives
-from sphinx.util.nodes import make_refnode
-from sphinx.util.docfields import GroupedField, TypedField
-
-
-class DocRef(object):
-    """Represents a link to an RFC which defines an HTTP method."""
-
-    def __init__(self, base_url, anchor, section):
-        """Stores the specified attributes which represent a URL which links to
-        an RFC which defines an HTTP method.
-
-        """
-        self.base_url = base_url
-        self.anchor = anchor
-        self.section = section
-
-    def __repr__(self):
-        """Returns the URL which this object represents, which points to the
-        location of the RFC which defines some HTTP method.
-
-        """
-        return "{0}#{1}{2}".format(self.base_url, self.anchor, self.section)
-
-
-class RFC2616Ref(DocRef):
-    def __init__(self, section):
-        url = "http://www.w3.org/Protocols/rfc2616/rfc2616-sec{0:d}.html"
-        url = url.format(int(section))
-        super(RFC2616Ref, self).__init__(url, "sec", section)
-
-
-class IETFRef(DocRef):
-    def __init__(self, rfc, section):
-        url = "http://tools.ietf.org/html/rfc{0:d}".format(rfc)
-        super(IETFRef, self).__init__(url, "section-", section)
-
-
-class EventSourceRef(DocRef):
-    def __init__(self, section):
-        url = "http://www.w3.org/TR/eventsource/"
-        super(EventSourceRef, self).__init__(url, section, "")
-
-
-#: Mapping from lowercase HTTP method name to :class:`DocRef` object which
-#: maintains the URL which points to the section of the RFC which defines that
-#: HTTP method.
-METHOD_REFS = {
-    "patch": IETFRef(5789, 2),
-    "options": RFC2616Ref(9.2),
-    "get": RFC2616Ref(9.3),
-    "head": RFC2616Ref(9.4),
-    "post": RFC2616Ref(9.5),
-    "put": RFC2616Ref(9.6),
-    "delete": RFC2616Ref(9.7),
-    "trace": RFC2616Ref(9.8),
-    "connect": RFC2616Ref(9.9),
-    "copy": IETFRef(2518, 8.8),
-    "any": "",
-}
-
-#: Mapping from HTTP header name to :class:`DocRef` object which
-#: maintains the URL which points to the related section of the RFC.
-HEADER_REFS = {
-    "Accept": RFC2616Ref(14.1),
-    "Accept-Charset": RFC2616Ref(14.2),
-    "Accept-Encoding": RFC2616Ref(14.3),
-    "Accept-Language": RFC2616Ref(14.4),
-    "Accept-Ranges": RFC2616Ref(14.5),
-    "Age": RFC2616Ref(14.6),
-    "Allow": RFC2616Ref(14.7),
-    "Authorization": RFC2616Ref(14.8),
-    "Cache-Control": RFC2616Ref(14.9),
-    "Cookie": IETFRef(2109, "4.3.4"),
-    "Connection": RFC2616Ref(14.10),
-    "Content-Encoding": RFC2616Ref(14.11),
-    "Content-Language": RFC2616Ref(14.12),
-    "Content-Length": RFC2616Ref(14.13),
-    "Content-Location": RFC2616Ref(14.14),
-    "Content-MD5": RFC2616Ref(14.15),
-    "Content-Range": RFC2616Ref(14.16),
-    "Content-Type": RFC2616Ref(14.17),
-    "Date": RFC2616Ref(14.18),
-    "Destination": IETFRef(2518, 9.3),
-    "ETag": RFC2616Ref(14.19),
-    "Expect": RFC2616Ref(14.20),
-    "Expires": RFC2616Ref(14.21),
-    "From": RFC2616Ref(14.22),
-    "Host": RFC2616Ref(14.23),
-    "If-Match": RFC2616Ref(14.24),
-    "If-Modified-Since": RFC2616Ref(14.25),
-    "If-None-Match": RFC2616Ref(14.26),
-    "If-Range": RFC2616Ref(14.27),
-    "If-Unmodified-Since": RFC2616Ref(14.28),
-    "Last-Event-ID": EventSourceRef("last-event-id"),
-    "Last-Modified": RFC2616Ref(14.29),
-    "Location": RFC2616Ref(14.30),
-    "Max-Forwards": RFC2616Ref(14.31),
-    "Pragma": RFC2616Ref(14.32),
-    "Proxy-Authenticate": RFC2616Ref(14.33),
-    "Proxy-Authorization": RFC2616Ref(14.34),
-    "Range": RFC2616Ref(14.35),
-    "Referer": RFC2616Ref(14.36),
-    "Retry-After": RFC2616Ref(14.37),
-    "Server": RFC2616Ref(14.38),
-    "Set-Cookie": IETFRef(2109, "4.2.2"),
-    "TE": RFC2616Ref(14.39),
-    "Trailer": RFC2616Ref(14.40),
-    "Transfer-Encoding": RFC2616Ref(14.41),
-    "Upgrade": RFC2616Ref(14.42),
-    "User-Agent": RFC2616Ref(14.43),
-    "Vary": RFC2616Ref(14.44),
-    "Via": RFC2616Ref(14.45),
-    "Warning": RFC2616Ref(14.46),
-    "WWW-Authenticate": RFC2616Ref(14.47),
-}
-
-
-HTTP_STATUS_CODES = {
-    100: "Continue",
-    101: "Switching Protocols",
-    102: "Processing",
-    200: "OK",
-    201: "Created",
-    202: "Accepted",
-    203: "Non Authoritative Information",
-    204: "No Content",
-    205: "Reset Content",
-    206: "Partial Content",
-    207: "Multi Status",
-    226: "IM Used",  # see RFC 3229
-    300: "Multiple Choices",
-    301: "Moved Permanently",
-    302: "Found",
-    303: "See Other",
-    304: "Not Modified",
-    305: "Use Proxy",
-    307: "Temporary Redirect",
-    400: "Bad Request",
-    401: "Unauthorized",
-    402: "Payment Required",  # unused
-    403: "Forbidden",
-    404: "Not Found",
-    405: "Method Not Allowed",
-    406: "Not Acceptable",
-    407: "Proxy Authentication Required",
-    408: "Request Timeout",
-    409: "Conflict",
-    410: "Gone",
-    411: "Length Required",
-    412: "Precondition Failed",
-    413: "Request Entity Too Large",
-    414: "Request URI Too Long",
-    415: "Unsupported Media Type",
-    416: "Requested Range Not Satisfiable",
-    417: "Expectation Failed",
-    418: "I'm a teapot",  # see RFC 2324
-    422: "Unprocessable Entity",
-    423: "Locked",
-    424: "Failed Dependency",
-    426: "Upgrade Required",
-    449: "Retry With",  # proprietary MS extension
-    500: "Internal Server Error",
-    501: "Not Implemented",
-    502: "Bad Gateway",
-    503: "Service Unavailable",
-    504: "Gateway Timeout",
-    505: "HTTP Version Not Supported",
-    507: "Insufficient Storage",
-    510: "Not Extended",
-}
-
-http_sig_param_re = re.compile(
-    r"\((?:(?P<type>[^:)]+):)?(?P<name>[\w_]+)\)", re.VERBOSE
-)
-
-
-def sort_by_method(entries):
-    def cmp(item):
-        order = ["HEAD", "GET", "POST", "PUT", "DELETE", "COPY", "OPTIONS"]
-        method = item[0].split(" ", 1)[0]
-        if method in order:
-            return order.index(method)
-        return 100
-
-    return sorted(entries, key=cmp)
-
-
-def http_resource_anchor(method, path):
-    path = re.sub(r"[{}]", "", re.sub(r"[<>:/]", "-", path))
-    return method.lower() + "-" + path
-
-
-class HTTPResource(ObjectDescription):
-
-    doc_field_types = [
-        TypedField(
-            "parameter",
-            label="Parameters",
-            names=("param", "parameter", "arg", "argument"),
-            typerolename="obj",
-            typenames=("paramtype", "type"),
-        ),
-        TypedField(
-            "jsonobject",
-            label="JSON Object",
-            names=("jsonparameter", "jsonparam", "json"),
-            typerolename="obj",
-            typenames=("jsonparamtype", "jsontype"),
-        ),
-        TypedField(
-            "requestjsonobject",
-            label="Request JSON Object",
-            names=("reqjsonobj", "reqjson", "<jsonobj", "<json"),
-            typerolename="obj",
-            typenames=("reqjsontype", "<jsontype"),
-        ),
-        TypedField(
-            "requestjsonarray",
-            label="Request JSON Array of Objects",
-            names=("reqjsonarr", "<jsonarr"),
-            typerolename="obj",
-            typenames=("reqjsonarrtype", "<jsonarrtype"),
-        ),
-        TypedField(
-            "responsejsonobject",
-            label="Response JSON Object",
-            names=("resjsonobj", "resjson", ">jsonobj", ">json"),
-            typerolename="obj",
-            typenames=("resjsontype", ">jsontype"),
-        ),
-        TypedField(
-            "responsejsonarray",
-            label="Response JSON Array of Objects",
-            names=("resjsonarr", ">jsonarr"),
-            typerolename="obj",
-            typenames=("resjsonarrtype", ">jsonarrtype"),
-        ),
-        TypedField(
-            "queryparameter",
-            label="Query Parameters",
-            names=("queryparameter", "queryparam", "qparam", "query"),
-            typerolename="obj",
-            typenames=("queryparamtype", "querytype", "qtype"),
-        ),
-        GroupedField(
-            "formparameter",
-            label="Form Parameters",
-            names=("formparameter", "formparam", "fparam", "form"),
-        ),
-        GroupedField(
-            "requestheader",
-            label="Request Headers",
-            rolename="mailheader",
-            names=("<header", "reqheader", "requestheader"),
-        ),
-        GroupedField(
-            "responseheader",
-            label="Response Headers",
-            rolename="mailheader",
-            names=(">header", "resheader", "responseheader"),
-        ),
-        GroupedField(
-            "statuscode",
-            label="Status Codes",
-            rolename="statuscode",
-            names=("statuscode", "status", "code"),
-        ),
-    ]
-
-    option_spec = {
-        "deprecated": directives.flag,
-        "noindex": directives.flag,
-        "synopsis": lambda x: x,
-    }
-
-    method = NotImplemented
-
-    def handle_signature(self, sig, signode):
-        method = self.method.upper() + " "
-        signode += addnodes.desc_name(method, method)
-        offset = 0
-        path = None
-        for match in http_sig_param_re.finditer(sig):
-            path = sig[offset : match.start()]
-            signode += addnodes.desc_name(path, path)
-            params = addnodes.desc_parameterlist()
-            typ = match.group("type")
-            if typ:
-                typ += ": "
-                params += addnodes.desc_annotation(typ, typ)
-            name = match.group("name")
-            params += addnodes.desc_parameter(name, name)
-            signode += params
-            offset = match.end()
-        if offset < len(sig):
-            path = sig[offset : len(sig)]
-            signode += addnodes.desc_name(path, path)
-        if path is None:
-            assert False, "no matches for sig: %s" % sig
-        fullname = self.method.upper() + " " + path
-        signode["method"] = self.method
-        signode["path"] = sig
-        signode["fullname"] = fullname
-        return (fullname, self.method, sig)
-
-    def needs_arglist(self):
-        return False
-
-    def add_target_and_index(self, name_cls, sig, signode):
-        signode["ids"].append(http_resource_anchor(*name_cls[1:]))
-        if "noindex" not in self.options:
-            self.env.domaindata["http"][self.method][sig] = (
-                self.env.docname,
-                self.options.get("synopsis", ""),
-                "deprecated" in self.options,
-            )
-
-    def get_index_text(self, modname, name):
-        return ""
-
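-# A sketch of how the method directives below appear in reST source
-# (illustrative; the option names come from option_spec above):
-#
-#   .. http:get:: /{db}
-#      :synopsis: Returns database information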
-
-class HTTPOptions(HTTPResource):
-
-    method = "options"
-
-
-class HTTPHead(HTTPResource):
-
-    method = "head"
-
-
-class HTTPPatch(HTTPResource):
-
-    method = "patch"
-
-
-class HTTPPost(HTTPResource):
-
-    method = "post"
-
-
-class HTTPGet(HTTPResource):
-
-    method = "get"
-
-
-class HTTPPut(HTTPResource):
-
-    method = "put"
-
-
-class HTTPDelete(HTTPResource):
-
-    method = "delete"
-
-
-class HTTPTrace(HTTPResource):
-
-    method = "trace"
-
-
-class HTTPCopy(HTTPResource):
-
-    method = "copy"
-
-
-class HTTPAny(HTTPResource):
-
-    method = "any"
-
-
-def http_statuscode_role(
-    name, rawtext, text, lineno, inliner, options=None, content=None
-):
-    if options is None:
-        options = {}
-    if content is None:
-        content = []
-    if text.isdigit():
-        code = int(text)
-        try:
-            status = HTTP_STATUS_CODES[code]
-        except KeyError:
-            msg = inliner.reporter.error(
-                "%d is not a valid HTTP status code" % code, line=lineno
-            )
-            prb = inliner.problematic(rawtext, rawtext, msg)
-            return [prb], [msg]
-    else:
-        try:
-            code, status = re.split(r"\s", text.strip(), maxsplit=1)
-            code = int(code)
-        except ValueError:
-            msg = inliner.reporter.error(
-                "HTTP status code must be an integer (e.g. `200`) or "
-                "start with an integer (e.g. `200 OK`); %r is invalid" % text,
-                line=lineno,
-            )
-            prb = inliner.problematic(rawtext, rawtext, msg)
-            return [prb], [msg]
-    if code == 226:
-        url = "http://www.ietf.org/rfc/rfc3229.txt"
-    elif code == 418:
-        url = "http://www.ietf.org/rfc/rfc2324.txt"
-    elif code == 449:
-        url = "http://msdn.microsoft.com/en-us/library/dd891478(v=prot.10).aspx"
-    elif code in HTTP_STATUS_CODES:
-        url = "http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10." + (
-            "%d.%d" % (code // 100, 1 + code % 100)
-        )
-    else:
-        url = ""
-    set_classes(options)
-    node = nodes.reference(rawtext, "%d %s" % (code, status), refuri=url, **options)
-    return [node], []
-
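-# For example (illustrative), :http:statuscode:`404` renders as a reference
-# "404 Not Found" pointing at rfc2616-sec10.html#sec10.4.5.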
-
-def http_method_role(name, rawtext, text, lineno, inliner, options=None, content=None):
-    if options is None:
-        options = {}
-    if content is None:
-        content = []
-    method = str(text).lower()
-    if method not in METHOD_REFS:
-        msg = inliner.reporter.error(
-            "%s is not a valid HTTP method" % method, line=lineno
-        )
-        prb = inliner.problematic(rawtext, rawtext, msg)
-        return [prb], [msg]
-    url = str(METHOD_REFS[method])
-    node = nodes.reference(rawtext, method.upper(), refuri=url, **options)
-    return [node], []
-
-
-def http_header_role(name, rawtext, text, lineno, inliner, options=None, content=None):
-    if options is None:
-        options = {}
-    if content is None:
-        content = []
-    header = str(text)
-    if header not in HEADER_REFS:
-        header = header.title()
-    if header not in HEADER_REFS:
-        if header.startswith(("X-Couch-", "Couch-")):
-            return [nodes.strong(header, header)], []
-        msg = inliner.reporter.error(
-            "%s is an unknown HTTP header" % header, line=lineno
-        )
-        prb = inliner.problematic(rawtext, rawtext, msg)
-        return [prb], [msg]
-    url = str(HEADER_REFS[header])
-    node = nodes.reference(rawtext, header, refuri=url, **options)
-    return [node], []
-
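-# For example (illustrative), :http:header:`ETag` becomes a link to that
-# header's reference (when it is listed in HEADER_REFS), while
-# CouchDB-specific headers such as X-Couch-Request-ID render in bold
-# without a link.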
-
-class HTTPXRefRole(XRefRole):
-    def __init__(self, method, **kwargs):
-        XRefRole.__init__(self, **kwargs)
-        self.method = method
-
-    def process_link(self, env, refnode, has_explicit_title, title, target):
-        if not has_explicit_title:
-            title = self.method.upper() + " " + title
-        return title, target
-
-
-class HTTPIndex(Index):
-
-    name = "api"
-    localname = "API Quick Reference"
-    shortname = "API Reference"
-
-    def generate(self, docnames=None):
-        content = {}
-        items = (
-            (method, path, info)
-            for method, routes in self.domain.routes.items()
-            for path, info in routes.items()
-        )
-        items = sorted(items, key=lambda item: item[1])
-        for method, path, info in items:
-            entries = content.setdefault(path, [])
-            entry_name = method.upper() + " " + path
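-            # Entries follow the Sphinx index entry layout:
-            # [name, subtype, docname, anchor, extra, qualifier, description]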
-            entries.append(
-                [
-                    entry_name,
-                    0,
-                    info[0],
-                    http_resource_anchor(method, path),
-                    "",
-                    "Deprecated" if info[2] else "",
-                    info[1],
-                ]
-            )
-        items = sorted(
-            (path, sort_by_method(entries)) for path, entries in content.items()
-        )
-        return (items, True)
-
-
-class HTTPDomain(Domain):
-    """HTTP domain."""
-
-    name = "http"
-    label = "HTTP"
-
-    object_types = {
-        "options": ObjType("options", "options", "obj"),
-        "head": ObjType("head", "head", "obj"),
-        "post": ObjType("post", "post", "obj"),
-        "get": ObjType("get", "get", "obj"),
-        "put": ObjType("put", "put", "obj"),
-        "patch": ObjType("patch", "patch", "obj"),
-        "delete": ObjType("delete", "delete", "obj"),
-        "trace": ObjType("trace", "trace", "obj"),
-        "copy": ObjType("copy", "copy", "obj"),
-        "any": ObjType("any", "any", "obj"),
-    }
-
-    directives = {
-        "options": HTTPOptions,
-        "head": HTTPHead,
-        "post": HTTPPost,
-        "get": HTTPGet,
-        "put": HTTPPut,
-        "patch": HTTPPatch,
-        "delete": HTTPDelete,
-        "trace": HTTPTrace,
-        "copy": HTTPCopy,
-        "any": HTTPAny,
-    }
-
-    roles = {
-        "options": HTTPXRefRole("options"),
-        "head": HTTPXRefRole("head"),
-        "post": HTTPXRefRole("post"),
-        "get": HTTPXRefRole("get"),
-        "put": HTTPXRefRole("put"),
-        "patch": HTTPXRefRole("patch"),
-        "delete": HTTPXRefRole("delete"),
-        "trace": HTTPXRefRole("trace"),
-        "copy": HTTPXRefRole("copy"),
-        "all": HTTPXRefRole("all"),
-        "statuscode": http_statuscode_role,
-        "method": http_method_role,
-        "header": http_header_role,
-    }
-
-    initial_data = {
-        "options": {},  # path: (docname, synopsis)
-        "head": {},
-        "post": {},
-        "get": {},
-        "put": {},
-        "patch": {},
-        "delete": {},
-        "trace": {},
-        "copy": {},
-        "any": {},
-    }
-
-    indices = [HTTPIndex]
-
-    @property
-    def routes(self):
-        return {key: self.data[key] for key in self.object_types}
-
-    def clear_doc(self, docname):
-        for typ, routes in self.routes.items():
-            for path, info in list(routes.items()):
-                if info[0] == docname:
-                    del routes[path]
-
-    def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
-        try:
-            info = self.data[str(typ)][target]
-        except KeyError:
-            text = contnode.rawsource
-            if typ == "statuscode":
-                return http_statuscode_role(None, text, text, None, None)[0][0]
-            elif typ == "mailheader":
-                return http_header_role(None, text, text, None, None)[0][0]
-            else:
-                return nodes.emphasis(text, text)
-        else:
-            anchor = http_resource_anchor(typ, target)
-            title = typ.upper() + " " + target
-            return make_refnode(builder, fromdocname, info[0], anchor, contnode, title)
-
-    def get_objects(self):
-        for method, routes in self.routes.items():
-            for path, info in routes.items():
-                anchor = http_resource_anchor(method, path)
-                yield (path, path, method, info[0], anchor, 1)
-
-
-class HTTPLexer(RegexLexer):
-    """Lexer for HTTP sessions."""
-
-    name = "HTTP"
-    aliases = ["http"]
-
-    flags = re.DOTALL
-
-    def header_callback(self, match):
-        if match.group(1).lower() == "content-type":
-            content_type = match.group(5).strip()
-            if ";" in content_type:
-                content_type = content_type[: content_type.find(";")].strip()
-            self.content_type = content_type
-        yield match.start(1), Name.Attribute, match.group(1)
-        yield match.start(2), Text, match.group(2)
-        yield match.start(3), Operator, match.group(3)
-        yield match.start(4), Text, match.group(4)
-        yield match.start(5), Literal, match.group(5)
-        yield match.start(6), Text, match.group(6)
-
-    def continuous_header_callback(self, match):
-        yield match.start(1), Text, match.group(1)
-        yield match.start(2), Literal, match.group(2)
-        yield match.start(3), Text, match.group(3)
-
-    def content_callback(self, match):
-        content_type = getattr(self, "content_type", None)
-        content = match.group()
-        offset = match.start()
-        if content_type:
-            from pygments.lexers import get_lexer_for_mimetype
-
-            try:
-                lexer = get_lexer_for_mimetype(content_type)
-            except ClassNotFound:
-                pass
-            else:
-                for idx, token, value in lexer.get_tokens_unprocessed(content):
-                    yield offset + idx, token, value
-                return
-        yield offset, Text, content
-
-    tokens = {
-        "root": [
-            (
-                r"(GET|POST|PUT|PATCH|DELETE|HEAD|OPTIONS|TRACE|COPY)"
-                r"( +)([^ ]+)( +)"
-                r"(HTTPS?)(/)(1\.[01])(\r?\n|$)",
-                bygroups(
-                    Name.Function,
-                    Text,
-                    Name.Namespace,
-                    Text,
-                    Keyword.Reserved,
-                    Operator,
-                    Number,
-                    Text,
-                ),
-                "headers",
-            ),
-            (
-                r"(HTTPS?)(/)(1\.[01])( +)(\d{3})( +)([^\r\n]+)(\r?\n|$)",
-                bygroups(
-                    Keyword.Reserved,
-                    Operator,
-                    Number,
-                    Text,
-                    Number,
-                    Text,
-                    Name.Exception,
-                    Text,
-                ),
-                "headers",
-            ),
-        ],
-        "headers": [
-            (r"([^\s:]+)( *)(:)( *)([^\r\n]+)(\r?\n|$)", header_callback),
-            (r"([\t ]+)([^\r\n]+)(\r?\n|$)", continuous_header_callback),
-            (r"\r?\n", Text, "content"),
-        ],
-        "content": [(r".+", content_callback)],
-    }
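-
-    # An illustrative session this lexer can highlight; the blank line
-    # switches to the "content" state, and the Content-Type header makes
-    # content_callback hand the body to the JSON lexer:
-    #
-    #   HTTP/1.1 200 OK
-    #   Content-Type: application/json
-    #
-    #   {"ok": true}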
-
-
-def setup(app):
-    app.add_domain(HTTPDomain)
-    app.add_lexer("http", HTTPLexer())
diff --git a/ext/linter.py b/ext/linter.py
deleted file mode 100644
index 74b4d9d..0000000
--- a/ext/linter.py
+++ /dev/null
@@ -1,293 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License"); you may not
-# use this file except in compliance with the License. You may obtain a copy of
-# the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations under
-# the License.
-
-
-# This is a very, very simple linter, written in one evening with no
-# ambition of being great; it is just a thing that works.
-
-import os
-import re
-import sys
-
-
-RULES = []
-HAS_ERRORS = False
-IGNORE_ERROR = False
-
-
-def error_report(file, line, msg, _state=[]):
-    # The mutable default argument is used deliberately: _state remembers
-    # the last reported file name so that each name is printed only once.
-    global HAS_ERRORS
-    if IGNORE_ERROR:
-        return
-    if not _state or _state[0] != file.name:
-        _state[:] = [file.name]
-        sys.stderr.write(file.name + "\n")
-    sys.stderr.write(" ".join(["  line", str(line), ":", msg]) + "\n")
-    HAS_ERRORS = True
-
-
-def register_rule(func):
-    RULES.append(func)
-    return func
-
-
-def main(path):
-    for file in iter_rst_files(os.path.abspath(path)):
-        validate(file)
-    sys.exit(HAS_ERRORS)
-
-
-def iter_rst_files(path):
-    if os.path.isfile(path):
-        # Open in binary mode to match the os.walk branch below;
-        # validate() decodes each line from UTF-8 itself.
-        with open(path, "rb") as f:
-            yield f
-        return
-    for root, dirs, files in os.walk(path):
-        for file in files:
-            if file.endswith(".rst"):
-                with open(os.path.join(root, file), "rb") as f:
-                    yield f
-
-
-def validate(file):
-    global IGNORE_ERROR
-    IGNORE_ERROR = False
-    rules = [rule(file) for rule in RULES]
-    for rule in rules:
-        # advance each coroutine to its first yield
-        next(rule)
-    while True:
-        line = file.readline().decode("utf-8")
-        exhausted = []
-        for idx, rule in enumerate(rules):
-            try:
-                error = rule.send(line)
-            except StopIteration:
-                exhausted.append(rule)
-            else:
-                if error:
-                    error_report(*error)
-
-        # Not the most efficient way to drop exhausted rules, but the
-        # list is tiny.
-        for rule in exhausted:
-            rules.remove(rule)
-
-        if not line:
-            break
-
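-
-# Each rule is a generator-based coroutine: it receives file lines via
-# send() and yields either None or a (file, line_number, message) tuple.
-# A minimal rule might look like this (illustrative only):
-#
-#     @register_rule
-#     def no_foo(file):
-#         n = 0
-#         error = None
-#         while True:
-#             line = yield error
-#             error = None
-#             if not line:
-#                 break
-#             n += 1
-#             if "foo" in line:
-#                 error = (file, n, "no foo please")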
-
-@register_rule
-def silent_scream(file):
-    """Sometimes we must accept presence of some errors by some relevant
-    reasons. Here we're doing that."""
-    global IGNORE_ERROR
-    counter = 0
-    while True:
-        line = yield None
-        if not line:
-            break
-
-        if counter:
-            IGNORE_ERROR = True
-            counter -= 1
-        else:
-            IGNORE_ERROR = False
-
-        match = re.match(r"\s*\.\. lint: ignore errors for the next (\d+) lines?", line)
-        if match:
-            # +1 accounts for the empty line right after the comment
-            counter = int(match.group(1)) + 1
-
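-# Example of the directive in an .rst source file (illustrative):
-#
-#   .. lint: ignore errors for the next 2 lines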
-
-@register_rule
-def license_adviser(file):
-    """Each source file must include ASF license header."""
-    header = iter(
-        """
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-""".lstrip().splitlines(
-            False
-        )
-    )
-    error = None
-    for n, hline in enumerate(header):
-        fline = yield error
-        error = None
-        if hline != fline.strip("\r\n"):
-            error = (
-                file,
-                n + 1,
-                "bad ASF license header\n"
-                "  expected: {0}\n"
-                "  found:    {1}".format(hline, fline.strip()),
-            )
-    # Flush the error (if any) for the final header line; without this the
-    # last comparison would be silently dropped when the loop ends.
-    yield error
-
-
-@register_rule
-def whitespace_committee(file):
-    """Whitespace committee takes care about whitespace (surprise!) characters
-    in files. The documentation style guide says:
-
-    - There should be no trailing white space;
-    - More than one emtpy lines are not allowed and there shouldn't be such
-      at the end of file;
-    - The last line should ends with newline character
-
-    Additionally it alerts about for tabs if they were used instead of spaces.
-
-    TODO: check for indention
-    """
-    error = prev = None
-    n = 0
-    while True:
-        line = yield error
-        error = None
-        if not line:
-            break
-        n += 1
-
-        # Check for trailing whitespace
-        if line.strip("\r\n").endswith(" "):
-            error = (file, n, "trailing whitespace detected!\n{0}".format(line))
-
-        # Check for consecutive empty lines
-        if prev is not None:
-            if prev.strip() == line.strip() == "":
-                error = (file, n, "too many empty lines")
-
-        # Nobody loves a tabs-and-spaces cocktail; we prefer spaces
-        if "\t" in line:
-            error = (file, n, "no tabs please")
-
-        prev = line
-
-    # Accidentally empty file committed?
-    if prev is None:
-        error = (file, 0, "oh no! file seems empty!")
-
-    # Empty last lines are not welcome
-    elif prev.strip() == "":
-        error = (file, n, "no empty last lines please")
-
-    # The last line should end with a newline character
-    elif not prev.endswith("\n"):
-        error = (file, n, "last line should end with a newline character")
-
-    yield error
-    return
-
-
-@register_rule
-def line_length_checker(file):
-    """Use a modern max line length of 90 chars, as recommended by things like
-    https://github.com/ambv/black and https://youtu.be/wf-BqAjZb8M?t=260 .
-    """
-    in_code_block = False
-    seen_emptyline = False
-    n = 0
-    error = None
-    while True:
-        line = yield error
-        error = None
-        if not line:
-            break
-        n += 1
-        line = line.rstrip()
-
-        # We have to ignore stuff in code blocks, since it is hard to keep
-        # it within a 90-character-wide box.
-        if line.strip().startswith(".. code") or line.endswith("::"):
-            in_code_block = True
-            continue
-
-        # Check the line length unless we are inside a code block
-        if len(line) > 90 and not in_code_block:
-            if line.startswith(".."):
-                # Ignore long lines with external links
-                continue
-
-            if line.endswith(">`_"):
-                # Ignore long lines caused by URLs
-                # TODO: be smarter about this
-                continue
-
-            error = (
-                file,
-                n,
-                "too long ({0} > 90) line\n{1}\n".format(len(line), line),
-            )
-
-        # Empty lines act as separators for code block content
-        elif not line:
-            seen_emptyline = True
-
-        # If we saw an empty line and the next content is not indented,
-        # we are most likely past the end of the code block (if we were
-        # ever in one).
-        elif seen_emptyline and line and not line.startswith(" "):
-            seen_emptyline = False
-            in_code_block = False
-
-        else:
-            seen_emptyline = False
-
-
-@register_rule
-def my_lovely_hat(file):
-    """Everyone loves to wear a nice hat on they head, so articles does too."""
-    error = None
-    n = 0
-    while True:
-        line = yield error
-        error = None
-        if not line:
-            break
-        n += 1
-
-        line = line.strip()
-
-        if not line:
-            continue
-
-        if line.startswith(".."):
-            continue
-
-        if set(line) < set(["#", "-", "=", "*"]):
-            break
-        else:
-            lines = [line, "\n", (yield None), (yield None)]
-            yield (file, n, "bad title header:\n{}".format("".join(lines)))
-            return
-
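-# A proper "hat" (title adornment) looks like this in reST (illustrative):
-#
-#   ============
-#   Introduction
-#   ============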
-
-if __name__ == "__main__":
-    if len(sys.argv) == 1:
-        sys.stderr.write("Missing argument: a target path is required\n")
-        sys.exit(2)
-    main(sys.argv[1])
diff --git a/images/23379351593_0c480537de_q.jpg b/images/23379351593_0c480537de_q.jpg
deleted file mode 100644
index e004b13..0000000
Binary files a/images/23379351593_0c480537de_q.jpg and /dev/null differ
diff --git a/images/epub-icon.png b/images/epub-icon.png
deleted file mode 100644
index 3fda935..0000000
Binary files a/images/epub-icon.png and /dev/null differ
diff --git a/images/favicon.ico b/images/favicon.ico
deleted file mode 100644
index e538aea..0000000
Binary files a/images/favicon.ico and /dev/null differ
diff --git a/images/futon-createdb.png b/images/futon-createdb.png
deleted file mode 100644
index c8c1b9d..0000000
Binary files a/images/futon-createdb.png and /dev/null differ
diff --git a/images/futon-editdoc.png b/images/futon-editdoc.png
deleted file mode 100644
index f31dbbe..0000000
Binary files a/images/futon-editdoc.png and /dev/null differ
diff --git a/images/futon-editeddoc.png b/images/futon-editeddoc.png
deleted file mode 100644
index a5913bc..0000000
Binary files a/images/futon-editeddoc.png and /dev/null differ
diff --git a/images/futon-overview.png b/images/futon-overview.png
deleted file mode 100644
index e1daf5c..0000000
Binary files a/images/futon-overview.png and /dev/null differ
diff --git a/images/futon-replform.png b/images/futon-replform.png
deleted file mode 100644
index 72b9ff5..0000000
Binary files a/images/futon-replform.png and /dev/null differ
diff --git a/images/gf-gnome-rainbows.png b/images/gf-gnome-rainbows.png
deleted file mode 100644
index 07c7145..0000000
Binary files a/images/gf-gnome-rainbows.png and /dev/null differ
diff --git a/images/intro-consistency-01.png b/images/intro-consistency-01.png
deleted file mode 100644
index a577059..0000000
Binary files a/images/intro-consistency-01.png and /dev/null differ
diff --git a/images/intro-consistency-02.png b/images/intro-consistency-02.png
deleted file mode 100644
index 06c23ea..0000000
Binary files a/images/intro-consistency-02.png and /dev/null differ
diff --git a/images/intro-consistency-03.png b/images/intro-consistency-03.png
deleted file mode 100644
index 2164c6c..0000000
Binary files a/images/intro-consistency-03.png and /dev/null differ
diff --git a/images/intro-consistency-04.png b/images/intro-consistency-04.png
deleted file mode 100644
index 068fa77..0000000
Binary files a/images/intro-consistency-04.png and /dev/null differ
diff --git a/images/intro-consistency-05.png b/images/intro-consistency-05.png
deleted file mode 100644
index a94f9c3..0000000
Binary files a/images/intro-consistency-05.png and /dev/null differ
diff --git a/images/intro-consistency-06.png b/images/intro-consistency-06.png
deleted file mode 100644
index af316d4..0000000
Binary files a/images/intro-consistency-06.png and /dev/null differ
diff --git a/images/intro-consistency-07.png b/images/intro-consistency-07.png
deleted file mode 100644
index 7fb5027..0000000
Binary files a/images/intro-consistency-07.png and /dev/null differ
diff --git a/images/intro-tour-01.png b/images/intro-tour-01.png
deleted file mode 100644
index e6fe9df..0000000
Binary files a/images/intro-tour-01.png and /dev/null differ
diff --git a/images/intro-tour-03.png b/images/intro-tour-03.png
deleted file mode 100644
index 7137583..0000000
Binary files a/images/intro-tour-03.png and /dev/null differ
diff --git a/images/intro-tour-04.png b/images/intro-tour-04.png
deleted file mode 100644
index 7bc5678..0000000
Binary files a/images/intro-tour-04.png and /dev/null differ
diff --git a/images/intro-tour-05.png b/images/intro-tour-05.png
deleted file mode 100644
index 972cb65..0000000
Binary files a/images/intro-tour-05.png and /dev/null differ
diff --git a/images/intro-tour-06.png b/images/intro-tour-06.png
deleted file mode 100644
index 9f27df1..0000000
Binary files a/images/intro-tour-06.png and /dev/null differ
diff --git a/images/intro-tour-07.png b/images/intro-tour-07.png
deleted file mode 100644
index 229ce63..0000000
Binary files a/images/intro-tour-07.png and /dev/null differ
diff --git a/images/intro-tour-08.png b/images/intro-tour-08.png
deleted file mode 100644
index 4aa549b..0000000
Binary files a/images/intro-tour-08.png and /dev/null differ
diff --git a/images/intro-tour-09.png b/images/intro-tour-09.png
deleted file mode 100644
index b850ade..0000000
Binary files a/images/intro-tour-09.png and /dev/null differ
diff --git a/images/intro-tour-10.png b/images/intro-tour-10.png
deleted file mode 100644
index 68038bf..0000000
Binary files a/images/intro-tour-10.png and /dev/null differ
diff --git a/images/intro-why-01.png b/images/intro-why-01.png
deleted file mode 100644
index c927450..0000000
Binary files a/images/intro-why-01.png and /dev/null differ
diff --git a/images/intro-why-02.png b/images/intro-why-02.png
deleted file mode 100644
index a5bb4ce..0000000
Binary files a/images/intro-why-02.png and /dev/null differ
diff --git a/images/intro-why-03.png b/images/intro-why-03.png
deleted file mode 100644
index 1f5e536..0000000
Binary files a/images/intro-why-03.png and /dev/null differ
diff --git a/images/logo.png b/images/logo.png
deleted file mode 100644
index 553f31c..0000000
Binary files a/images/logo.png and /dev/null differ
diff --git a/images/purge-checkpoint-docs.png b/images/purge-checkpoint-docs.png
deleted file mode 100644
index 0480aa3..0000000
Binary files a/images/purge-checkpoint-docs.png and /dev/null differ
diff --git a/images/replication-state-diagram.svg b/images/replication-state-diagram.svg
deleted file mode 100644
index c1dc1f7..0000000
--- a/images/replication-state-diagram.svg
+++ /dev/null
@@ -1,419 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
-<svg version="1.2" width="215.9mm" height="279.4mm" viewBox="0 0 21590 27940" preserveAspectRatio="xMidYMid" fill-rule="evenodd" stroke-width="28.222" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg" xmlns:ooo="http://xml.openoffice.org/svg/export" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:presentation="http://sun.com/xmlns/staroffice/presentation" xmlns:smil="http://www.w3.org/2001/SMIL20/" xmlns:anim="urn:oasis:names:tc:opendocument:xmlns:animation:1.0" xml:space="preserve">
- <defs class="ClipPathGroup">
-  <clipPath id="presentation_clip_path" clipPathUnits="userSpaceOnUse">
-   <rect x="0" y="0" width="21590" height="27940"/>
-  </clipPath>
-  <clipPath id="presentation_clip_path_shrink" clipPathUnits="userSpaceOnUse">
-   <rect x="21" y="27" width="21547" height="27885"/>
-  </clipPath>
- </defs>
- <defs>
-  <font id="EmbeddedFont_1" horiz-adv-x="2048">
-   <font-face font-family="Liberation Sans embedded" units-per-em="2048" font-weight="normal" font-style="normal" ascent="1839" descent="421"/>
-   <missing-glyph horiz-adv-x="2048" d="M 0,0 L 2047,0 2047,2047 0,2047 0,0 Z"/>
-   <glyph unicode="z" horiz-adv-x="842" d="M 83,0 L 83,137 688,943 117,943 117,1082 901,1082 901,945 295,139 922,139 922,0 Z"/>
-   <glyph unicode="y" horiz-adv-x="1014" d="M 191,-425 C 142,-425 100,-421 67,-414 L 67,-279 C 92,-283 120,-285 151,-285 263,-285 352,-203 417,-38 L 434,5 5,1082 197,1082 425,484 C 428,475 432,464 437,451 442,438 457,394 482,320 507,246 521,205 523,196 L 593,393 830,1082 1020,1082 604,0 C 559,-115 518,-201 479,-258 440,-314 398,-356 351,-384 304,-411 250,-425 191,-425 Z"/>
-   <glyph unicode="x" horiz-adv-x="982" d="M 801,0 L 510,444 217,0 23,0 408,556 41,1082 240,1082 510,661 778,1082 979,1082 612,558 1002,0 Z"/>
-   <glyph unicode="w" horiz-adv-x="1481" d="M 1174,0 L 965,0 776,765 740,934 C 734,904 725,861 712,805 699,748 631,480 508,0 L 300,0 -3,1082 175,1082 358,347 C 363,331 377,265 401,149 L 418,223 644,1082 837,1082 1026,339 1072,149 1103,288 1308,1082 1484,1082 Z"/>
-   <glyph unicode="u" horiz-adv-x="858" d="M 314,1082 L 314,396 C 314,325 321,269 335,230 349,191 371,162 402,145 433,128 478,119 537,119 624,119 692,149 742,208 792,267 817,350 817,455 L 817,1082 997,1082 997,231 C 997,105 999,28 1003,0 L 833,0 C 832,3 832,12 831,27 830,42 830,59 829,78 828,97 826,132 825,185 L 822,185 C 781,110 733,58 679,27 624,-5 557,-20 476,-20 357,-20 271,10 216,69 161,128 133,225 133,361 L 133,1082 Z"/>
-   <glyph unicode="t" horiz-adv-x="531" d="M 554,8 C 495,-8 434,-16 372,-16 228,-16 156,66 156,229 L 156,951 31,951 31,1082 163,1082 216,1324 336,1324 336,1082 536,1082 536,951 336,951 336,268 C 336,216 345,180 362,159 379,138 408,127 450,127 474,127 509,132 554,141 Z"/>
-   <glyph unicode="s" horiz-adv-x="890" d="M 950,299 C 950,197 912,118 835,63 758,8 650,-20 511,-20 376,-20 273,2 200,47 127,91 79,160 57,254 L 216,285 C 231,227 263,185 311,158 359,131 426,117 511,117 602,117 669,131 712,159 754,187 775,229 775,285 775,328 760,362 731,389 702,416 654,438 589,455 L 460,489 C 357,516 283,542 240,568 196,593 162,624 137,661 112,698 100,743 100,796 100,895 135,970 206,1022 276,1073 378,1099 513,1099 632,1099 727,1078 798,1036 868,994 912,927 931,834 L 769,8 [...]
-   <glyph unicode="r" horiz-adv-x="515" d="M 142,0 L 142,830 C 142,906 140,990 136,1082 L 306,1082 C 311,959 314,886 314,861 L 318,861 C 347,954 380,1017 417,1051 454,1085 507,1102 575,1102 599,1102 623,1099 648,1092 L 648,927 C 624,934 592,937 552,937 477,937 420,905 381,841 342,776 322,684 322,564 L 322,0 Z"/>
-   <glyph unicode="p" horiz-adv-x="936" d="M 1053,546 C 1053,169 920,-20 655,-20 488,-20 376,43 319,168 L 314,168 C 317,163 318,106 318,-2 L 318,-425 138,-425 138,861 C 138,972 136,1046 132,1082 L 306,1082 C 307,1079 308,1070 309,1054 310,1037 312,1012 314,978 315,944 316,921 316,908 L 320,908 C 352,975 394,1024 447,1055 500,1086 569,1101 655,1101 788,1101 888,1056 954,967 1020,878 1053,737 1053,546 Z M 864,542 C 864,693 844,800 803,865 762,930 698,962 609,962 538,962 482,947 442,917 401 [...]
-   <glyph unicode="o" horiz-adv-x="968" d="M 1053,542 C 1053,353 1011,212 928,119 845,26 724,-20 565,-20 407,-20 288,28 207,125 126,221 86,360 86,542 86,915 248,1102 571,1102 736,1102 858,1057 936,966 1014,875 1053,733 1053,542 Z M 864,542 C 864,691 842,800 798,868 753,935 679,969 574,969 469,969 393,935 346,866 299,797 275,689 275,542 275,399 298,292 345,221 391,149 464,113 563,113 671,113 748,148 795,217 841,286 864,395 864,542 Z"/>
-   <glyph unicode="n" horiz-adv-x="874" d="M 825,0 L 825,686 C 825,757 818,813 804,852 790,891 768,920 737,937 706,954 661,963 602,963 515,963 447,933 397,874 347,815 322,732 322,627 L 322,0 142,0 142,851 C 142,977 140,1054 136,1082 L 306,1082 C 307,1079 307,1070 308,1055 309,1040 310,1024 311,1005 312,986 313,950 314,897 L 317,897 C 358,972 406,1025 461,1056 515,1087 582,1102 663,1102 782,1102 869,1073 924,1014 979,955 1006,857 1006,721 L 1006,0 Z"/>
-   <glyph unicode="m" horiz-adv-x="1435" d="M 768,0 L 768,686 C 768,791 754,863 725,903 696,943 645,963 570,963 493,963 433,934 388,875 343,816 321,734 321,627 L 321,0 142,0 142,851 C 142,977 140,1054 136,1082 L 306,1082 C 307,1079 307,1070 308,1055 309,1040 310,1024 311,1005 312,986 313,950 314,897 L 317,897 C 356,974 400,1027 450,1057 500,1087 561,1102 633,1102 715,1102 780,1086 828,1053 875,1020 908,968 927,897 L 930,897 C 967,970 1013,1022 1066,1054 1119,1086 1183,1102 1258,1102 1367 [...]
-   <glyph unicode="l" horiz-adv-x="173" d="M 138,0 L 138,1484 318,1484 318,0 Z"/>
-   <glyph unicode="j" horiz-adv-x="360" d="M 137,1312 L 137,1484 317,1484 317,1312 Z M 317,-134 C 317,-236 297,-310 257,-356 217,-402 157,-425 77,-425 26,-425 -17,-422 -50,-416 L -50,-277 12,-283 C 58,-283 90,-271 109,-247 128,-223 137,-176 137,-107 L 137,1082 317,1082 Z"/>
-   <glyph unicode="i" horiz-adv-x="173" d="M 137,1312 L 137,1484 317,1484 317,1312 Z M 137,0 L 137,1082 317,1082 317,0 Z"/>
-   <glyph unicode="h" horiz-adv-x="874" d="M 317,897 C 356,968 402,1020 457,1053 511,1086 580,1102 663,1102 780,1102 867,1073 923,1015 978,956 1006,858 1006,721 L 1006,0 825,0 825,686 C 825,762 818,819 804,856 790,893 767,920 735,937 703,954 659,963 602,963 517,963 450,934 399,875 348,816 322,737 322,638 L 322,0 142,0 142,1484 322,1484 322,1098 C 322,1057 321,1015 319,972 316,929 315,904 314,897 Z"/>
-   <glyph unicode="g" horiz-adv-x="921" d="M 548,-425 C 430,-425 336,-402 266,-356 196,-309 151,-243 131,-158 L 312,-132 C 324,-182 351,-221 392,-248 433,-275 486,-288 553,-288 732,-288 822,-183 822,27 L 822,201 820,201 C 786,132 739,80 680,45 621,10 551,-8 472,-8 339,-8 242,36 180,124 117,212 86,350 86,539 86,730 120,872 187,963 254,1054 355,1099 492,1099 569,1099 635,1082 692,1047 748,1012 791,962 822,897 L 824,897 C 824,917 825,952 828,1001 831,1050 833,1077 836,1082 L 1007,1082 C 100 [...]
-   <glyph unicode="f" horiz-adv-x="547" d="M 361,951 L 361,0 181,0 181,951 29,951 29,1082 181,1082 181,1204 C 181,1303 203,1374 246,1417 289,1460 356,1482 445,1482 495,1482 537,1478 572,1470 L 572,1333 C 542,1338 515,1341 492,1341 446,1341 413,1329 392,1306 371,1283 361,1240 361,1179 L 361,1082 572,1082 572,951 Z"/>
-   <glyph unicode="e" horiz-adv-x="952" d="M 276,503 C 276,379 302,283 353,216 404,149 479,115 578,115 656,115 719,131 766,162 813,193 844,233 861,281 L 1019,236 C 954,65 807,-20 578,-20 418,-20 296,28 213,123 129,218 87,360 87,548 87,727 129,864 213,959 296,1054 416,1102 571,1102 889,1102 1048,910 1048,527 L 1048,503 Z M 862,641 C 852,755 823,838 775,891 727,943 658,969 568,969 481,969 412,940 361,882 310,823 282,743 278,641 Z"/>
-   <glyph unicode="d" horiz-adv-x="921" d="M 821,174 C 788,105 744,55 689,25 634,-5 565,-20 484,-20 347,-20 247,26 183,118 118,210 86,349 86,536 86,913 219,1102 484,1102 566,1102 634,1087 689,1057 744,1027 788,979 821,914 L 823,914 821,1035 821,1484 1001,1484 1001,223 C 1001,110 1003,36 1007,0 L 835,0 C 833,11 831,35 829,74 826,113 825,146 825,174 Z M 275,542 C 275,391 295,282 335,217 375,152 440,119 530,119 632,119 706,154 752,225 798,296 821,405 821,554 821,697 798,802 752,869 706,936  [...]
-   <glyph unicode="c" horiz-adv-x="874" d="M 275,546 C 275,402 298,295 343,226 388,157 457,122 548,122 612,122 666,139 709,174 752,209 778,262 788,334 L 970,322 C 956,218 912,135 837,73 762,11 668,-20 553,-20 402,-20 286,28 207,124 127,219 87,359 87,542 87,724 127,863 207,959 287,1054 402,1102 551,1102 662,1102 754,1073 827,1016 900,959 945,880 964,779 L 779,765 C 770,825 746,873 708,908 670,943 616,961 546,961 451,961 382,929 339,866 296,803 275,696 275,546 Z"/>
-   <glyph unicode="b" horiz-adv-x="936" d="M 1053,546 C 1053,169 920,-20 655,-20 573,-20 505,-5 451,25 396,54 352,102 318,168 L 316,168 C 316,147 315,116 312,74 309,31 307,7 306,0 L 132,0 C 136,36 138,110 138,223 L 138,1484 318,1484 318,1061 C 318,1018 317,967 314,908 L 318,908 C 351,977 396,1027 451,1057 506,1087 574,1102 655,1102 792,1102 892,1056 957,964 1021,872 1053,733 1053,546 Z M 864,540 C 864,691 844,800 804,865 764,930 699,963 609,963 508,963 434,928 388,859 341,790 318,680 318 [...]
-   <glyph unicode="a" horiz-adv-x="1046" d="M 414,-20 C 305,-20 224,9 169,66 114,123 87,202 87,302 87,414 124,500 198,560 271,620 390,652 554,656 L 797,660 797,719 C 797,807 778,870 741,908 704,946 645,965 565,965 484,965 426,951 389,924 352,897 330,853 323,793 L 135,810 C 166,1005 310,1102 569,1102 705,1102 807,1071 876,1009 945,946 979,856 979,738 L 979,272 C 979,219 986,179 1000,152 1014,125 1041,111 1080,111 1097,111 1117,113 1139,118 L 1139,6 C 1094,-5 1047,-10 1000,-10 933,-10 885, [...]
-   <glyph unicode="_" horiz-adv-x="1201" d="M -31,-407 L -31,-277 1162,-277 1162,-407 Z"/>
-   <glyph unicode="U" horiz-adv-x="1170" d="M 731,-20 C 616,-20 515,1 429,43 343,85 276,146 229,226 182,306 158,401 158,512 L 158,1409 349,1409 349,528 C 349,399 382,302 447,235 512,168 607,135 730,135 857,135 955,170 1026,239 1096,308 1131,408 1131,541 L 1131,1409 1321,1409 1321,530 C 1321,416 1297,318 1249,235 1200,152 1132,89 1044,46 955,2 851,-20 731,-20 Z"/>
-   <glyph unicode="T" horiz-adv-x="1154" d="M 720,1253 L 720,0 530,0 530,1253 46,1253 46,1409 1204,1409 1204,1253 Z"/>
-   <glyph unicode="S" horiz-adv-x="1186" d="M 1272,389 C 1272,259 1221,158 1120,87 1018,16 875,-20 690,-20 347,-20 148,99 93,338 L 278,375 C 299,290 345,228 414,189 483,149 578,129 697,129 820,129 916,150 983,193 1050,235 1083,297 1083,379 1083,425 1073,462 1052,491 1031,520 1001,543 963,562 925,581 880,596 827,609 774,622 716,635 652,650 541,675 456,699 399,724 341,749 295,776 262,807 229,837 203,872 186,913 168,954 159,1000 159,1053 159,1174 205,1267 298,1332 390,1397 522,1430 694,1430 [...]
-   <glyph unicode="R" horiz-adv-x="1217" d="M 1164,0 L 798,585 359,585 359,0 168,0 168,1409 831,1409 C 990,1409 1112,1374 1199,1303 1285,1232 1328,1133 1328,1006 1328,901 1298,813 1237,742 1176,671 1091,626 984,607 L 1384,0 Z M 1136,1004 C 1136,1086 1108,1149 1053,1192 997,1235 917,1256 812,1256 L 359,1256 359,736 820,736 C 921,736 999,760 1054,807 1109,854 1136,919 1136,1004 Z"/>
-   <glyph unicode="P" horiz-adv-x="1092" d="M 1258,985 C 1258,852 1215,746 1128,667 1041,588 922,549 773,549 L 359,549 359,0 168,0 168,1409 761,1409 C 919,1409 1041,1372 1128,1298 1215,1224 1258,1120 1258,985 Z M 1066,983 C 1066,1165 957,1256 738,1256 L 359,1256 359,700 746,700 C 959,700 1066,794 1066,983 Z"/>
-   <glyph unicode="O" horiz-adv-x="1404" d="M 1495,711 C 1495,564 1467,435 1411,324 1354,213 1273,128 1168,69 1063,10 938,-20 795,-20 650,-20 526,9 421,68 316,127 235,212 180,323 125,434 97,563 97,711 97,936 159,1113 282,1240 405,1367 577,1430 797,1430 940,1430 1065,1402 1170,1345 1275,1288 1356,1205 1412,1096 1467,987 1495,859 1495,711 Z M 1300,711 C 1300,886 1256,1024 1169,1124 1081,1224 957,1274 797,1274 636,1274 511,1225 423,1126 335,1027 291,889 291,711 291,534 336,394 425,291 514,1 [...]
-   <glyph unicode="N" horiz-adv-x="1139" d="M 1082,0 L 328,1200 333,1103 338,936 338,0 168,0 168,1409 390,1409 1152,201 C 1144,332 1140,426 1140,485 L 1140,1409 1312,1409 1312,0 Z"/>
-   <glyph unicode="I" horiz-adv-x="188" d="M 189,0 L 189,1409 380,1409 380,0 Z"/>
-   <glyph unicode="H" horiz-adv-x="1139" d="M 1121,0 L 1121,653 359,653 359,0 168,0 168,1409 359,1409 359,813 1121,813 1121,1409 1312,1409 1312,0 Z"/>
-   <glyph unicode="F" horiz-adv-x="999" d="M 359,1253 L 359,729 1145,729 1145,571 359,571 359,0 168,0 168,1409 1169,1409 1169,1253 Z"/>
-   <glyph unicode="E" horiz-adv-x="1108" d="M 168,0 L 168,1409 1237,1409 1237,1253 359,1253 359,801 1177,801 1177,647 359,647 359,156 1278,156 1278,0 Z"/>
-   <glyph unicode="C" horiz-adv-x="1294" d="M 792,1274 C 636,1274 515,1224 428,1124 341,1023 298,886 298,711 298,538 343,400 434,295 524,190 646,137 800,137 997,137 1146,235 1245,430 L 1401,352 C 1343,231 1262,138 1157,75 1052,12 930,-20 791,-20 649,-20 526,10 423,69 319,128 240,212 186,322 131,431 104,561 104,711 104,936 165,1112 286,1239 407,1366 575,1430 790,1430 940,1430 1065,1401 1166,1342 1267,1283 1341,1196 1388,1081 L 1207,1021 C 1174,1103 1122,1166 1050,1209 977,1252 891,1274 79 [...]
-   <glyph unicode="A" horiz-adv-x="1357" d="M 1167,0 L 1006,412 364,412 202,0 4,0 579,1409 796,1409 1362,0 Z M 685,1265 L 676,1237 C 659,1182 635,1111 602,1024 L 422,561 949,561 768,1026 C 749,1072 731,1124 712,1182 Z"/>
-   <glyph unicode="&gt;" horiz-adv-x="999" d="M 101,154 L 101,307 959,674 101,1040 101,1194 1096,776 1096,571 Z"/>
-   <glyph unicode="&lt;" horiz-adv-x="999" d="M 101,571 L 101,776 1096,1194 1096,1040 238,674 1096,307 1096,154 Z"/>
-   <glyph unicode="/" horiz-adv-x="578" d="M 0,-20 L 411,1484 569,1484 162,-20 Z"/>
-   <glyph unicode="-" horiz-adv-x="500" d="M 91,464 L 91,624 591,624 591,464 Z"/>
-   <glyph unicode=")" horiz-adv-x="546" d="M 555,528 C 555,335 525,162 465,9 404,-144 311,-289 186,-424 L 12,-424 C 137,-284 229,-137 287,19 345,174 374,344 374,530 374,716 345,887 287,1042 228,1197 137,1345 12,1484 L 186,1484 C 312,1348 405,1203 465,1050 525,896 555,723 555,532 Z"/>
-   <glyph unicode="(" horiz-adv-x="546" d="M 127,532 C 127,725 157,898 218,1051 278,1204 371,1349 496,1484 L 670,1484 C 545,1345 454,1198 396,1042 337,886 308,715 308,530 308,345 337,175 395,20 452,-135 544,-283 670,-424 L 496,-424 C 370,-288 277,-143 217,11 157,164 127,337 127,528 Z"/>
-   <glyph unicode=" " horiz-adv-x="561"/>
-  </font>
- </defs>
- <defs class="TextShapeIndex">
-  <g ooo:slide="id1" ooo:id-list="id3 id4 id5 id6 id7 id8 id9 id10 id11 id12 id13 id14 id15 id16 id17 id18 id19 id20 id21 id22 id23 id24 id25 id26 id27 id28 id29 id30 id31 id32 id33 id34 id35 id36 id37 id38 id39 id40"/>
- </defs>
- <defs class="EmbeddedBulletChars">
-  <g id="bullet-char-template(57356)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 580,1141 L 1163,571 580,0 -4,571 580,1141 Z"/>
-  </g>
-  <g id="bullet-char-template(57354)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 8,1128 L 1137,1128 1137,0 8,0 8,1128 Z"/>
-  </g>
-  <g id="bullet-char-template(10146)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 174,0 L 602,739 174,1481 1456,739 174,0 Z M 1358,739 L 309,1346 659,739 1358,739 Z"/>
-  </g>
-  <g id="bullet-char-template(10132)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 2015,739 L 1276,0 717,0 1260,543 174,543 174,936 1260,936 717,1481 1274,1481 2015,739 Z"/>
-  </g>
-  <g id="bullet-char-template(10007)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 0,-2 C -7,14 -16,27 -25,37 L 356,567 C 262,823 215,952 215,954 215,979 228,992 255,992 264,992 276,990 289,987 310,991 331,999 354,1012 L 381,999 492,748 772,1049 836,1024 860,1049 C 881,1039 901,1025 922,1006 886,937 835,863 770,784 769,783 710,716 594,584 L 774,223 C 774,196 753,168 711,139 L 727,119 C 717,90 699,76 672,76 641,76 570,178 457,381 L 164,-76 C 142,-110 111,-127 72,-127 30,-127 9,-110 8,-76 1,-67 -2,-52 -2,-32 -2,-23 -1,-13 0,-2 Z"/>
-  </g>
-  <g id="bullet-char-template(10004)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 285,-33 C 182,-33 111,30 74,156 52,228 41,333 41,471 41,549 55,616 82,672 116,743 169,778 240,778 293,778 328,747 346,684 L 369,508 C 377,444 397,411 428,410 L 1163,1116 C 1174,1127 1196,1133 1229,1133 1271,1133 1292,1118 1292,1087 L 1292,965 C 1292,929 1282,901 1262,881 L 442,47 C 390,-6 338,-33 285,-33 Z"/>
-  </g>
-  <g id="bullet-char-template(9679)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 813,0 C 632,0 489,54 383,161 276,268 223,411 223,592 223,773 276,916 383,1023 489,1130 632,1184 813,1184 992,1184 1136,1130 1245,1023 1353,916 1407,772 1407,592 1407,412 1353,268 1245,161 1136,54 992,0 813,0 Z"/>
-  </g>
-  <g id="bullet-char-template(8226)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 346,457 C 273,457 209,483 155,535 101,586 74,649 74,723 74,796 101,859 155,911 209,963 273,989 346,989 419,989 480,963 531,910 582,859 608,796 608,723 608,648 583,586 532,535 482,483 420,457 346,457 Z"/>
-  </g>
-  <g id="bullet-char-template(8211)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M -4,459 L 1135,459 1135,606 -4,606 -4,459 Z"/>
-  </g>
-  <g id="bullet-char-template(61548)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 173,740 C 173,903 231,1043 346,1159 462,1274 601,1332 765,1332 928,1332 1067,1274 1183,1159 1299,1043 1357,903 1357,740 1357,577 1299,437 1183,322 1067,206 928,148 765,148 601,148 462,206 346,322 231,437 173,577 173,740 Z"/>
-  </g>
- </defs>
- <defs class="TextEmbeddedBitmaps"/>
- <g>
-  <g id="id2" class="Master_Slide">
-   <g id="bg-id2" class="Background"/>
-   <g id="bo-id2" class="BackgroundObjects"/>
-  </g>
- </g>
- <g class="SlideGroup">
-  <g>
-   <g id="container-id1">
-    <g id="id1" class="Slide" clip-path="url(#presentation_clip_path)">
-     <g class="Page">
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id3">
-        <rect class="BoundingBox" stroke="none" fill="none" x="4554" y="7857" width="2925" height="1274"/>
-        <path fill="rgb(253,233,169)" stroke="none" d="M 6015,7858 C 6843,7858 7476,8133 7476,8493 7476,8853 6843,9128 6015,9128 5187,9128 4555,8853 4555,8493 4555,8133 5187,7858 6015,7858 Z M 4555,7858 L 4555,7858 Z M 7477,9129 L 7477,9129 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 6015,7858 C 6843,7858 7476,8133 7476,8493 7476,8853 6843,9128 6015,9128 5187,9128 4555,8853 4555,8493 4555,8133 5187,7858 6015,7858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 4555,7858 L 4555,7858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 7477,9129 L 7477,9129 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="5311" y="8714"><tspan fill="rgb(0,0,0)" stroke="none">Error</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id4">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1887" y="15857" width="3433" height="1655"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 3602,15858 C 4574,15858 5317,16215 5317,16683 5317,17151 4574,17509 3602,17509 2630,17509 1888,17151 1888,16683 1888,16215 2630,15858 3602,15858 Z M 1888,15858 L 1888,15858 Z M 5318,17510 L 5318,17510 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 3602,15858 C 4574,15858 5317,16215 5317,16683 5317,17151 4574,17509 3602,17509 2630,17509 1888,17151 1888,16683 1888,16215 2630,15858 3602,15858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 1888,15858 L 1888,15858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 5318,17510 L 5318,17510 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="2442" y="16904"><tspan fill="rgb(0,0,0)" stroke="none">Pending</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id5">
-        <rect class="BoundingBox" stroke="none" fill="none" x="8999" y="15857" width="3433" height="1655"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 10714,15858 C 11686,15858 12429,16215 12429,16683 12429,17151 11686,17509 10714,17509 9742,17509 9000,17151 9000,16683 9000,16215 9742,15858 10714,15858 Z M 9000,15858 L 9000,15858 Z M 12430,17510 L 12430,17510 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 10714,15858 C 11686,15858 12429,16215 12429,16683 12429,17151 11686,17509 10714,17509 9742,17509 9000,17151 9000,16683 9000,16215 9742,15858 10714,15858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 9000,15858 L 9000,15858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 12430,17510 L 12430,17510 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="9537" y="16904"><tspan fill="rgb(0,0,0)" stroke="none">Running</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id6">
-        <rect class="BoundingBox" stroke="none" fill="none" x="15984" y="15985" width="3433" height="1528"/>
-        <path fill="rgb(253,233,169)" stroke="none" d="M 17699,15986 C 18671,15986 19414,16315 19414,16747 19414,17179 18671,17509 17699,17509 16727,17509 15985,17179 15985,16747 15985,16315 16727,15986 17699,15986 Z M 15985,15986 L 15985,15986 Z M 19415,17511 L 19415,17511 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 17699,15986 C 18671,15986 19414,16315 19414,16747 19414,17179 18671,17509 17699,17509 16727,17509 15985,17179 15985,16747 15985,16315 16727,15986 17699,15986 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 15985,15986 L 15985,15986 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 19415,17511 L 19415,17511 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="16433" y="16969"><tspan fill="rgb(0,0,0)" stroke="none">Crashing</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id7">
-        <rect class="BoundingBox" stroke="none" fill="none" x="13318" y="7857" width="2797" height="1273"/>
-        <path fill="rgb(253,233,169)" stroke="none" d="M 14716,9128 L 13319,9128 13319,7858 16113,7858 16113,9128 14716,9128 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 14716,9128 L 13319,9128 13319,7858 16113,7858 16113,9128 14716,9128 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="13855" y="8714"><tspan fill="rgb(0,0,0)" stroke="none">Failed</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id8">
-        <rect class="BoundingBox" stroke="none" fill="none" x="8872" y="21191" width="3686" height="1654"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 10715,22843 L 8873,22843 8873,21192 12556,21192 12556,22843 10715,22843 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 10715,22843 L 8873,22843 8873,21192 12556,21192 12556,22843 10715,22843 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="9184" y="22238"><tspan fill="rgb(0,0,0)" stroke="none">Completed</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id9">
-        <rect class="BoundingBox" stroke="none" fill="none" x="7603" y="5189" width="6100" height="639"/>
-        <path fill="rgb(255,255,255)" fill-opacity="0.988" stroke="rgb(255,255,255)" stroke-opacity="0.988" d="M 8831,5190 L 12473,5190 13701,5508 12473,5826 8831,5826 7604,5508 8831,5190 8831,5190 Z M 7604,5190 L 7604,5190 Z M 13701,5826 L 13701,5826 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 8831,5190 L 12473,5190 13701,5508 12473,5826 8831,5826 7604,5508 8831,5190 8831,5190 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 7604,5190 L 7604,5190 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 13701,5826 L 13701,5826 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="8613" y="5654"><tspan fill="rgb(0,0,0)" stroke="none">Create job from documen</tspan><tspan font-size="423px" fill="rgb(0,0,0)" stroke="none">t</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id10">
-        <rect class="BoundingBox" stroke="none" fill="none" x="8110" y="12556" width="5211" height="512"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 9159,12557 L 12270,12557 13319,12811 12270,13066 9159,13066 8111,12811 9159,12557 9159,12557 Z M 8111,12557 L 8111,12557 Z M 13319,13066 L 13319,13066 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 9159,12557 L 12270,12557 13319,12811 12270,13066 9159,13066 8111,12811 9159,12557 9159,12557 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 8111,12557 L 8111,12557 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 13319,13066 L 13319,13066 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="9332" y="12935"><tspan fill="rgb(0,0,0)" stroke="none">Schedule new job</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id11">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10502" y="2396" width="301" height="764"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,2397 L 10652,2729"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10652,3159 L 10802,2709 10502,2709 10652,3159 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id12">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10651" y="5825" width="4075" height="2034"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,5826 C 10652,7350 13936,6513 14600,7469"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 14716,7858 L 14724,7384 14438,7474 14716,7858 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id13">
-        <rect class="BoundingBox" stroke="none" fill="none" x="6017" y="5825" width="4637" height="2034"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,5826 C 10652,7350 6890,6510 6145,7475"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 6017,7858 L 6307,7483 6024,7384 6017,7858 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id14">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10655" y="9127" width="8135" height="3431"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 18788,9128 C 18788,11700 11591,10164 10788,12166"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10716,12557 L 10949,12144 10655,12087 10716,12557 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id15">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10565" y="5825" width="302" height="6733"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,5826 C 10652,10875 10712,7727 10716,12076"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10716,12557 L 10866,12107 10566,12107 10716,12557 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id16">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10566" y="13064" width="301" height="2796"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10716,13065 L 10716,15429"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10716,15859 L 10866,15409 10566,15409 10716,15859 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id17">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10565" y="17509" width="302" height="3685"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10716,17510 C 10716,20271 10715,18639 10715,20734"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10715,21193 L 10865,20743 10565,20743 10715,21193 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id18">
-        <rect class="BoundingBox" stroke="none" fill="none" x="4816" y="15497" width="4689" height="605"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 9503,16100 C 9503,15343 6051,15358 5072,15824"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 4816,16100 L 5236,15880 5020,15672 4816,16100 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id19">
-        <rect class="BoundingBox" stroke="none" fill="none" x="4815" y="17268" width="4689" height="670"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 4816,17269 C 4816,18109 8314,18093 9266,17565"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 9503,17269 L 9100,17519 9331,17711 9503,17269 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id20">
-        <rect class="BoundingBox" stroke="none" fill="none" x="11927" y="15486" width="4562" height="724"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 11928,16100 C 11928,15337 15364,15309 16271,15893"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 16488,16209 L 16348,15756 16104,15930 16488,16209 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id21">
-        <rect class="BoundingBox" stroke="none" fill="none" x="11928" y="17269" width="4562" height="571"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 16488,17287 C 16488,17978 13165,17970 12192,17538"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 11928,17269 L 12145,17691 12355,17477 11928,17269 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id22">
-        <rect class="BoundingBox" stroke="none" fill="none" x="8492" y="3158" width="4322" height="1274"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 10652,3159 C 11876,3159 12811,3434 12811,3794 12811,4154 11876,4429 10652,4429 9428,4429 8493,4154 8493,3794 8493,3434 9428,3159 10652,3159 Z M 8493,3159 L 8493,3159 Z M 12812,4430 L 12812,4430 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 10652,3159 C 11876,3159 12811,3434 12811,3794 12811,4154 11876,4429 10652,4429 9428,4429 8493,4154 8493,3794 8493,3434 9428,3159 10652,3159 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 8493,3159 L 8493,3159 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 12812,4430 L 12812,4430 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="9263" y="4015"><tspan fill="rgb(0,0,0)" stroke="none">Initializing</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id23">
-        <rect class="BoundingBox" stroke="none" fill="none" x="3573" y="13064" width="7145" height="2796"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10716,13065 C 10716,15160 4548,13939 3700,15474"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 3604,15859 L 3864,15462 3574,15386 3604,15859 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id24">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10502" y="4428" width="301" height="764"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,4429 L 10652,4761"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10652,5191 L 10802,4741 10502,4741 10652,5191 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id25">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="24871" width="638" height="352"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 1316,24872 L 1634,24872 1634,25221 999,25221 999,24872 1316,24872 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1316,24872 L 1634,24872 1634,25221 999,25221 999,24872 1316,24872 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id26">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="25382" width="639" height="385"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 1316,25383 C 1496,25383 1634,25465 1634,25573 1634,25681 1496,25764 1316,25764 1136,25764 999,25681 999,25573 999,25465 1136,25383 1316,25383 Z M 999,25383 L 999,25383 Z M 1635,25765 L 1635,25765 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1316,25383 C 1496,25383 1634,25465 1634,25573 1634,25681 1496,25764 1316,25764 1136,25764 999,25681 999,25573 999,25465 1136,25383 1316,25383 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 999,25383 L 999,25383 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1635,25765 L 1635,25765 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id27">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="24239" width="638" height="384"/>
-        <path fill="rgb(253,233,169)" stroke="none" d="M 1317,24621 L 999,24621 999,24240 1634,24240 1634,24621 1317,24621 Z"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 1317,24621 L 999,24621 999,24240 1634,24240 1634,24621 1317,24621 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id28">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="23731" width="638" height="384"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 1317,24113 L 999,24113 999,23732 1634,23732 1634,24113 1317,24113 Z"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 1317,24113 L 999,24113 999,23732 1634,23732 1634,24113 1317,24113 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TableShape">
-       <g>
-        <rect class="BoundingBox" stroke="none" fill="none" x="22055" y="25834" width="4448" height="619"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 22056,25835 L 26501,25835 26501,26451 22056,26451 22056,25835 Z"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 22056,25835 L 22056,26451"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 22056,26451 L 26501,26451"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 26501,25835 L 26501,26451"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 22056,25835 L 26501,25835"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id29">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1635" y="24749" width="2510" height="751"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1885" y="25128"><tspan fill="rgb(0,0,0)" stroke="none">Terminal state</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id30">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1683" y="25257" width="4148" height="608"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1933" y="25666"><tspan fill="rgb(0,0,0)" stroke="none">Non-terminal (retryin</tspan><tspan font-size="318px" fill="rgb(0,0,0)" stroke="none">g) state</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id31">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1635" y="23606" width="2541" height="636"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1885" y="23985"><tspan fill="rgb(0,0,0)" stroke="none">Healthy state</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id32">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1635" y="24185" width="2476" height="565"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1885" y="24564"><tspan fill="rgb(0,0,0)" stroke="none">Unhealthy state</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id33">
-        <rect class="BoundingBox" stroke="none" fill="none" x="6016" y="9127" width="4794" height="3431"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 6017,9128 C 6017,11700 10163,10178 10665,12135"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10715,12557 L 10809,12092 10511,12129 10715,12557 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id34">
-        <rect class="BoundingBox" stroke="none" fill="none" x="4053" y="7390" width="994" height="2170"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 4983,8942 C 4983,9974 4054,9631 4054,8494 4054,7358 4686,7159 4907,7626"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 4983,8044 L 5045,7574 4751,7631 4983,8044 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id35">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="25890" width="639" height="385"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 1127,25891 L 1506,25891 1635,26082 1506,26273 1127,26273 999,26082 1127,25891 1127,25891 Z M 999,25891 L 999,25891 Z M 1635,26273 L 1635,26273 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1127,25891 L 1506,25891 1635,26082 1506,26273 1127,26273 999,26082 1127,25891 1127,25891 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 999,25891 L 999,25891 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1635,26273 L 1635,26273 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id36">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1649" y="25836" width="4178" height="565"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1899" y="26215"><tspan fill="rgb(0,0,0)" stroke="none">Internal API (not a state)</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id37">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1683" y="26273" width="3665" height="565"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1933" y="26652"><tspan fill="rgb(0,0,0)" stroke="none">External API (not a state)</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id38">
-        <rect class="BoundingBox" stroke="none" fill="none" x="17001" y="7857" width="3576" height="1274"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 17002,7858 L 20575,7858 19940,9129 17636,9129 17002,7858 Z M 17002,7858 L 17002,7858 Z M 20575,9129 L 20575,9129 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 17002,7858 L 20575,7858 19940,9129 17636,9129 17002,7858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 17002,7858 L 17002,7858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 20575,9129 L 20575,9129 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="18119" y="8421"><tspan fill="rgb(0,0,0)" stroke="none">POST to</tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="17980" y="8811"><tspan fill="rgb(0,0,0)" stroke="none">/_replicate</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id39">
-        <rect class="BoundingBox" stroke="none" fill="none" x="7984" y="1126" width="5338" height="1274"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 7985,1127 L 13320,1127 12392,2398 8913,2398 7985,1127 Z M 7985,1127 L 7985,1127 Z M 13320,2398 L 13320,2398 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 7985,1127 L 13320,1127 12392,2398 8913,2398 7985,1127 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 7985,1127 L 7985,1127 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 13320,2398 L 13320,2398 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="9298" y="1885"><tspan fill="rgb(0,0,0)" stroke="none">_replicator/&lt;doc&gt;</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id40">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="26494" width="639" height="290"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 999,26495 L 1635,26495 1476,26782 1158,26782 999,26495 Z M 999,26495 L 999,26495 Z M 1635,26782 L 1635,26782 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 999,26495 L 1635,26495 1476,26782 1158,26782 999,26495 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 999,26495 L 999,26495 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1635,26782 L 1635,26782 Z"/>
-       </g>
-      </g>
-     </g>
-    </g>
-   </g>
-  </g>
- </g>
-</svg>
\ No newline at end of file
diff --git a/images/rev-tree1.png b/images/rev-tree1.png
deleted file mode 100644
index 467f69e..0000000
Binary files a/images/rev-tree1.png and /dev/null differ
diff --git a/images/rev-tree2.png b/images/rev-tree2.png
deleted file mode 100644
index e77ca3b..0000000
Binary files a/images/rev-tree2.png and /dev/null differ
diff --git a/images/rev-tree3.png b/images/rev-tree3.png
deleted file mode 100644
index fa97c7d..0000000
Binary files a/images/rev-tree3.png and /dev/null differ
diff --git a/images/views-intro-01.png b/images/views-intro-01.png
deleted file mode 100644
index b102d5e..0000000
Binary files a/images/views-intro-01.png and /dev/null differ
diff --git a/images/views-intro-02.png b/images/views-intro-02.png
deleted file mode 100644
index 4e9f3dc..0000000
Binary files a/images/views-intro-02.png and /dev/null differ
diff --git a/images/views-intro-03.png b/images/views-intro-03.png
deleted file mode 100644
index 83929ee..0000000
Binary files a/images/views-intro-03.png and /dev/null differ
diff --git a/images/views-intro-04.png b/images/views-intro-04.png
deleted file mode 100644
index 51e3de8..0000000
Binary files a/images/views-intro-04.png and /dev/null differ
diff --git a/make.bat b/make.bat
deleted file mode 100644
index 77f6d98..0000000
--- a/make.bat
+++ /dev/null
@@ -1,253 +0,0 @@
-@ECHO OFF
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
-)
-set BUILDDIR=build
-set SOURCE=src/
-set PAPERSIZE=-D latex_elements.papersize=a4
-set SPHINXFLAGS=-a -n -A local=1 %PAPERSIZE%
-set SPHINXOPTS=%SPHINXFLAGS% %SOURCE%
-set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS%
-set I18NSPHINXOPTS=%SPHINXOPTS%
-if NOT "%PAPER%" == "" (
-	set ALLSPHINXOPTS=-D latex_elements.papersize=%PAPER% %ALLSPHINXOPTS%
-	set I18NSPHINXOPTS=-D latex_elements.papersize=%PAPER% %I18NSPHINXOPTS%
-)
-
-if "%1" == "" goto help
-
-if "%1" == "help" (
-	:help
-	echo.Please use `make ^<target^>` where ^<target^> is one of
-	echo.  html       to make standalone HTML files
-	echo.  dirhtml    to make HTML files named index.html in directories
-	echo.  singlehtml to make a single large HTML file
-	echo.  pickle     to make pickle files
-	echo.  json       to make JSON files
-	echo.  htmlhelp   to make HTML files and a HTML help project
-	echo.  qthelp     to make HTML files and a qthelp project
-	echo.  devhelp    to make HTML files and a Devhelp project
-	echo.  epub       to make an epub
-	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
-	echo.  text       to make text files
-	echo.  man        to make manual pages
-	echo.  texinfo    to make Texinfo files
-	echo.  gettext    to make PO message catalogs
-	echo.  changes    to make an overview over all changed/added/deprecated items
-	echo.  xml        to make Docutils-native XML files
-	echo.  pseudoxml  to make pseudoxml-XML files for display purposes
-	echo.  linkcheck  to check all external links for integrity
-	echo.  doctest    to run all doctests embedded in the documentation if enabled
-	echo.  check      to run the Python based linter
-	goto end
-)
-
-if "%1" == "clean" (
-	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
-	del /q /s %BUILDDIR%\*
-	goto end
-)
-
-
-%SPHINXBUILD% 1> nul 2> nul
-if errorlevel 9009 (
-	echo.
-	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
-	echo.installed, then set the SPHINXBUILD environment variable to point
-	echo.to the full path of the 'sphinx-build' executable. Alternatively you
-	echo.may add the Sphinx directory to PATH.
-	echo.
-	echo.If you don't have Sphinx installed, grab it from
-	echo.http://sphinx-doc.org/
-	exit /b 1
-)
-
-if "%1" == "html" (
-	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
-	goto end
-)
-
-if "%1" == "dirhtml" (
-	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
-	goto end
-)
-
-if "%1" == "singlehtml" (
-	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
-	goto end
-)
-
-if "%1" == "pickle" (
-	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the pickle files.
-	goto end
-)
-
-if "%1" == "json" (
-	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the JSON files.
-	goto end
-)
-
-if "%1" == "htmlhelp" (
-	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run HTML Help Workshop with the ^
-.hhp project file in %BUILDDIR%/htmlhelp.
-	goto end
-)
-
-if "%1" == "qthelp" (
-	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run "qcollectiongenerator" with the ^
-.qhcp project file in %BUILDDIR%/qthelp, like this:
-	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Couch.qhcp
-	echo.To view the help file:
-	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Couch.ghc
-	goto end
-)
-
-if "%1" == "devhelp" (
-	%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished.
-	goto end
-)
-
-if "%1" == "epub" (
-	%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The epub file is in %BUILDDIR%/epub.
-	goto end
-)
-
-if "%1" == "latex" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "latexpdf" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	cd %BUILDDIR%/latex
-	make all-pdf
-	cd %BUILDDIR%/..
-	echo.
-	echo.Build finished; the PDF files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "latexpdfja" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	cd %BUILDDIR%/latex
-	make all-pdf-ja
-	cd %BUILDDIR%/..
-	echo.
-	echo.Build finished; the PDF files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "text" (
-	%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The text files are in %BUILDDIR%/text.
-	goto end
-)
-
-if "%1" == "man" (
-	%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The manual pages are in %BUILDDIR%/man.
-	goto end
-)
-
-if "%1" == "texinfo" (
-	%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
-	goto end
-)
-
-if "%1" == "gettext" (
-	%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
-	goto end
-)
-
-if "%1" == "changes" (
-	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.The overview file is in %BUILDDIR%/changes.
-	goto end
-)
-
-if "%1" == "linkcheck" (
-	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Link check complete; look for any errors in the above output ^
-or in %BUILDDIR%/linkcheck/output.txt.
-	goto end
-)
-
-if "%1" == "doctest" (
-	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Testing of doctests in the sources finished, look at the ^
-results in %BUILDDIR%/doctest/output.txt.
-	goto end
-)
-
-if "%1" == "xml" (
-	%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The XML files are in %BUILDDIR%/xml.
-	goto end
-)
-
-if "%1" == "pseudoxml" (
-	%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
-	goto end
-)
-
-if "%1" == "check" (
-	python ext\linter.py %SOURCE%
-	if errorlevel 1 exit /b 1
-	goto end
-)
-
-:end
diff --git a/rebar.config b/rebar.config
deleted file mode 100644
index d05b1d5..0000000
--- a/rebar.config
+++ /dev/null
@@ -1,16 +0,0 @@
-% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*-
-% ex: ts=4 sw=4 ft=erlang et
-% Licensed under the Apache License, Version 2.0 (the "License"); you may not
-% use this file except in compliance with the License. You may obtain a copy of
-% the License at
-%
-%   http://www.apache.org/licenses/LICENSE-2.0
-%
-% Unless required by applicable law or agreed to in writing, software
-% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-% License for the specific language governing permissions and limitations under
-% the License.
-
-{pre_hooks,  [ {compile, "make"}]}.
-{post_hooks, [ {clean,   "make clean"}]}.
diff --git a/requirements.txt b/requirements.txt
deleted file mode 100644
index e5d77a9..0000000
--- a/requirements.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-Sphinx==1.7.4
-sphinx-rtd-theme==0.4.0
diff --git a/rfcs/001-fdb-revision-metadata-model.md b/rfcs/001-fdb-revision-metadata-model.md
deleted file mode 100644
index b9e4071..0000000
--- a/rfcs/001-fdb-revision-metadata-model.md
+++ /dev/null
@@ -1,215 +0,0 @@
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-This is a proposal for the storage of document revision history metadata as a
-set of KVs in FoundationDB.
-
-## Abstract
-
-This design stores each edit branch as its own KV, and all of the edit branches
-are stored separately from the actual document data. Document reads can avoid
-retrieving this information, while writes can avoid retrieving the document
-body.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
-"SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this document are to be
-interpreted as described in [RFC
-2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-[TIP]:  # ( Provide a list of any unique terms or acronyms, and their
-definitions here.)
-
-`Versionstamp`: a 12 byte, unique, monotonically (but not sequentially)
-increasing value for each committed transaction. The first 8 bytes are the
-committed version of the database. The next 2 bytes are monotonic in the
-serialization order for transactions. The final 2 bytes are user-defined and can
-be used to create multiple versionstamps in a single transaction.
-
-`Incarnation`: a single byte, monotonically increasing value specified for each
-CouchDB database. The `Incarnation` starts at `\x00` when a database is created
-and is incremented by one whenever a database is relocated to a different
-FoundationDB cluster.
-
-`Sequence`: a 13 byte value formed by combining the current `Incarnation` of
-the database and the `Versionstamp` of the transaction. Sequences are
-monotonically increasing even when a database is relocated across FoundationDB
-clusters.
-
----
-
-# Detailed Description
-
-The size limits in FoundationDB preclude storing the entire revision tree as a
-single value; in pathological situations the tree could exceed 100KB. Rather, we
-propose to store each edit *branch* as a separate KV. We have two different
-value formats, one that is used for the "winning" edit branch and one used for
-any additional edit branches of the document. The winning edit branch includes
-the following information:
-
-`("revisions", DocID, NotDeleted, RevPosition, RevHash) = (RevFormat, Sequence,
-BranchCount, [ParentRev, GrandparentRev, ...])`
-
-while the other edit branches omit the `Sequence` and `BranchCount`:
-
-`("revisions", DocID, NotDeleted, RevPosition, RevHash) = (RevFormat,
-[ParentRev, GrandparentRev, ...])`
-
-The individual elements of the key and value are defined as follows:
-- `DocID`: the document ID
-- `NotDeleted`: `\x26` if the leaf of the edit branch is deleted, `\x27`
-  otherwise (following tuple encoding for booleans)
-- `RevPosition`: positive integer encoded using standard tuple layer encoding
-  (signed, variable-length, order-preserving)
-- `RevHash`: 16 bytes uniquely identifying this revision
-- `RevFormat`: enum for the revision encoding being used to enable schema
-  evolution
-- `Sequence`: the sequence of the last transaction that modified the document
-  (NB: not necessarily the last edit to *this* branch).
-- `BranchCount`: the number of edit branches associated with this document.
-- `[ParentRev, GrandparentRev, ...]`: 16 byte identifiers of ancestors, up to
-  1000 by default
-
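-As an illustration only, here is a minimal sketch of how the winning-branch KV
-could be packed using the FoundationDB tuple layer via the Python bindings; the
-subspace and function names are hypothetical, not part of this proposal:
-
-```
-import fdb
-
-fdb.api_version(610)
-
-revs = fdb.Subspace(('revisions',))
-
-def winner_kv(doc_id, not_deleted, rev_pos, rev_hash,
-              rev_format, sequence, branch_count, ancestors):
-    # not_deleted is a boolean; the tuple layer packs it as \x26 or \x27.
-    key = revs.pack((doc_id, not_deleted, rev_pos, rev_hash))
-    # Only the winning branch carries Sequence and BranchCount.
-    value = fdb.tuple.pack((rev_format, sequence, branch_count) + tuple(ancestors))
-    return key, value
-```
-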
-## Limits
-
-In order to stay compatible with FoundationDB size limits we need to prevent
-administrators from increasing `_revs_limit` beyond what we can fit into a
-single value. We suggest **4000** as a maximum.
-
-## Update Path
-
-Each edit on a document will read and modify the so-called "winning" edit
-branch, a property that is essential for FoundationDB to correctly identify
-concurrent modifications to a given document as conflicting. We enforce this
-specifically by storing the `Sequence` only on the winning branch. Other
-branches set this to null.
-
-If a writer comes in and tries to extend a losing edit branch, it will find the
-first element of the value to be null and will do an additional edit branch read
-to retrieve the winning branch. It can then compare both branches to see which
-one will be the winner following that edit, and can assign the extra metadata to
-that branch accordingly.
-
-A writer attempting to delete the winning branch (i.e., setting `NotDeleted` to
-`\x26`) will need to read two contiguous KVs, the one for the winner and the one
-right before it. If the branch before it will be the winner following the
-deletion then we move the storage of the extra metadata to it accordingly. If
-the tombstoned branch remains the winner for this document then we only update
-that branch.
-
-A writer extending the winning branch with an updated document (the common case)
-will proceed reading just the one branch.
-
-A writer attempting to insert a new document without any base revision will need
-to execute a `get_range_startswith` operation with `limit=1` and `reverse=true`
-on the key range prefixed by ("revisions", DocID). A null result from that range
-read would be the signal to go ahead with the write. If another transaction
-races our writer and inserts the document first FoundationDB will detect the
-intersection between the write set of that transaction and the read range here
-and correctly cause our writer to fail.
-
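-Continuing the sketch above, that guarding range read might look like this in
-the Python bindings (names hypothetical):
-
-```
-@fdb.transactional
-def first_edit_allowed(tr, doc_id):
-    # Any racing insert of this DocID intersects this read range, so
-    # FoundationDB will fail one of the two transactions.
-    prefix = revs.pack((doc_id,))
-    hits = list(tr.get_range_startswith(prefix, limit=1, reverse=True))
-    return len(hits) == 0
-```
-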
-New edit branches can only be created on that first edit to a document or during
-`new_edits=false`, so most interactive writers will just carry over the
-`BranchCount` with each edit they make. A writer with `new_edits=false` will
-retrieve the full range of KV pairs and set the `BranchCount` accordingly.
-Tracking the `BranchCount` here enables us to push that information into the
-`_changes` feed index, where it can be used to optimize the popular
-`style=all_docs` queries in the common case of a single edit branch per
-document.
-
-Summarizing the performance profile:
-- Extending a losing branch: 2 KVs, 2 roundtrips
-- Deleting the winning branch: 2 KVs, 1 roundtrip
-- Extending the winning branch: 1 KV, 1 roundtrip
-- `new_edits=false` update: `<N>` KVs, 1 roundtrip
-
-# Advantages
-
-We can read a document revision without retrieving the revision tree, which in
-the case of frequently-edited documents may be larger than the doc itself.
-
-We ensure that an interactive document update against the winning branch only
-needs to read the edit branch KV against which the update is being applied, and
-it can read that branch immediately knowing only the content of the edit that is
-being attempted (i.e., it does not need to read the current version of the
-document itself). The less common scenario of updating a losing branch is only
-slightly less efficient, requiring two roundtrips.
-
-Interactively updating a document with a large number of edit branches is
-therefore dramatically cheaper, as no more than two edit branches are read or
-modified regardless of the number of branches that exist, and no tree merge
-logic is required.
-
-Including `NotDeleted` in the key ensures that we can efficiently handle the
-case where a new document is uploaded with the same ID as one whose edit
-branches have all been deleted; i.e., we can construct a key selector that
-immediately tells us there are no `deleted=false` edit branches.
-
-The `RevFormat` enum gives us the ability to evolve revision history storage
-over time, and to support alternative conflict resolution policies like Last
-Writer Wins.
-
-Access to the indexed `Sequence` ensures we can clear the old entry in the
-`changes` subspace during an edit. The `set_versionstamped_value` API is used to
-store this value automatically.
-
-The key structure above naturally sorts so that the "winning" revision is the
-last one in the list, which we leverage when deleting the winning edit branch
-(and thus promoting the one next in line) and when extending a conflict branch
-(to coordinate the update to the `Sequence`). This is also a small optimization for
-reads with `?revs=true` or `?revs_info=true`, where we want the details of the
-winning edit branch but don't actually know the `RevPosition` and `RevHash` of
-that branch.
-
-# Disadvantages
-
-Historical revision identifiers shared by multiple edit branches are duplicated.
-
-# Key Changes
-
-Administrators cannot set `_revs_limit` larger than 4,000 (previously
-unlimited?). Default stays the same at 1,000.
-
-The intention with this data model is that an interactive edit that supplies a
-revision identifier of a deleted leaf will always fail with a conflict. This is
-a subtle departure from CouchDB 2.3 behavior, where an attempt to extend a
-deleted edit branch can succeed if some other `deleted=false` edit branch
-exists. This is an undocumented and seemingly unintentional behavior. If we need
-to match that behavior it will require reading 3 KVs in 2 roundtrips for *every*
-edit that we reject with a conflict.
-
-## Modules affected
-
-TBD depending on exact code layout going forward, but the `couch_key_tree`
-module contains the current revision tree implementation.
-
-## HTTP API additions
-
-None.
-
-## HTTP API deprecations
-
-None.
-
-## Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list
-discussion](https://lists.apache.org/thread.html/853b86f3a83108745af510959bb381370a99988af4528617bdbe1be4@%3Cdev.couchdb.apache.org%3E)
-
-[apache/couchdb#1957](https://github.com/apache/couchdb/issues/1957) (originally
-submitted RFC as an issue in the main project repo instead of a PR here).
-
-# Acknowledgements
-
-Thanks to @iilyak, @davisp, @janl, @garrensmith and @rnewson for comments on the
-mailing list discussion.
diff --git a/rfcs/002-shard-splitting.md b/rfcs/002-shard-splitting.md
deleted file mode 100644
index 54b0727..0000000
--- a/rfcs/002-shard-splitting.md
+++ /dev/null
@@ -1,373 +0,0 @@
----
-name: Shard Splitting
-about: Introduce Shard Splitting to CouchDB
-title: 'Shard Splitting'
-labels: rfc, discussion
-assignees: '@nickva'
-
----
-
-# Introduction
-
-This RFC proposes adding the capability to split shards to CouchDB. The API and
-the internals will also allow for other operations on shards in the future such
-as merging or rebalancing.
-
-## Abstract
-
-Since CouchDB 2.0 clustered databases have had a fixed Q value defined at
-creation. This often requires users to predict database usage ahead of time,
-which can be hard to do. Too low a value might result in large shards, slower
-performance, and more disk space needed for compactions.
-
-It would be nice to start with a low Q initially, for example Q=1, and to
-split shards that grow too big as usage grows. Especially with partitioned
-queries available, there will be a higher chance of unevenly sized shards, so
-it would be beneficial to split the larger ones to even out the size
-distribution across the cluster.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-*resharding* : Manipulating CouchDB shards. Could be splitting, merging,
-rebalancing or other operations. This will be used as the top-level API
-endpoint name with the idea that in the future different types of shard
-manipulation jobs would be added.
-
----
-
-# Detailed Description
-
-From the user's perspective there would be a new HTTP API endpoint -
-`_reshard/*`. A POST request to `_reshard/jobs/` would start resharding jobs.
-Initially there will be just one job type, `"split"`, but in the future other
-types could be added.
-
-Users would then be able to monitor the state of these jobs to inspect their
-progress and see when they complete or fail.
-
-The API should be designed to be consistent with `_scheduler/jobs` as much as
-possible, since that is another recent CouchDB API that exposes an internal
-jobs list.
-
-Most of the code implementing this would live in the mem3 application with some
-lower level components in the *couch* application. There will be a new child in
-the *mem3_sup* supervisor responsible for resharding called *mem3_reshard_sup*.
-It will have a *mem3_reshard* manager process which should have an Erlang API
-for starting jobs, stopping jobs, removing them, and inspecting their state.
-Individual jobs would be instances of a gen_server defined in the
-*mem3_reshard_job* module. There will be a simple-one-for-one supervisor under
-*mem3_reshard_sup*, named *mem3_reshard_job_sup*, to keep track of
-*mem3_reshard_job* children.
-
-An individual shard splitting job will follow roughly these steps in order:
-
-- **Create targets**. Targets are created. Some target properties should match
-  the source. This means matching the PSE engine if the source uses a custom
-  one. If the source is partitioned, the targets should be partitioned as
-  well, etc.
-
-- **Initial bulk copy.** After the targets are created, copy all the documents
-  in the source shard to the targets. This operation should be as optimized as
-  possible as it could potentially copy tens of GBs of data. For this reason
-  this piece of code will be closer to what the compactor does.
-
-- **Build indices**. The source shard might have had up-to-date indices and so
-  it is beneficial for the split version to have them as well. Here we'd
-  inspect all `_design` docs and rebuild all the known indices. After this step
-  there will be a "topoff" step to replicate any change that might have
-  occurred on the source while the indices were built.
-
-- **Update shard map**. Here the global shard map is updated to remove the old
-  source shard and replace it with the targets. A corresponding entry will be
-  added to the shard's document changelog indicating that a split happened. To
-  avoid conflicts being generated when multiple copies of a range finish
-  splitting and race to update the shard map, all shard map updates will be
-  routed through one consistently picked node (the lowest in the list of
-  connected nodes when they are sorted). After the shard map is updated, there
-  will be another topoff replication job to bring in changes from the source
-  shard to the targets that might have occurred while the shard map was
-  updating.
-
-- **Delete source shard**
-
-This progression of split states will be visible when inspecting a job's status
-as well as in the history in the `detail` field of each event.
-
-
-# Advantages and Disadvantages
-
-The main advantage is the ability to dynamically change the shard size
-distribution on a cluster in response to changing user requirements, without
-having to delete and recreate databases.
-
-One disadvantage is that it might break the basic assumption that all copies
-of a shard range are the same size. A user could choose, for example, to split
-the shard copy 00..-ff.. on node1 only, so on node2 and node3 the copy stays
-00..-ff.. but on node1 there will now be 00..-7f.. and 80..-ff.. External
-tooling inspecting the $db/_shards endpoint might need to be updated to handle
-this scenario. A mitigating factor here is that resharding in the current
-proposal is not automatic; it is an operation triggered manually by the users.
-
-# Key Changes
-
-The main change is the ability to split shards via the `_reshard/*` HTTP API.
-
-## Applications and Modules affected
-
-Most of the changes will be in the *mem3* application with some changes in the *couch* application as well.
-
-## HTTP API additions
-
-* `GET /_reshard`
-
-Top-level summary. Besides the new `_reshard` endpoint, there is a `reason`
-field and the stats are more detailed.
-
-Returns
-
-```
-{
-    "completed": 3,
-    "failed": 4,
-    "running": 0,
-    "state": "stopped",
-    "state_reason": "Manual rebalancing",
-    "stopped": 0,
-    "total": 7
-}
-```
-
-* `PUT /_reshard/state`
-
-Start or stop global resharding.
-
-Body
-```
-{
-    "state": "stopped",
-    "reason": "Manual rebalancing"
-}
-```
-
-Returns
-
-```
-{
-    "ok": true
-}
-```
-
-* `GET /_reshard/state`
-
-Return global resharding state and reason.
-
-```
-{
-    "reason": "Manual rebalancing",
-    "state": "stopped"
-}
-```
-
-* `GET /_reshard/jobs`
-
-Get the state of all the resharding jobs on the cluster. Now we have a detailed
-state transition history which looks similar to what `_scheduler/jobs` has.
-
-```
-{
-    "jobs": [
-        {
-            "history": [
-                {
-                    "detail": null,
-                    "timestamp": "2019-02-06T22:28:06Z",
-                    "type": "new"
-                },
-                ...
-                {
-                    "detail": null,
-                    "timestamp": "2019-02-06T22:28:10Z",
-                    "type": "completed"
-                }
-            ],
-            "id": "001-0a308ef9f7bd24bd4887d6e619682a6d3bb3d0fd94625866c5216ec1167b4e23",
-            "job_state": "completed",
-            "node": "node1@127.0.0.1",
-            "source": "shards/00000000-ffffffff/db1.1549492084",
-            "split_state": "completed",
-            "start_time": "2019-02-06T22:28:06Z",
-            "state_info": {},
-            "target": [
-                "shards/00000000-7fffffff/db1.1549492084",
-                "shards/80000000-ffffffff/db1.1549492084"
-            ],
-            "type": "split",
-            "update_time": "2019-02-06T22:28:10Z"
-        },
-        {
-           ....
-        },
-   ],
-   "offset": 0,
-   "total_rows": 7
-}
-```
-
-* `POST /_reshard/jobs`
-
-Create a new resharding job. This can now take other parameters and can split multiple ranges.
-
-To split one shard on a particular node
-
-```
-{
-    "type": "split",
-    "shard": "shards/80000000-bfffffff/db1.1549492084"
-    "node": "node1@127.0.0.1"
-}
-```
-
-To split a particular range on all nodes:
-
-```
-{
-     "type": "split",
-     "db" : "db1",
-     "range" : "80000000-bfffffff"
-}
-```
-
-To split a range on just one node:
-
-```
-{
-     "type": "split",
-     "db" : "db1",
-     "range" : "80000000-bfffffff",
-     "node": "node1@127.0.0.1"
-}
-```
-
-To split all ranges of a db on one node:
-
-```
-{
-     "type": "split",
-     "db" : "db1",
-     "node": "node1@127.0.0.1"
-}
-```
-
-The result may now contain multiple job IDs
-
-```
-[
-    {
-        "id": "001-d457a4ea82877a26abbcbcc0e01c4b0070027e72b5bf0c4ff9c89eec2da9e790",
-        "node": "node1@127.0.0.1",
-        "ok": true,
-        "shard": "shards/80000000-bfffffff/db1.1549986514"
-    },
-    {
-        "id": "001-7c1d20d2f7ef89f6416448379696a2cc98420e3e7855fdb21537d394dbc9b35f",
-        "node": "node1@127.0.0.1",
-        "ok": true,
-        "shard": "shards/c0000000-ffffffff/db1.1549986514"
-    }
-]
-```
-
-* `GET /_reshard/jobs/$jobid`
-
-Get just one job by its ID
-
-```
-{
-    "history": [
-        {
-            "detail": null,
-            "timestamp": "2019-02-12T16:55:41Z",
-            "type": "new"
-        },
-        {
-            "detail": "Shard splitting disabled",
-            "timestamp": "2019-02-12T16:55:41Z",
-            "type": "stopped"
-        }
-    ],
-    "id": "001-d457a4ea82877a26abbcbcc0e01c4b0070027e72b5bf0c4ff9c89eec2da9e790",
-    "job_state": "stopped",
-    "node": "node1@127.0.0.1",
-    "source": "shards/80000000-bfffffff/db1.1549986514",
-    "split_state": "new",
-    "start_time": "1970-01-01T00:00:00Z",
-    "state_info": {
-        "reason": "Shard splitting disabled"
-    },
-    "target": [
-        "shards/80000000-9fffffff/db1.1549986514",
-        "shards/a0000000-bfffffff/db1.1549986514"
-    ],
-    "type": "split",
-    "update_time": "2019-02-12T16:55:41Z"
-}
-```
-
-* `GET /_reshard/jobs/$jobid/state`
-
-Get the running state of a particular job only
-
-```
-{
-    "reason": "Shard splitting disabled",
-    "state": "stopped"
-}
-```
-
-* `PUT /_reshard/jobs/$jobid/state`
-
-Stop or resume a particular job
-
-Request body
-
-```
-{
-     "state": "stopped",
-     "reason": "Pause this job for now"
-}
-```
-
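-For illustration, a sketch of driving this proposed API from Python with the
-`requests` library; the URL, credentials, database name and range below are
-made up:
-
-```
-import requests
-
-BASE = 'http://adm:pass@127.0.0.1:5984'  # hypothetical admin URL
-
-# Split one range of db1 on one node, then poll each job's state.
-jobs = requests.post(BASE + '/_reshard/jobs', json={
-    'type': 'split',
-    'db': 'db1',
-    'range': '80000000-bfffffff',
-    'node': 'node1@127.0.0.1',
-}).json()
-
-for job in jobs:
-    state = requests.get(BASE + '/_reshard/jobs/%s/state' % job['id']).json()
-    print(job['id'], state['state'])
-```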
-
-## HTTP API deprecations
-
-None
-
-# Security Considerations
-
-None.
-
-# References
-
-Original RFC-as-an-issue:
-
-https://github.com/apache/couchdb/issues/1920
-
-Most of the discussion regarding this has happened on the `@dev` mailing list:
-
-https://mail-archives.apache.org/mod_mbox/couchdb-dev/201901.mbox/%3CCAJd%3D5Hbs%2BNwrt0%3Dz%2BGN68JPU5yHUea0xGRFtyow79TmjGN-_Sg%40mail.gmail.com%3E
-
-https://mail-archives.apache.org/mod_mbox/couchdb-dev/201902.mbox/%3CCAJd%3D5HaX12-fk2Lo8OgddQryZaj5KRa1GLN3P9LdYBQ5MT0Xew%40mail.gmail.com%3E
-
-
-# Acknowledgments
-
-@davisp @kocolosk : Collaborated on the initial idea and design
-
-@mikerhodes @wohali @janl @iilyak : Additionally collaborated on API design
diff --git a/rfcs/003-fdb-seq-index.md b/rfcs/003-fdb-seq-index.md
deleted file mode 100644
index 50634af..0000000
--- a/rfcs/003-fdb-seq-index.md
+++ /dev/null
@@ -1,244 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Data Model and Index Management for _changes in FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-Data Model and Index Management for `_changes` in FoundationDB
-
-## Abstract
-
-This document describes how to implement the `by_seq` index that supports the
-`_changes` endpoints in FoundationDB. It covers the data model, index
-maintenance, and access patterns.
-
-The basic data model is one where the key is a `Sequence` (as defined below) and
-the value is a document ID, revision, and branch count.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-`Versionstamp`: a 12 byte, unique, monotonically (but not sequentially)
-increasing value for each committed transaction. The first 8 bytes are the
-committed version of the database. The next 2 bytes are monotonic in the
-serialization order for transactions. The final 2 bytes are user-defined and can
-be used to create multiple versionstamps in a single transaction.
-
-`Incarnation`: a monotonically increasing Integer value specified for each
-CouchDB database. The `Incarnation` starts at zero (i.e. `\x14` in the tuple
-layer encoding) when a database is created and is incremented by one whenever a
-database is relocated to a different FoundationDB cluster. Thus the majority of
-the time an Incarnation fits into a single byte, or two bytes if the database
-has been moved around a small number of times.
-
-`Sequence`: the combination of the current `Incarnation` for the database and
-the `Versionstamp` of the transaction. Sequences are monotonically increasing
-even when a database is relocated across FoundationDB clusters.
-
-`style=all_docs`: An optional query parameter to the `_changes` feed which
-requests that all leaf revision ids are included in the response. The replicator
-(one of the most frequent consumers of `_changes`) supplies this parameter.
-
----
-
-# Detailed Description
-
-The `_changes` feed provides a list of the documents in a given database, in the
-order in which they were most recently updated. Each document shows up exactly
-once in a normal response to the `_changes` feed.
-
-In CouchDB 2.x and 3.x the database sequence is a composition of sequence
-numbers from individual database shards. In the API this sequence is encoded as
-a long Base64 string. The response to the `_changes` feed is not totally
-ordered; the only guarantee is that a client can resume the feed from a given
-sequence and be guaranteed not to miss any updates.
-
-Future releases of CouchDB based on FoundationDB will be able to offer stronger
-guarantees. The `Sequence` defined in the Terminology section above is totally
-ordered across the entire cluster, and repeated calls to `_changes` on a
-quiescent database will retrieve the same results in the same order. The
-`Sequence` will still be encoded as a string, but as it's a more compact value
-we propose to encode it in hexadecimal notation. These strings will sort
-correctly, something that has not always been true in CouchDB 2.x.
-
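-For example, one possible encoding, sketched with the Python bindings (the
-exact byte layout is an assumption, not fixed by this RFC):
-
-```
-import fdb
-
-fdb.api_version(610)
-
-def encode_sequence(incarnation, versionstamp):
-    # Tuple-layer packing is order-preserving, and hex encoding preserves
-    # lexicographic byte order, so these strings sort correctly.
-    return fdb.tuple.pack((incarnation, versionstamp)).hex()
-```
-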
-## Data Model
-
-Each database will contain a `changes` subspace with keys and values that take
-the form
-
-`("changes", Sequence) = (SeqFormat, DocID, RevPosition, RevHash, BranchCount,
-NotDeleted)`
-
-where the individual elements are defined as follows:
-
-- `SeqFormat`: enum for the value encoding, to enable schema evolution
-- `DocID`: the document ID
-- `RevPosition`: positive integer encoded using standard tuple layer encoding
-  (signed, variable-length, order-preserving)
-- `RevHash`: 16 bytes uniquely identifying the winning revision of this document
-- `Sequence`: the sequence of the last transaction that modified the document
-  (NB: not necessarily the transaction that produced the `RevPosition-RevHash`
-  edit).
-- `BranchCount`: the number of edit branches associated with this document
-- `NotDeleted`: `\x26` if the leaf of the edit branch is deleted, `\x27`
-  otherwise (following tuple encoding for booleans)
-
-A typical response to `_changes` includes all of this information in each row
-except the internal `SeqFormat` and the `BranchCount`. The latter is used as an
-optimization for the `style=all_docs` request; if this parameter is specified
-and the `BranchCount` is 1 we can avoid making an extra request to the
-"revisions" space to discover that there are no other revisions to include.
-
-## Index Maintenance
-
-As discussed in [RFC 001](001-fdb-revision-metadata-model.md), an update attempt
-always retrieves the metadata KV for the current winning branch from the
-"revisions" subspace. This metadata entry includes the sequence of the last edit
-to the document, which serves as the key into the index in our "changes"
-subspace. The writer will use that information to clear the existing KV from the
-`_changes` subspace as part of the transaction.
-
-The writer also knows in all cases what the `RevPosition`, `RevHash`,
-`BranchCount`, and `NotDeleted` will be following the edit, and can use the
-`set_versionstamped_key` API to write a new KV with the correct new sequence of
-the transaction into the "changes" subspace.
-
-In short, the operations in this subspace are
-- doc insert: 0 read, 0 clear, 1 insert
-- doc update: 0 read, 1 clear, 1 insert
-
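-A sketch of that clear-and-insert pair, continuing the Python sketch above
-(subspace and function names hypothetical; the Incarnation prefix of the
-sequence is omitted for brevity):
-
-```
-changes = fdb.Subspace(('changes',))
-
-@fdb.transactional
-def update_changes_index(tr, old_seq, seq_format, doc_id, rev_pos, rev_hash,
-                         branch_count, not_deleted):
-    if old_seq is not None:
-        tr.clear(changes.pack((old_seq,)))  # clear the stale entry
-    # The new sequence is the versionstamp of this very transaction.
-    key = fdb.tuple.pack_with_versionstamp(
-        ('changes', fdb.tuple.Versionstamp()))
-    value = fdb.tuple.pack(
-        (seq_format, doc_id, rev_pos, rev_hash, branch_count, not_deleted))
-    tr.set_versionstamped_key(key, value)
-```
-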
-## Handling of Unknown Commit Results
-
-When using versionstamped keys as proposed in this RFC one needs to pay
-particular care to the degraded mode when FoundationDB responds to a transaction
-commit with `commit_unknown_result`. Versionstamped keys are not idempotent, and
-so a naïve retry approach could result in duplicate entries in the "changes"
-subspace. The index maintenance in this subspace is "blind" (i.e. no reads in
-this subspace are performed), so the risk for duplicate entries is indeed a
-valid concern.
-
-We can guard against creating duplicates in the "changes" subspace by having the
-transaction that updates that subspace also insert a KV into a dedicated
-"transaction ID" subspace specifically corresponding to this document update. If
-the CouchDB layer receives a `commit_unknown_result` it can simply check for the
-presence of the transaction ID in FoundationDB to determine whether the previous
-transaction succeeded or failed. If the transaction ID is not present, CouchDB
-can safely retry with the same transaction ID. After a successful transaction
-commit, the CouchDB layer can delete the transaction ID KV asynchronously. For
-example, each process could dump the transaction ID of a successful commit into
-a local ets table (shared by all databases), and a process could scan that table
-once every few seconds and clear the associated entries from FDB in a single
-transaction.
-
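-A sketch of that guard, with hypothetical names; note that the
-`@fdb.transactional` decorator already re-runs the function on retryable
-errors such as `commit_unknown_result`:
-
-```
-import os
-
-txids = fdb.Subspace(('txid',))
-
-@fdb.transactional
-def guarded_update(tr, txid, update_fun):
-    # If an earlier attempt actually committed, its transaction ID is
-    # present and the versionstamped writes must not be applied again.
-    if tr[txids.pack((txid,))].present():
-        return
-    update_fun(tr)
-    tr[txids.pack((txid,))] = b''
-
-# Usage: guarded_update(db, os.urandom(16), lambda tr: ...)
-```
-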
-## Access Patterns
-
-Let's consider first the simple case where an entire response to `_changes` fits
-within a single FoundationDB transaction (specifically the 5 second limit). In
-this case a normal request to `_changes` can be satisfied with a single range
-read from the "changes" subspace. A `style=all_docs` request will need to check
-the `BranchCount` for each row; if it's larger than 1, the client will need to
-do a followup range request against the "revisions" subspace to retrieve the
-additional revision identifiers to include in the response. A request with
-`include_docs=true` will need to make a separate range request to the doc
-storage subspace to retrieve the body of each winning document revision.
-
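-A sketch of that single range read with the Python bindings (hypothetical
-names; the followup requests for `style=all_docs` and `include_docs=true` are
-omitted):
-
-```
-@fdb.transactional
-def changes_rows(tr):
-    changes = fdb.Subspace(('changes',))
-    rows = []
-    for k, v in tr[changes.range()]:
-        (seq,) = changes.unpack(k)
-        # v unpacks to (SeqFormat, DocID, RevPosition, RevHash,
-        # BranchCount, NotDeleted).
-        rows.append((seq,) + fdb.tuple.unpack(v))
-    return rows
-```
-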
-If a normal response to `_changes` cannot be delivered in a single transaction
-the CouchDB layer should execute multiple transactions in series and stitch the
-responses together as needed. Note that this opens up a subtle behavior change
-from classic CouchDB, where a single database snapshot could be held open
-~indefinitely for each shard, providing a complete snapshot of the database as
-it existed at the *beginning* of the response. While future enhancements in
-FoundationDB may allow us to recover that behavior, in the current version we
-may end up with duplicate entries for individual documents that are updated
-during the course of streaming the `_changes` response. The end result will be
-that each document in the database shows up at least once, and if you take the
-last entry for each document that you observe in the feed, you'll have the state
-of the database as it existed at the *end* of the response.
-
-Finally, when a user requests `_changes` with `feed=continuous` there is no
-expectation of exactly-once semantics, and in fact this is implemented using
-multiple database snapshots for each shard today. The extra bit of work with
-this response type is to efficiently discover when a new read of the "changes"
-subspace for a given database is required in FoundationDB. A few different
-options have been discussed on the mailing list:
-
-1. Writers publish `db_updated` events to `couch_event`, listeners use
-   distributed Erlang to subscribe to all nodes, similar to the classic
-   approach.
-1. Poll the `_changes` subspace, scale by nominating a specific process per node
-   to do the polling.
-1. Same as above but using a watch on DB metadata that changes with every update
-   instead of polling.
-
-This RFC proposes to pursue the second approach. It preserves the goal of a
-stateless CouchDB layer with no coordination between instances, and has a
-well-known scalability and performance profile.
-
-# Advantages
-
-This design eliminates "rewinds" of the `_changes` feed due to cluster
-membership changes, and enhances database sequences to enable relocation of
-logical CouchDB databases across FoundationDB clusters without rewinds as well.
-
-We anticipate improved throughput due to the more compact encoding of database
-sequences.
-
-The new sequence format always sorts correctly, which simplifies the job of
-consumers tracking the sequence from which they should resume in parallel
-processing environments.
-
-# Disadvantages
-
-It will not be possible to retrieve a complete point-in-time snapshot of a large
-database in which each document appears exactly once. This may change with a
-future enhancement to the storage engine underpinning FoundationDB.
-
-# Key Changes
-
-Nothing additional to report here.
-
-## Applications and Modules affected
-
-TBD depending on exact code layout going forward, but this functionality cuts
-across several core modules of CouchDB.
-
-## HTTP API additions
-
-None.
-
-## HTTP API deprecations
-
-None.
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list
-discussion](https://lists.apache.org/thread.html/29d69efc47cb6328977fc1c66efecaa50c5d93a2f17aa7a3392211af@%3Cdev.couchdb.apache.org%3E)
-
-[Detailed thread on isolation semantics for long
-responses](https://lists.apache.org/thread.html/a4429197919e66ef0193d128872e17b3b62c1f197918df185136b35d@%3Cuser.couchdb.apache.org%3E)
-
-# Acknowledgements
-
-Thanks to @iilyak, @rnewson, @mikerhodes, @garrensmith and @alexmiller-apple for
-comments on the mailing list discussions, and to @wohali for working through the
-implications of the isolation changes on IRC.
diff --git a/rfcs/004-document-storage.md b/rfcs/004-document-storage.md
deleted file mode 100644
index bbfd8c6..0000000
--- a/rfcs/004-document-storage.md
+++ /dev/null
@@ -1,251 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'JSON document storage in FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-This document describes a data model for storing JSON documents as key-value
-pairs in FoundationDB. It includes a discussion of storing multiple versions of
-the document, each identified by unique revision identifiers, and discusses some
-of the operations needed to query and modify these documents.
-
-## Abstract
-
-The data model maps each "leaf" JSON value (number, string, true, false, and
-null) to a single KV in FoundationDB. Nested relationships are modeled using a
-tuple structure in the keys. Different versions of a document are stored
-completely independently from one another. Values are encoded using
-FoundationDB's tuple encoding.
-
-The use of a single KV pair for each leaf value implies a new 100KB limit on
-those values stored in CouchDB documents. An alternative design could split
-these large (string) values across multiple KV pairs.
-
-Extremely deeply-nested data structures and the use of long names in the nesting
-objects could cause a path to a leaf value to exceed FoundationDB's 10KB limit
-on key sizes. String interning could reduce the likelihood of this occurring but
-not eliminate it entirely. Interning could also provide some significant space
-savings in the current FoundationDB storage engine, although the introduction of
-key prefix elision in the Redwood engine should also help on that front.
-
-FoundationDB imposes a hard 10MB limit on transactions. In order to reserve
-space for additional metadata, user-defined indexes, and generally drive users
-towards best practices in data modeling, this RFC proposes a **1MB (1,000,000
-byte)** limit on document sizes going forward.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
----
-
-# Detailed Description
-
-## Value Encoding
-
-The `true` (`\x27`), `false` (`\x26`) and `null` (`\x00`) values each have a
-single-byte encoding in FoundationDB's tuple layer. Integers are represented
-with arbitrary precision (technically, up to 255 bytes can be used).
-Floating-point numbers use an IEEE binary representation up to double precision.
-More details on these specific byte codes are available in the [FoundationDB
-documentation](https://github.com/apple/foundationdb/blob/6.0.18/design/tuple.md).
-
-Unicode strings must be encoded into UTF-8. They are prefixed with a `\x02`
-bytecode and are null-terminated. Any nulls within the string must be replaced
-by `\x00\xff`. Raw byte strings have their own `\x01` prefix and must follow the
-same rules regarding null bytes in the string. Both are limited to 100KB.
-
-An object is decomposed into multiple key-value pairs, where each key is a tuple
-identifying the path to a final leaf value. For example, the object
-
-```
-{
-    "foo": {
-        "bar": {
-            "baz": 123
-        }
-    }
-}
-```
-
-would be represented by a key-value pair of
-
-```
-pack({"foo", "bar", "baz"}) = pack({123})
-```
-
-Clients SHOULD NOT submit objects containing duplicate keys, as CouchDB will
-only preserve the last occurrence of the key and will silently drop the other
-occurrences. Similarly, clients MUST NOT rely on the ordering of keys within an
-Object as this ordering will generally not be preserved by the database.
-
-An array of N elements is represented by N distinct key-value pairs, where the
-last element of the tuple key is an integer representing the zero-indexed
-position of the value within the array. As an example:
-
-```
-{
-    "states": ["MA", "OH", "TX", "NM", "PA"]
-}
-```
-
-becomes
-
-```
-pack({"states", 0}) = pack({"MA"})
-pack({"states", 1}) = pack({"OH"})
-pack({"states", 2}) = pack({"TX"})
-pack({"states", 3}) = pack({"NM"})
-pack({"states", 4}) = pack({"PA"})
-```
-
-More details on the encodings in the FoundationDB Tuple Layer can be found in
-the [design
-documentation](https://github.com/apple/foundationdb/blob/6.0.18/design/tuple.md).
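-
-To make the mapping concrete, here is a small sketch using the FoundationDB
-Python bindings (the implementation itself targets Erlang; Python is used here
-purely for illustration, and the byte literals in the comments follow the
-tuple encoding described above):
-
-```python
-import fdb
-fdb.api_version(600)  # any supported API version works for the tuple layer
-
-# {"foo": {"bar": {"baz": 123}}} decomposes into a single KV pair whose key
-# is the tuple-encoded path to the leaf and whose value is the leaf itself.
-key = fdb.tuple.pack(("foo", "bar", "baz"))  # b'\x02foo\x00\x02bar\x00\x02baz\x00'
-value = fdb.tuple.pack((123,))               # b'\x15{' (compact integer encoding)
-
-# {"states": ["MA", "OH", ...]} becomes one KV pair per array element, with
-# the zero-based index as the last element of the key tuple.
-rows = {fdb.tuple.pack(("states", i)): fdb.tuple.pack((s,))
-        for i, s in enumerate(["MA", "OH", "TX", "NM", "PA"])}
-
-# Round-tripping recovers the original path.
-assert fdb.tuple.unpack(key) == ("foo", "bar", "baz")
-```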
-
-## Document Subspace and Versioning
-
-Document bodies will be stored in their own portion of the keyspace with a fixed
-single-byte prefix identifying the "subspace". Each revision of a document will
-be stored separately without term sharing, and the document ID and revision ID
-are baked into the key. The structure looks like this
-
-```
-{DbName, ?DOCUMENTS, DocID, NotDeleted, RevPos, RevHash} = RevisionMetadata
-{DbName, ?DOCUMENTS, DocID, NotDeleted, RevPos, RevHash, "foo"} = (value for doc.foo)
-et cetera
-```
-
-where `RevisionMetadata` includes at the minimum an enum to enable schema
-evolution for subsequent changes to the document encoding structure, and
-`NotDeleted` is `true` if this revision is a typical `deleted=false` revision,
-and `false` if the revision is storing user-supplied data associated with the
-tombstone. Regular document deletions without any data in the tombstone do not
-show up in the `?DOCUMENTS` subspace at all. This key structure ensures that in
-the case of multiple edit branches the "winning" revision's data will sort last
-in the key space.
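-
-As an illustrative sketch of how such keys could be constructed, and of why
-the winning revision sorts last (Python bindings again; the subspace prefix
-and helper function are placeholders, not the actual CouchDB constants):
-
-```python
-import fdb
-fdb.api_version(600)
-
-DOCUMENTS = b'\x01'  # hypothetical single-byte subspace prefix
-
-def doc_key(db_prefix, doc_id, not_deleted, rev_pos, rev_hash):
-    # {DbName, ?DOCUMENTS, DocID, NotDeleted, RevPos, RevHash}
-    return db_prefix + DOCUMENTS + fdb.tuple.pack(
-        (doc_id, not_deleted, rev_pos, rev_hash))
-
-# In the tuple encoding false (\x26) sorts before true (\x27), so for a given
-# DocID a live (NotDeleted=true) revision always sorts after any tombstone,
-# and among live revisions a higher RevPos sorts later.
-k_tombstone = doc_key(b'db1', 'doc1', False, 2, b'\xab' * 16)
-k_live      = doc_key(b'db1', 'doc1', True, 1, b'\xcd' * 16)
-assert k_live > k_tombstone
-```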
-
-## CRUD Operations
-
-FoundationDB transactions have a hard limit of 10 MB each. Our document
-operations will need to modify some metadata alongside the user data, and we'd
-also like to reserve space for updating indexes as part of the same transaction.
-This document proposes to limit the maximum document size to **1 MB (1,000,000
-bytes)** going forward (excluding attachments).
-
-A document insert does not need to clear any data in the `?DOCUMENTS` subspace,
-and simply inserts the new document content. The transaction will issue a read
-against the `?REVISIONS` subspace to ensure that no `NotDeleted` revision
-already exists.
-
-A document update targeting a parent revision will clear the entire range of
-keys associated with the parent revision in the `?DOCUMENTS` space as part of
-its transaction. Again, the read in the `?REVISIONS` space ensures that this
-transaction can only succeed if the parent revision is actually a leaf revision.
-
-Document deletions are a special class of update that typically do not insert
-any keys into the `?DOCUMENTS` subspace. However, if a user includes extra
-fields in the deletion they will show up in this subspace.
-
-Document reads where we already know the specific revision of interest can be
-done efficiently using a single `get_range_startswith` operation. In the more
-common case where we do not know the revision identifier, there are two basic
-options:
-
-1. We can retrieve the winning revision ID from the `?REVISIONS` subspace, then
-   execute a `get_range_startswith` operation as above.
-1. We can start streaming the entire key range from the `?DOCUMENTS` space
-   prefixed by `DocID` in reverse, and break if we reach another revision of the
-   document ID besides the winning one.
-
-Document reads specifying `conflicts`, `deleted_conflicts`, `meta`, or
-`revs_info` will need to retrieve the revision metadata from the `?REVISIONS`
-subspace alongside the document body regardless of which option we pursue above.
-
-If a reader is implementing Option 2 and does not find any keys associated with
-the supplied `DocID` in the `?DOCUMENTS` space, it will need to do a followup
-read on the `?REVISIONS` space in order to determine whether the appropriate
-response is `{"not_found": "missing"}` or `{"not_found": "deleted"}`.
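-
-A sketch of Option 2 in Python (illustrative only; `docs` stands in for the
-`?DOCUMENTS` subspace of a single database, and error handling is omitted):
-
-```python
-import fdb
-fdb.api_version(600)
-db = fdb.open()
-docs = fdb.Subspace(('documents',))  # stand-in for the ?DOCUMENTS subspace
-
-@fdb.transactional
-def read_winner(tr, doc_id):
-    # Stream the DocID's keys in reverse; the winning revision sorts last,
-    # so its keys arrive first.
-    winner, rows = None, []
-    for kv in tr.get_range_startswith(docs.pack((doc_id,)), reverse=True):
-        rev = docs.unpack(kv.key)[1:4]  # (NotDeleted, RevPos, RevHash)
-        if winner is None:
-            winner = rev
-        elif rev != winner:
-            break  # reached a different (losing) revision; stop streaming
-        rows.append(kv)
-    return winner, rows
-```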
-
-# Advantages and Disadvantages
-
-A leading alternative to this design in the mailing list discussion was to
-simply store each JSON document as a single key-value pair. Documents exceeding
-the 100KB value threshold would be chunked up into contiguous key-value pairs.
-The advantages of this "exploded" approach are
-
-- it lends itself nicely to sub-document operations, e.g. apache/couchdb#1559
-- it optimizes the creation of Mango indexes on existing databases since we only
-  need to retrieve the value(s) we want to index
-- it optimizes Mango queries that use field selectors
-
-The disadvantages of this approach are that it uses a larger number of key-value
-pairs and has a higher overall storage overhead from the repeated common key
-prefixes. The new FoundationDB storage engine should eliminate some of the
-storage overhead.
-As per the [FoundationDB discussion about being able to co-locate compute operations with data storage servers/nodes](https://forums.foundationdb.org/t/feature-request-predicate-pushdown/954/6), if we were to make use of this hypothetical feature, we'd not get a guarantee of entire documents being co-located on one storage node, requiring us to do extra work should we want to, say, assemble a full `doc` to send to a map function. JS views would have a harder time, while Mango indexes with [...]
-
-
-# Key Changes
-
-- Individual strings within documents are limited to 100 KB each.
-- The "path" to a leaf value within a document can be no longer than 10 KB.
-- The entire JSON document is limited to 1 MB (1,000,000 bytes).
-
-Size limitations aside, this design preserves all of the existing API options
-for working with CouchDB documents.
-
-## Applications and Modules affected
-
-TBD depending on exact code layout going forward.
-
-## HTTP API additions
-
-None.
-
-## HTTP API deprecations
-
-None, aside from the more restrictive size limitations discussed in the Key
-Changes section above.
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list discussion](https://lists.apache.org/thread.html/fb8bdd386b83d60dc50411c51c5dddff7503ece32d35f88612d228cc@%3Cdev.couchdb.apache.org%3E)
-
-[Draft RFC for revision metadata](https://github.com/apache/couchdb-documentation/blob/rfc/001-fdb-revision-model/rfcs/001-fdb-revision-metadata-model.md)
-
-[Current version of Tuple Layer documentation](https://github.com/apple/foundationdb/blob/6.0.18/design/tuple.md)
-
-# Acknowledgements
-
-We had lots of input on the mailing list in this discussion, thanks to
-
-- @banjiewen
-- @davisp
-- @ermouth
-- @iilyak
-- @janl
-- @mikerhodes
-- @rnewson
-- @vatamane
-- @wohali
-- Michael Fair.
-- Reddy B.
diff --git a/rfcs/005-all-docs-index.md b/rfcs/005-all-docs-index.md
deleted file mode 100644
index c368e5a..0000000
--- a/rfcs/005-all-docs-index.md
+++ /dev/null
@@ -1,207 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: Implementation of _all_docs and DB info metadata in FoundationDB
-labels: rfc, discussion
-assignees: ''
-
----
-
-# Introduction
-
-## Abstract
-
-This document describes how to maintain an index of all the documents in a
-database backed by FoundationDB, one sufficient to power the `_all_docs`
-endpoint. It also addresses the individual metadata fields included in the
-response to a `GET /dbname` request.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-[TIP]:  # ( Provide a list of any unique terms or acronyms, and their definitions here.)
-
----
-
-# Detailed Description
-
-## _all_docs
-
-Normal requests to the `_all_docs` index will be powered by a dedicated subspace
-containing a single key for each document in the database that has at least one
-deleted=false entry in the revisions subspace. This dedicated subspace can be
-populated by blind writes on each update transaction, as the revisions subspace
-ensures proper coordination of concurrent writers trying to modify the same
-document. The structure of the keys in this space looks like
-
-```
-(?BY_ID, DocID) = (ValueFormat, RevPosition, RevHash)
-```
-
-where the individual elements are defined as follows:
-
-* ValueFormat: enum for the value encoding, to enable schema evolution
-* DocID: the document ID
-* RevPosition: positive integer encoded using standard tuple layer encoding
-* RevHash: 16 bytes uniquely identifying the winning revision of this document
-
-If a transaction deletes the last "live" edit branch of a document, it must also
-clear the corresponding entry for the document from this subspace.
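-
-A sketch of serving a plain `_all_docs` request from this subspace (Python
-bindings; the subspace name and revision formatting are illustrative):
-
-```python
-import fdb
-fdb.api_version(600)
-db = fdb.open()
-by_id = fdb.Subspace(('by_id',))  # stand-in for the ?BY_ID subspace
-
-@fdb.transactional
-def all_docs(tr, start_key=None, limit=100):
-    rng = by_id.range()
-    begin = by_id.pack((start_key,)) if start_key is not None else rng.start
-    rows = []
-    for kv in tr.get_range(begin, rng.stop, limit=limit):
-        (doc_id,) = by_id.unpack(kv.key)
-        _value_format, rev_pos, rev_hash = fdb.tuple.unpack(kv.value)
-        rows.append({'id': doc_id, 'key': doc_id,
-                     'value': {'rev': '%d-%s' % (rev_pos, rev_hash.hex())}})
-    return {'rows': rows}
-```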
-
-A request that specifies `include_docs=true` can be implemented either by
-performing a range request against this subspace and then N additional range
-requests explicitly specifying the full revision information in the `?DOCS`
-subspace, or by doing a full range scan directly against that subspace,
-discarding conflict bodies and any user data associated with deleted revisions.
-As the implementation choice there has no bearing on the actual data model we
-leave it unspecified in this RFC.
-
-## dbinfo
-
-The so-called "dbinfo" JSON object contains various bits of metadata about a
-database. Here's how we'll carry those forward:
-
-`db_name`: should be trivially accessible.
-
-`doc_count`: this will be maintained as a single key mutated using
-FoundationDB's atomic operations. Transactions that create a new document or
-re-create one where all previous edit branches had been deleted should increment
-the counter by 1.
-
-`doc_del_count`: as above, this is a key mutated using atomic operations.
-Transactions that tombstone the last deleted=false edit branch on a document
-should increment it by 1. Transactions that add a new deleted=false edit branch
-to a document where all previous edit branches were deleted must decrement it by
-1.
-
-The revisions model ensures that every transaction has enough information to
-know whether it needs to modify either or both of the above counters.
-
-`update_seq`: the most efficient way to retrieve this value is to execute a
-`get_key` operation using a `last_less_than` KeySelector on the end of the
-?CHANGES subspace, so no additional writes are required.
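-
-A sketch of both pieces in Python (the atomic counter updates and the
-`last_less_than` lookup; subspace names are illustrative):
-
-```python
-import struct
-import fdb
-fdb.api_version(600)
-db = fdb.open()
-meta = fdb.Subspace(('meta',))        # stand-in for the dbinfo metadata keys
-changes = fdb.Subspace(('changes',))  # stand-in for the ?CHANGES subspace
-
-@fdb.transactional
-def bump_doc_count(tr, delta):
-    # Atomic little-endian add: concurrent transactions do not conflict.
-    tr.add(meta.pack(('doc_count',)), struct.pack('<q', delta))
-
-@fdb.transactional
-def update_seq(tr):
-    # Find the newest ?CHANGES entry without performing any extra writes.
-    key = tr.get_key(fdb.KeySelector.last_less_than(changes.range().stop))
-    return changes.unpack(key) if changes.contains(key) else None
-```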
-
-`purge_seq`: TBD on a more detailed design for purge. If it ends up being
-entirely transactional then this could be fixed to `update_seq` or dropped
-entirely.
-
-### Data Sizes
-
-There are three distinct sizes that we currently track for every database:
-
-* `sizes.external`: described as the "number of bytes that would be required to
-  represent the contents outside of the database".
-* `sizes.active`: a theoretical minimum number of bytes to store this database
-  on disk.
-* `sizes.file`: the current number of bytes on disk.
-
-The relationship between `sizes.active` and `sizes.file` is used to guide
-decisions on database compaction. FoundationDB doesn't require compaction, and
-any distinction that might exist between these two quantities (e.g. from storage
-engine compression) is not surfaced up to the clients, so it probably doesn't
-make sense to have both.
-
-The current implementation of `sizes.external` does *not* measure the length of
-a JSON representation of the data, but rather the size of an uncompressed Erlang
-term representation of the JSON. This is a somewhat awkward choice as the
-internal Erlang term representation is liable to change over time (e.g. with the
-introduction of Maps in newer Erlang releases, or plausibly even a JSON decoder
-that directly emits the format defined in the document storage RFC).
-
-Assuming we can agree on a set of sizes and how they should be calculated, the
-implementation will require two pieces: a single key for each size, mutated by
-atomic operations, and a record of the size of each revision in the ?REVISIONS
-subspace so that a transaction can compute the delta for each document.
-
-### Clustering
-
-The `r`, `w`, `q`, and `n` values in the `cluster` object were introduced in
-CouchDB 2.x to describe the topology of a database and the default quorum
-settings for operations against it. If we wanted to bring these forward, here's
-how they'd be defined:
-
-* `r`: always fixed at 1
-
-* `w`: interpreted as the number of transaction logs that record a commit, this
-  is dependent on the `redundancy mode` for the underlying FoundationDB database
-
-* `n`: interpreted as number of storage servers that host a key, this is also
-  dependent on the `redundancy mode` for the underlying FoundationDB database
-
-* `q`: the closest analogue here would be to use the `get_boundary_keys` API and
-  report the number of distinct ranges implied by the boundary keys
-
-This interpretation could lead to some surprises, though. For example, "r=1,
-w=4, n=3" is a popular configuration, but this is nonsensical for someone
-expecting to see Dynamo-style numbers. Ignoring backwards compatibility, the
-sensible thing is to point users toward the actual FoundationDB configuration
-information, and to deprecate this entire `cluster` object. Open for discussion.
-
-# Advantages and Disadvantages
-
-[NOTE]: # ( Briefly, list the benefits and drawbacks that would be realized should )
-[NOTE]: # ( the proposal be accepted for inclusion into Apache CouchDB. )
-
-# Key Changes
-
-The underlying transaction in FoundationDB must complete within 5 seconds, which
-implicitly limits the number of results that can be returned in a single
-_all_docs invocation.
-
-## Applications and Modules affected
-
-TBD depending on exact code layout going forward.
-
-## HTTP API additions
-
-None.
-
-## HTTP API deprecations
-
-The `total_rows` and `offset` fields are removed from the response to
-`_all_docs`, which now has the simpler form
-
-    {"rows": [
-        {"id":"foo", "key":"foo", "value":{"rev":"1-deadbeef..."}},
-        ...
-    ]}
-
-The following fields are removed in the dbinfo response:
-
-* `compact_running`
-
-* `disk_format_version`: this is a tricky one. We define "format versions" for
-  every single type of key we're storing in FoundationDB, and those versions
-  could vary on a key-by-key basis, so listing a single number for an entire
-  database is sort of ill-posed. 
-
-
-The following fields are already marked as deprecated and can be removed in the
-next major release, independent of the FoundationDB work:
-
-* `instance_start_time`
-* `other`
-* `data_size`
-* `disk_size`
-
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[TIP]:  # ( Include any references to CouchDB documentation, mailing list discussion, )
-[TIP]:  # ( external standards or other links here. )
-
-# Acknowledgements
-
-[TIP]:  # ( Who helped you write this RFC? )
\ No newline at end of file
diff --git a/rfcs/006-mango-fdb.md b/rfcs/006-mango-fdb.md
deleted file mode 100644
index 19f5f02..0000000
--- a/rfcs/006-mango-fdb.md
+++ /dev/null
@@ -1,149 +0,0 @@
-# Mango RFC
-
----
-
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Mango JSON indexes in FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-[note]: # " ^^ Provide a general summary of the RFC in the title above. ^^ "
-
-# Introduction
-
-This document describes the data model, querying and indexing management for Mango JSON indexes with FoundationDB.
-
-## Abstract
-
-This document details the data model for storing Mango indexes. Indexes will be updated in the transaction that a document is written to FoundationDB. When an index is created on an existing database, a background task will build the index up to the Sequence at which the index was created.
-
-## Requirements Language
-
-[note]: # " Do not alter the section below. Follow its instructions. "
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-`Sequence`: a 13-byte value formed by combining the current `Incarnation` of the database and the `Versionstamp` of the transaction. Sequences are monotonically increasing even when a database is relocated across FoundationDB clusters. See [RFC002](LINK TBD) for a full explanation.
-
----
-
-# Detailed Description
-
-Mango is a declarative JSON querying syntax that allows a user to retrieve documents based on a selector. Indexes can be defined to improve query performance. In CouchDB Mango is a query layer built on top of Map/Reduce indexes. Each Mango query follows a two-step process, first a subset of the selector is converted into a map query to be used with a predefined index or falling back to `_all_docs` if no indexes are available. Each document retrieved from the index is then matched against [...]
-
-With CouchDB on FoundationDB, all newly created Mango indexes have the `interactive: true` option set, so Mango indexes are updated in the same transaction in which a document is added or updated in the database.
-
-## Data Model
-
-### Index Definitions
-
-A Mango index is defined as:
-
-```json
-{
-  "name": "view-name",
-  "index": {
-    "fields": ["fieldA", "fieldB"]
-  },
-  "partial_filter_selector": {}
-}
-```
-
-The above index definition would be converted into a map index that looks like this:
-
-```json
-{
-  "_id": "_design/ddoc",
-  "language": "query",
-  "views": {
-    "view-name": {
-      "map": {
-        "fields": [{ "fieldA": "asc" }, { "fieldB": "asc" }],
-        "selector": {}
-      }
-    }
-  },
-  "options": [{ "autoupdate": false }, { "interactive": true }]
-}
-```
-
-- `{"autoupdate": false}` means that the index will not be auto updated in the background
-- `{"interactive": true}` configures the index to be updated in the document update transaction
-
-### Index Definition
-
-Mango indexes are a layer on top of map indexes. So the index definition is the same as the map index definition.
-
-### Index Limits
-
-This design has certain defined limits for it to work correctly:
-
-- The index definition (`name`, `fields` and `partial_filter_selector`) cannot exceed the 64 KB FDB value limit
-- The sorted keys for an index cannot exceed the 8 KB key limit
-- To be able to update the index in the transaction that a document is updated in, there will have to be a limit on the number of Mango indexes for a database so that the transaction stays within the 10MB transaction limit. This limit is still TBD based on testing.
-
-## Index building and management
-
-When an index is created on an existing database, the index will be updated by a background job up to the versionstamp at which the index was added to the database. The process for building a new index would be:
-
-1. Save the index to the database, along with a creation versionstamp, and set the index status to `building` so that it is not used to service any queries until it is updated. Add a job to `couch_jobs` to build the index.
-2. Any write requests (document updates) after the index definition is saved will update the index in the document update transaction. Index writers can assume that previous versions of the document have already been indexed.
-3. `couch_jobs` will start reading sections of the changes feed and building the index. This background process will keep processing changes until it reaches the creation versionstamp. Once it reaches that point, the index is up to date, `build_status` will be marked as `active`, and the index can be used to service queries.
-4. There is some subtle behavior around step 3 that is worth mentioning. The background process is subject to the 5-second transaction limit, so it will process smaller parts of the changes feed, which means that it won't have one consistent view of the changes feed throughout the index building process. This can lead to a conflict when the background process transaction is adding a document to the index while at the same time a write request has a transaction that is updating the [...]
-
-## Advantages
-
-- Indexes are kept up to date when documents are changed, meaning you can read your own writes
-- Makes Mango indexes first-class citizens and opens up the opportunity to create more Mango specific functionality
-
-## Disadvantages
-
-- FoundationDB currently does not allow CouchDB to do the document selector matching at the shard level. However, there is a discussion for this [Feature Request: Predicate pushdown](https://forums.foundationdb.org/t/feature-request-predicate-pushdown/954)
-
-## Key Changes
-
-- Mango indexes will be stored separately to Map/Reduce indexes.
-- Mango Indexes will be updated when a document is updated
-- A background process will build a new Mango index on an existing database
-- There are specific index limits mentioned in the Index Limits section.
-
-Index limitations aside, this design preserves all of the existing API options
-for working with CouchDB documents.
-
-## Applications and Modules affected
-
-The `mango` application will be modified to work with FoundationDB.
-
-## HTTP API additions
-
-When querying any of the `_index` endpoints an extra field, `build_status`, will be added to the index definition.
-The `build_status` will either be `building` or `active`.
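-
-For example, a client could poll `_index` until the new index is usable (a
-sketch using Python `requests`; the URL, database, and index names are
-placeholders):
-
-```python
-import time
-import requests
-
-def wait_for_index(base='http://127.0.0.1:5984', db='mydb',
-                   ddoc='_design/ddoc', name='view-name'):
-    # Poll until our index reports build_status == "active".
-    while True:
-        resp = requests.get('%s/%s/_index' % (base, db)).json()
-        for idx in resp.get('indexes', []):
-            if (idx.get('ddoc'), idx.get('name')) == (ddoc, name) \
-                    and idx.get('build_status') == 'active':
-                return
-        time.sleep(1)
-```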
-
-## HTTP API deprecations
-
-None.
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list discussion](https://lists.apache.org/thread.html/b614d41b72d98c7418aa42e5aa8e3b56f9cf1061761f912cf67b738a@%3Cdev.couchdb.apache.org%3E)
-
-# Acknowledgements
-
-Thanks to the following for participating in the design discussion:
-
-- @kocolosk
-- @willholley
-- @janl
-- @alexmiller-apple
diff --git a/rfcs/007-background-jobs.md b/rfcs/007-background-jobs.md
deleted file mode 100644
index a61420a..0000000
--- a/rfcs/007-background-jobs.md
+++ /dev/null
@@ -1,347 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Background jobs with FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-This document describes a data model, implementation, and an API for running
-CouchDB background jobs with FoundationDB.
-
-## Abstract
-
-CouchDB background jobs are used for things like index building, replication
-and couch-peruser processing. We present a generalized model which allows
-creation, running, and monitoring of these jobs.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-
-## General Concepts
-
-In the discussion below a job is considered to be an abstract unit of work. It
-is identified by a `JobId` and has a `JobType`. Client code creates a job which
-is then is executed by a job processor. A job processor is language-specific
-execution unit that runs the job. It could be an Erlang process, a thread, or
-just a function.
-
-The API used to create jobs is called the `Job Creation API` and the API used
-by the job processors to run jobs is called the `Job Processing API`.
-
-### Job States
-
-Jobs in the system can be in 3 states. After a job is added and
-is waiting to run, the job is considered to be `pending`. A job executed by
-a job processor is considered to be `running`. When a job is neither `running`,
-nor `pending`, it is considered to be `finished`. This is the state transition
-diagram:
-
-```
-         +------------>+
-         |             |
-         |             v
- -->[PENDING]     [RUNNING]--->[FINISHED]
-         ^             |           |
-         |             v           |
-         +-------------+<----------+
-```
-
-
-
-### Typical API Usage
-
-The general pattern of using this API might look like:
-
-  * Job creators:
-    - Call `add/4,5` to add a job
-    - Call `remove/3` to remove it
-
-  * Job processors:
-    - Call `accept/1,2` and wait until it gets a job to process.
-    - Periodically call `update/2,3` to prevent the job from being re-enqueued
-      due to idleness.
-    - When done running a job, call `finish/2,3`
-
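-Sketched as a worker loop (Python-style pseudocode over a hypothetical
-`couch_jobs` wrapper that mirrors the Erlang API above and manages the `Tx`
-argument internally; only `accept`, `update`, and `finish` come from this RFC):
-
-```python
-def run_worker(couch_jobs, job_type, do_one_step):
-    # Block until a pending job of our type becomes available (accept/1,2).
-    job = couch_jobs.accept(job_type)
-    while True:
-        done, job_data = do_one_step(job)
-        if done:
-            # finish/2,3 records the final result; the job becomes `finished`.
-            couch_jobs.finish(job, job_data)
-            return
-        # update/2,3 doubles as a heartbeat: it must run at least as often
-        # as the type timeout, or the job is re-enqueued for another worker.
-        if couch_jobs.update(job, job_data) == 'halt':
-            return  # job was removed or taken over; stop running it
-```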
-
-### Job Creation API
-
-```
-add(Tx, Type, JobId, JobData[, ScheduledTime]) -> ok | {error, Error}
-```
- - Add a job to be executed by a job processor
-   - `JobData` is a map with job type-specific data in it. It MAY contain any
-     data as long as it can be properly encoded as JSON.
-   - `ScheduledTime` is an optional parameter to schedule the job to be executed
-     at a later time. The format is integer seconds since the UNIX epoch.
-   - If the job with the same `JobId` exists:
-      * If it is `pending`, then the `ScheduledTime` is updated.
-      * If it is `running` then the job is flagged to be resubmitted when it finishes running.
-      * If it is `finished` then it will be re-enqueued as `pending`
-
-```
-remove(Tx, Type, JobId) -> ok | {error, Error}
-```
- - Remove a job. If it is running, it will be stopped.
-
-```
-get_job_data(Job) -> {ok, JobData} | {error, Error}
-```
- - Get `JobData` associated with the job.
-
-```
-get_job_state(Job) -> {ok, pending | running | finished} | {error, Error}
-```
- - Get the job's state.
-
-```
-set_type_timeout(Type, TimeoutSec) -> ok
-```
-
- - Set the activity timeout for a job type. This function needs to be called
-   once for each job type before any job of that type is added.
-
-```
-get_type_timeout(Type)  -> {ok, TimeoutSec} | {error, Error}
-```
-
- - Get the type timeout for a job type.
-
-```
-subscribe(Type, JobId) -> {ok, SubscriptionId, JobState}
-```
-
- - Subscribe to receive job state updates. Notifications can be received using
- the `wait/2,3` calls.
-
-```
-unsubscribe(SubscriptionId) -> ok
-```
- - Unsubscribe from receiving job state updates.
-
-```
-wait(SubscriptionId, Timeout) -> {Type, JobId, JobState} | timeout
-wait([SubscriptionId], Timeout) -> {Type, JobId, JobState} | timeout
-
-```
- - Receive subscription notification updates from one or more subscriptions.
-
-```
-wait(SubscriptionId, Type, Timeout) -> {Type, JobId, JobState} | timeout
-wait([SubscriptionId], Type, Timeout) -> {Type, JobId, JobState} | timeout
-
-```
- - Receive subscription notification updates for one particular state only.
-   Updates for any other state will be ignored. This function can be used, for
-   example, to wait until a job has finished running.
-
-
-### Job Processing API
-
-```
-accept(Type[, OptionsMap]) -> {ok, Job} | {error, Error}
-```
-
- - Get a `pending` job and start running it. `OptionsMap` is a map that MAY
-   have these parameters:
-    * `no_schedule` = `true` | `false` Use a more optimized dequeueing strategy
-      if time-based scheduling is not used and job IDs are known to start with
-      a random looking (UUID-like) prefix.
-    * `max_sched_time` = `SecondsSinceEpoch` : Only accept jobs which have been
-      scheduled before or at `SecondsSinceEpoch` UNIX time.
-    * `timeout` = `TimeoutMSec` : Maximum timeout to wait when there are no
-      pending jobs available. `0` means don't wait at all and return `{error,
-      not_found}` immediately, effectively making `accept/1,2` non-blocking.
-
-
-```
-update(Tx, Job[, JobData]) -> {ok, Job} | {error, halt | Error}
-
-```
- - This MAY be called to update a job's `JobData`. It MUST be called at least
-   as often as the configured timeout value for the job’s type. Not doing this
-   will result in the job being re-enqueued. If `halt` is returned, the job
-   processor MUST stop running the job. Job processors MUST call `update/2,3`
-   in any write transactions it performs in order to guarantee mutual exclusion
-   that at most one job processor is executing a particular job at a time.
-
-```
-finish(Tx, Job[, JobData]) -> ok | {error, halt | Error}
-```
- - Called by the job processor when it has finished running the job. The
-   `JobData` parameter MAY contain a final result. If `halt` is returned, it
-   means that the `JobData` value wasn't updated. Job processors MUST call
-   `update/2,3` or `finish/2,3` in any write transactions it performs in order
-   to guarantee mutual exclusion that at most one job processor is executing a
-   particular job at a time.
-
-```
-resubmit(Tx, Job[, ScheduledTime]) -> {ok, Job} | {error, Error}
-```
- - Mark the job for resubmission. The job won't be re-enqueued until
-   `finish/2,3` is called.
-
-```
-is_resubmitted(Job) -> true | false
-```
- - Check if the job object was marked for resubmission. The job processor MAY
-   call this function on the `Job` object that gets returned from the
-   `update/2,3` function to determine if the job creator had requested the job
-   to be resubmitted. The job won't actually be re-enqueued until the
-   `finish/2,3` function is called.
-
-# Framework Implementation Details
-
-This section discusses how some of the framework functionality is implemented.
-
-All the coordination between job creation and job processing is done via
-FoundationDB. There is a top level `"couch_jobs"` subspace. All the subspaces
-mentioned below will be under this subspace.
-
-Each job managed by the framework will have an entry in the main `jobs table`.
-Pending jobs are added to a `pending queue` subspace. When they are
-accepted by a job processor, the jobs are removed from the pending queue and added
-to the `active jobs` subspace.
-
-Job states referenced in the API section are essentially defined based on the
-presence in any of these subspaces:
-
- * If a job is in the `pending queue` it is considered `pending`
- * If a job is in the `active jobs` subspace, then it is `running`
- * If a job is not `pending` or `running` then it is considered `finished`
-
-### Activity Monitor
-
-Job processors may suddenly crash and stop running their jobs. In that case the
-framework will automatically make those jobs `pending` after a timeout. That
-ensures the jobs continue to make progress. To avoid getting re-enqueued as
-`pending` due to the timeout, each job processor must periodically call the
-`update/2,3` function. That functionality is implemented by the `activity
-monitor`. It periodically watches a per-type versionstamp-ed key, then scans
-the `active jobs` subspace for any `running` jobs which haven't updated their
-entries during the timeout period.
-
-### Subscription Notifications
-
-Subscription notifications are managed separately for each job type. They use
-a per-type versionstamp-ed watch to monitor which jobs have updated since
-the last time it delivered notifications to the subscribers.
-
-### Data Model
-
- * `("couch_jobs", "data", Type, JobId) = (Sequence, JobLock, ScheduledTime, Resubmit, JobData)`
- * `("couch_jobs", "pending", Type, ScheduledTime, JobId) = ""`
- * `("couch_jobs", "watches_pending", Type) = Sequence`
- * `("couch_jobs", "watches_activity", Type) = Sequence`
- * `("couch_jobs", "activity_timeout", Type) = ActivityTimeout`
- * `("couch_jobs", "activity", Type, Sequence) = JobId`
-
-
-### Job Lifecycle Implementation
-
-This section describes how the framework implements some of the API functions.
-
- - `add/4,5` :
-   * Add the new job to the main jobs table.
-   * If a job with the same `JobId` exists, resubmit the job.
-   * Update `"pending"` watch for the type with a new versionstamp and bump its
-     counter.
-   * `JobLock` is set to `null`.
-
- - `remove/3` :
-   * Job is removed from the main jobs table.
-   * Job processor during the next `update/2,3` call will get a `halt` error
-     and know to stop running the job.
-
- - `accept/1,2` :
-   * Generate a unique `JobLock` UUID.
-   * Attempt to dequeue the item from the pending queue, then assign it the
-     `JobLock` in the jobs table.
-   * Create an entry in the `"activity"` subspace.
-   * If there are no pending jobs, get a watch for the `"pending"` queue and
-     wait until it fires, then try again.
-
- - `update/2,3`:
-   * If job is missing from the main jobs table return `halt`.
-   * Check if `JobLock` matches, otherwise return `halt`.
-   * Delete old `"activity"` sequence entry.
-   * Maybe update `JobData`.
-   * Create a new `"activity"` sequence entry and in main job table.
-   * Update `"watches"` sequence for that job type.
-
- - `finish/2,3`:
-   * If job is missing from the main jobs table return `halt`.
-   * Check if `JobLock` matches, otherwise return `halt`.
-   * Delete old `"activity"` sequence entry.
-   * If `Resubmit` field is `true`, re-enqueue the job, and set `Resubmit` to `false`.
-   * Set job table's `JobLock` to `null`
-
- - `resubmit/2,3`:
-   * Set the `Resubmit` field to `true`.
-   * The job will be re-enqueued when `finish/2,3` is called.
-
-
-# Advantages and Disadvantages
-
-The main advantage is having a central way to coordinate batch processing
-across a cluster, with a single, unified API.
-
-
-## Possible Future Extensions
-
-Since all job keys and values are just FDB tuples and JSON encoded objects, in
-the future it might be possible to accept external jobs, not just jobs defined
-by the CouchDB internals. Also, since workers could be written in any language
-as long as they can talk to the FDB cluster and follow the behavior described
-in the design, it opens the possibility of custom (user-defined) workers
-of different types. But that is out of scope for the current RFC discussion.
-
-# Key Changes
-
- - New job execution framework
- - A single global job queue for each job type
- - An activity monitor to ensure jobs continue to make progress
-
-## Applications and Modules Affected
-
-Replication, indexing, couch-peruser
-
-## HTTP API Additions
-
-None. However, in the future, it might be useful to have an API to query and
-monitor the state of all the queues and workers.
-
-## HTTP API Deprecations
-
-None have been identified.
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list discussion](https://lists.apache.org/thread.html/9338bd50f39d7fdec68d7ab2441c055c166041bd84b403644f662735@%3Cdev.couchdb.apache.org%3E)
-
-# Co-authors
-  - @davisp
-
-# Acknowledgments
- - @davisp
- - @kocolosk
- - @garrensmith
- - @rnewson
- - @mikerhodes
- - @sansato
diff --git a/rfcs/008-map-indexes.md b/rfcs/008-map-indexes.md
deleted file mode 100644
index 991fa2b..0000000
--- a/rfcs/008-map-indexes.md
+++ /dev/null
@@ -1,243 +0,0 @@
-# Map indexes RFC
-
----
-
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Map indexes on FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-## Introduction
-
-This document describes the data model and index management for building and querying map indexes.
-
-## Abstract
-
-Map indexes will have their data model stored in FoundationDB. Each index is grouped via its design doc's view signature. An index will store the index's key/values, size of the index and the last sequence number from the changes feed used to update the index.
-
-Indexes will be built using the background jobs API, `couch_jobs`, and will use the changes feed. There will be new size limitations on keys (10KB) and values (100KB) that are emitted from a map function.
-
-## Requirements Language
-
-[note]: # " Do not alter the section below. Follow its instructions. "
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-`Sequence`: a 13-byte value formed by combining the current `Incarnation` of the database and the `Versionstamp` of the transaction. Sequences are monotonically increasing even when a database is relocated across FoundationDB clusters. See [RFC002](LINK TBD) for a full explanation.
-
-`View Signature`: An md5 hash of the views, options, and view language defined in a design document.
-`Interactive view`: A view updated in the same transaction that the document is added/updated in the database.
-
----
-
-## Detailed Description
-
-CouchDB views are used to create secondary indexes in a database. An index is defined by creating map/reduce functions in a design document. This document describes building the map indexes on top of FoundationDB (FDB).
-There are two ways to build a secondary index: via a background job or via in the same transaction that the document is added to the database. Building the index via the background job is the default way that a map index will be build. An example map function to do this is shown below:
-
-```json
-{
-  "_id": "_design/design-doc-id",
-  "_rev": "1-8d361a23b4cb8e213f0868ea3d2742c2",
-  "views": {
-    "map-view": {
-      "map": "function (doc) {\n  emit(doc._id, 1);\n}"
-    }
-  },
-  "language": "javascript"
-}
-```
-
-Adding `interactive: true` to the option field of an index will configure the index to be updated in the same transaction that the document is added to the database. This functionality has primarily been added to support Mango indexes but can work with map indexes. An example of a map index configured is shown below:
-
-```json
-{
-  "_id": "_design/design-doc-id",
-  "_rev": "1-8d361a23b4cb8e213f0868ea3d2742c2",
-  "views": {
-    "map-view": {
-      "map": "function (doc) {\n  emit(doc._id, 1);\n}"
-    }
-  },
-  "language": "javascript",
-  "options": [{ "interactive": true }]
-}
-```
-
-Interactive views have a two-step build process. When an index is added to the database, a background job is created to build the index up to the change sequence (the creation versionstamp) at which the index was added. Any new documents added after the index was added will be indexed in the transaction that the document is added to the database. If a query for an interactive view is received before the background job is complete, CouchDB will wait until the background job is com [...]
-
-### Data model
-
-The data model for a map index is:
-
-```
-% View build sequence - The change sequence that the index has been updated to.
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_UPDATE_SEQ, <view_signature>) = Sequence
-
-% Interactive View Creation Versionstamp
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_CREATION_VS, <signature>) = Versionstamp
-% Interactive View Build Status
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_BUILD_STATUS, <signature>) = INDEX_BUILDING | INDEX_READY
-
-% Number of rows in the index
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_ROW_COUNT, ?VIEW_ID_INFO, <view_id>, <view_signature>) = <row_count>
-% Key/Value size of index
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_KV_SIZE, <view_signature>, <view_id>) = <kv_size>
-
-% Id index, used to track record what keys are in the index for each document
-(<database>, ?DB_VIEWS, ?VIEW_DATA, <view_signature>, ?VIEW_ID_RANGE, <_id>, <view_id>) = [total_keys, total_size, unique_keys]
-% The key/values for the index
-(<database>, ?DB_VIEWS, ?VIEW_DATA, <view_signature>, ?VIEW_MAP_RANGE, <view_id>, {<key>, <_id>}, <dupe_id>) = {<emitted_key>, <emitted_value>}
-```
-
-Each field is defined as:
-
-- `database` is the specific database namespace
-- `?DB_VIEWS` is the views namespace.
-- `<view_signature>` is the design documents `View Signature`
-- `?VIEW_INFO` is the view information namespace
-- `?VIEW_UPDATE_SEQ` is the change sequence namespace
-- `?VIEW_ID_RANGE` is the map id index namespace
-- `?VIEW_MAP_RANGE` is the map namespace
-- `_id` is the document id
-- `view_id` id of a view defined in the design document
-- `key` is the encoded emitted row key from a map function
-- `count` is a value that is incremented to allow duplicate keys to be emitted for a document
-- `emitted_key` is the emitted key from the map function
-- `emitted_value` is the emitted value from the map function
-- `row_count` number of rows in the index
-- `kv_size` size of the index
-- `total_keys` is the number of keys emitted by a document
-- `total_size` is the size of the key/values emitted by the document
-- `unique_keys` is the unique keys emitted by the document
-- `dupe_id` the duplication id to allow multiple documents to emit a key/value
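-
-To make the layout concrete, the row written for `emit(doc._id, 1)` on a
-document `"doc-a"` might be packed as below (Python bindings; the integer
-constants stand in for the `?`-prefixed values and the signature is a dummy):
-
-```python
-import fdb
-fdb.api_version(600)
-
-# Stand-ins for the ?-constants and identifiers used above.
-DB_VIEWS, VIEW_DATA, VIEW_MAP_RANGE = 100, 1, 2
-db_prefix, sig, view_id, dupe_id = b'db1', bytes(16), 1, 0
-
-# (<database>, ?DB_VIEWS, ?VIEW_DATA, <sig>, ?VIEW_MAP_RANGE, <view_id>,
-#  {<key>, <_id>}, <dupe_id>) = {<emitted_key>, <emitted_value>}
-key = db_prefix + fdb.tuple.pack(
-    (DB_VIEWS, VIEW_DATA, sig, VIEW_MAP_RANGE, view_id,
-     ('doc-a', 'doc-a'), dupe_id))
-value = fdb.tuple.pack(('doc-a', 1))
-```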
-
-The process flow for a document to be indexed in the background is as follows:
-
-1. FDB Transaction is started
-1. Read the documents from the changes feed (the number of documents to read at one time is configurable; the default is 100)
-1. The document is passed to the javascript query server and run through all the map functions defined in the design document
-1. The view's sequence number is updated to the sequence of the document in the changes feed.
-1. If the document was deleted and was previously in the view, the previous keys for the document are read from `?VIEW_ID_RANGE` and then cleared from the `?VIEW_MAP_RANGE`. The row count and size count are also decreased.
-1. If the document is being updated and was previously added to the index, then the previous keys for the document are read from `?VIEW_ID_RANGE`, cleared from the `?VIEW_MAP_RANGE`, and the index is updated with the latest emitted keys and values.
-1. The emitted keys are stored in the `?VIEW_ID_RANGE`
-1. The emitted keys are encoded and then added to the `?VIEW_MAP_RANGE`, with the emitted keys and values stored
-1. The `?VIEW_ROW_COUNT` is incremented
-1. The `?VIEW_KV_SIZE` is increased
-
-### Emitted Keys and Values Limits
-
-If we have a design document like the following:
-
-```js
-{
-  "_id": "_design/design-doc-id",
-  "_rev": "1-8d361a23b4cb8e213f0868ea3d2742c2",
-  "views": {
-    "map-view": {
-      "map": "function (doc) {\n  emit(doc._id, doc.location);\n  emit([doc._id, doc.value], doc.name);\n}"
-    }
-  },
-  "language": "javascript",
-  "options": [{ "interactive": true }]
-}
-```
-
-Each emit would be a new key/value row in the map index. Each key row cannot exceed 8 KB and each value row cannot exceed 64 KB.
-If a document is emitted as a value, that document is not allowed to exceed 64 KB.
-
-### Key ordering
-
-FoundationDB orders keys by byte value, which is not how CouchDB orders keys. To maintain CouchDB's view collation, a type value will need to be prepended to each key so that the correct sort order of null < boolean < numbers < strings < arrays < objects is maintained.
-
-In CouchDB 2.x, strings are compared via ICU. The way to do this with FoundationDB is that for every string an ICU sort string will be generated upfront and used for index ordering instead of the original string.
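-
-A sketch of the type-tag idea (plain Python; the tag values and the ICU stub
-are illustrative, not the final encoding):
-
-```python
-# Tags chosen so that byte-wise FDB ordering matches CouchDB collation:
-# null < boolean < numbers < strings < arrays < objects.
-NULL, BOOL, NUMBER, STRING, ARRAY, OBJECT = range(6)
-
-def icu_sort_string(s):
-    return s  # placeholder for a real ICU collation sort key
-
-def collation_key(value):
-    if value is None:
-        return (NULL,)
-    if isinstance(value, bool):  # must be tested before int
-        return (BOOL, value)
-    if isinstance(value, (int, float)):
-        return (NUMBER, value)
-    if isinstance(value, str):
-        # Generate the ICU sort string up front and index it instead of
-        # the original string.
-        return (STRING, icu_sort_string(value))
-    if isinstance(value, list):
-        return (ARRAY,) + tuple(collation_key(v) for v in value)
-    return (OBJECT,) + tuple((icu_sort_string(k), collation_key(v))
-                             for k, v in value.items())
-```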
-
-### Index building
-
-An index will be built and updated via a [background job worker](https://github.com/apache/couchdb-documentation/blob/master/rfcs/007-background-jobs.md). When a request for a view is received, the request process will add a job item onto the background queue for the index to be updated. A worker will take the item off the queue and update the index. Once the index has been built, the background job server will notify the request that the index is up to date. The request process will the [...]
-
-Initially, the building of an index will be a single worker running through the changes feed and creating the index. In the future, we plan to parallelise that work so that multiple workers could build the index at the same time. This will reduce build times.
-
-### View clean up
-
-When a design document is changed, new indexes will be built and grouped under a new `View Signature`. The old map indexes will still be in FDB. Cleaning them up will be supported via the existing [/db/\_view_cleanup](https://docs.couchdb.org/en/latest/api/database/compact.html#db-view-cleanup) endpoint.
-
-A future optimisation would be to automate this and have CouchDB monitor design doc changes and then clean up old view indexes via a background worker.
-
-### Stale = "ok" and stable = true
-
-With the consistency guarantees CouchDB will get from FDB, `stable = true` will no longer be an option that CouchDB supports, and so the argument will be ignored. Similarly, `stale = "ok"` will now be translated to `update = false`.
-
-### Size limits
-
-- The sum of all keys emitted for a document cannot exceed 64 KB
-- Emitted keys will not be able to exceed 8 KB
-- Values cannot exceed 64 KB
-- There could be rare cases where the number of key-value pairs emitted by a map function could lead to a transaction either exceeding 10 MB in size, which isn't allowed, or exceeding 5 MB, which impacts the performance of the cluster. In this situation, CouchDB will return an error.
-
-These limits are the hard limits imposed by FoundationDB. We will have to set the user imposed limits to lower than that as we store more information than just the user keys and values.
-
-## Advantages
-
-- Map indexes will work on FoundationDB with the same behaviour as current CouchDB 1.x
-- Options like `stale = "ok"` and `stable = true` will no longer be needed
-
-## Disadvantages
-
-- Size limits on key and values
-
-## Key Changes
-
-- Indexes are stored in FoundationDB
-- Indexes will be built via the background job queue
-- ICU sort strings will be generated ahead of time for each key that is a string
-
-## Applications and Modules affected
-
-- couch_mrview will be removed and replaced with a new couch_views OTP application
-
-## HTTP API additions
-
-The API will remain the same.
-
-## HTTP API deprecations
-
-- `stable = true` is no longer supported
-- `stale = "ok"` is now converted to `update = false`
-- reduce functions are not supported in this RFC
-
-## Security Considerations
-
-None have been identified.
-
-## Future improvements
-
-Two future improvements we could look to do that builds upon this work:
-
-- Better error handling for user functions. Currently, if a document fails when run through the map function, a user has to read the logs to discover that. We could look at adding an error-index and a new API endpoint.
-- Parallel building of the index. In this RFC, the index is only built sequentially by one index worker. In the future, it would be nice to split that work up and parallelize the building of the index.
-
-## References
-
-- TBD link to background tasks RFC
-- [Original mailing list discussion](https://lists.apache.org/thread.html/5cb6e1dbe9d179869576b6b2b67bca8d86b30583bced9924d0bbe122@%3Cdev.couchdb.apache.org%3E)
-
-## Acknowledgements
-
-Thanks to everyone that participated in the mailing list discussion
-
-- @janl
-- @kocolosk
-- @willholley
-- @mikerhodes
diff --git a/rfcs/009-exunit.md b/rfcs/009-exunit.md
deleted file mode 100644
index d1a76be..0000000
--- a/rfcs/009-exunit.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Use ExUnit testing framework for unit testing'
-labels: rfc, discussion
-assignees: ''
-
----
-
-# Introduction
-
-With the upgrade of the supported Erlang version and the introduction of Elixir
-into our integration test suite, we have an opportunity to replace the currently
-used eunit (for new tests only) with the Elixir-based ExUnit.
-
-## Abstract
-
-The eunit testing framework has a number of issues which make it very hard to use.
-We already use an alternative testing framework, ExUnit, for integration tests.
-The proposal is to extend the use of ExUnit to CouchDB unit tests as well.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-[TIP]:  # ( Provide a list of any unique terms or acronyms, and their definitions here.)
-
----
-
-# Detailed Description
-
-The eunit testing framework is very hard to maintain. In particular, it has the
-following problems:
-- the process structure is designed in such a way that failure in setup or teardown
-  of one test affects the execution environment of subsequent tests, which makes
-  it really hard to locate where the problem is coming from.
-- inline tests in the same module as the functions they test might be skipped
-- incorrect usage of ?assert vs ?_assert is not detectable since it makes tests pass
-- there is a weird (and hard to debug) interaction when used in combination with meck
-   - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
-   - https://github.com/eproxus/meck/issues/61
-   - meck:unload() must be used instead of meck:unload(Module)
-- teardown is not always run, which affects all subsequent tests
-- grouping of tests is tricky
-- it is hard to group tests so individual tests have meaningful descriptions
-- the eunit implementation of `{with, Tests}` doesn't detect the test name correctly
-- it is hard to skip certain tests when needed
-
-ExUnit shouldn't have these problems:
-- on_exit function is reliable in ExUnit
-- it is easy to group tests using `describe` directive
-- code generation is trivial, which makes it possible to generate tests from a formal spec (if/when we have one)
-
-# Advantages and Disadvantages
-
-## Advantages
-
-- Modern testing framework
-- Easy code generation of tests from a formal spec
-- Reliability of teardown functions
-- Increased productivity due to smart test scheduling (run only failing tests)
-- Unified style enforced by code linter
-- Possibly more contributions from Elixir community
-- We already use ExUnit for integration tests
-- Support for test tags which could help us to introduce scheduling of tests ([see #1885](https://github.com/apache/couchdb/issues/1885)).
-  We could run tests in the optimal order:
-    - recently modified
-    - couch_db API based
-    - fabric API based
-    - http API based
-    - performance tests
-    - property based tests
-
-## Disadvantages
-
-- New language & tooling to learn
-- We make Elixir a required dependency (currently it is somewhat optional)
-
-# Key Changes
-
-- move all eunit tests from `<app>/test/*.erl` into `<app>/test/eunit/*.erl`
-- add `make exunit` target to Makefile
-- move `.credo.exs` (linter configuration) into root of a project
-- create `<app>/test/exunit/` directory to hold new test suites
-- add different test helpers under `test/elixir/lib`
-- add `mix.exs` into root of the project
-
-## Applications and Modules affected
-
-There is a possibility that we would need to modify the content of `test/elixir/lib`
-to provide a similar experience in both the integration and unit test frameworks.
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-Production code is not updated. Therefore there is no security risk.
-
-# References
-
-- [Discussion on mailing list](https://lists.apache.org/thread.html/f842ca637f7cb06b34af699a793cab0a534e65970172e8117bf0b228@%3Cdev.couchdb.apache.org%3E)
-
-# Acknowledgements
-
-Thanks to everyone who participated on the mailing list discussion
-
-- @davisp
-- @wohali
-- @garrensmith
\ No newline at end of file
diff --git a/rfcs/011-opentracing.md b/rfcs/011-opentracing.md
deleted file mode 100644
index bf4a059..0000000
--- a/rfcs/011-opentracing.md
+++ /dev/null
@@ -1,236 +0,0 @@
----
-name: Opentracing support
-about: Adopt industry standard distributed tracing solution
-title: 'Opentracing support'
-labels: rfc, discussion
-assignees: ''
-
----
-
-Adopt an industry-standard, vendor-neutral API and instrumentation for distributed tracing.
-
-# Introduction
-
-Collecting profiling data is very tricky at the moment.
-Developers have to run generic profiling tools which are not aware of CouchDB specifics.
-This makes it hard to do performance optimization work. We need a tool which would
-allow us to get profiling data from specific points in the codebase.
-This means code instrumentation.
-
-## Abstract
-
-There is the https://opentracing.io/ project, which provides a vendor-neutral API and instrumentation
-for distributed tracing. In Erlang it is implemented by one of the following libraries:
- - [otters](https://github.com/project-fifo/otters) an extended and more performant version of `otter`
- - [opentracing-erlang](https://github.com/opentracing-contrib/opentracing-erlang) the `otter` version donated to the opentracing project.
- - [original otter](https://github.com/Bluehouse-Technology/otter)
- - [passage](https://github.com/sile/jaeger_passage)
- 
-The opentracing philosophy is founded on three pillars:
-- Low overhead: the tracing system should have a negligible performance impact on running services.
-- Application-level transparency: programmers should not need to be aware of the tracing system
-- Scalability
-
-The main addition is to include one of the above-mentioned libraries and add instrumentation points into the codebase.
-In the initial implementation, a new span would be started on every HTTP request.
-The following HTTP headers would be used to link tracing span with application specific traces.
-- X-B3-ParentSpanId
-- X-B3-TraceId
-- b3
-
-More information about the use of these headers can be found [here](https://github.com/openzipkin/b3-propagation).
-Open tracing [specification](https://github.com/opentracing/specification/blob/master/specification.md) 
-has a number of [conventions](https://github.com/opentracing/specification/blob/master/semantic_conventions.md) 
-which would be good to follow.
-
-In a nutshell the idea is:
-- Take the reference to the parent span from one of the supported headers and pass it to the `span_start` call (a header parsing sketch follows this list).
-- Construct the action name to use in the `span_start` call.
-- Call `span_start` from `chttpd:handle_request_int/1`.
-- Pass span in `#httpd{}` record
-- Pass `trace_id` and `parent_span_id` through the stack (extend records if needed)
-- Attach span tags to better identify trace events.
-- Attach span logs at important instrumentation points.
-- Forward spans to external service.
-
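-A minimal sketch of that flow, assuming a `couch_trace` interface module (see
-the detailed description below) with illustrative function names and a new
-span field in the `#httpd{}` record:
-
-```erlang
-handle_request_int(MochiReq) ->
-    %% Sketch only: read the B3 headers (if present) and start the root span.
-    TraceId = MochiReq:get_header_value("X-B3-TraceId"),
-    ParentSpanId = MochiReq:get_header_value("X-B3-ParentSpanId"),
-    Span = couch_trace:span_start(<<"http.request">>, TraceId, ParentSpanId),
-    couch_trace:span_tag(Span, 'http.method', MochiReq:get(method)),
-    HttpReq = #httpd{mochi_req = MochiReq, span = Span},
-    %% ... existing request handling, passing HttpReq down the stack ...
-    couch_trace:span_finish(Span).
-```
-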
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-- [span](https://github.com/opentracing/specification/blob/1.1/specification.md#the-opentracing-data-model): The "span"
-  is the primary building block of a distributed trace, representing an individual unit of work done in a distributed system.
-  Each Span encapsulates the following state:
-   - An operation name
-   - A start timestamp
-   - A finish timestamp
-   - A set of zero or more key:value `Span Tags`. 
-   - A set of zero or more structured logs (key:value `Span Logs`).
-   - A `SpanContext`
-   - `References` to zero or more causally-related `Spans`
-
----
-
-# Detailed Description
-
-## Selection of a library
-
-As mentioned earlier, there are two flavours of libraries, none of which is perfect for all use cases.
-The biggest differences between `otters` and `passage` are:
-
-| Feature                        | otters      | passage                   |
-| ------------------------------ | ----------- | ------------------------- |
-| reporting protocol             | http        | udp                       |
-| filtering                      | custom DSL  | sampling callback module  |
-| reporter                       | zipkin only | jaeger or plugin          |
-| functional API                 |      +      |             +             |
-| process dictionary             |      +      |             +             |
-| process based span storage     |      +      |             -             |
-| send event in batches          |      +      |             -             |
-| sender overload detection      |      -      |             +             |
-| report batches based on        | timer       | spans of single operation |
-| design for performance         |      +      |             -             |
-| design for robustness at scale |      -      |             +             |
-| counters                       |      +      |             -             |
-| sampling based on duration     |      +      |             -             |
-| number of extra dependencies   |      1      |             3             |
-
-To allow future replacement of the tracing library, it would be desirable to create an interface module, `couch_trace`.
-The `otters` library would be used for the first iteration.
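-
-As a sketch, such a module might look as follows, assuming the functional API
-of `otters` (`otters:start/1,3`, `otters:tag/3`, `otters:log/2`,
-`otters:finish/1`); the wrapper names are illustrative, not final:
-
-```erlang
--module(couch_trace).
--export([span_start/3, span_tag/3, span_log/2, span_finish/1]).
-
-%% Thin wrappers so the underlying library (otters in the first iteration)
-%% can be swapped out later without touching call sites.
-
-span_start(Name, undefined, _ParentSpanId) ->
-    otters:start(Name);
-span_start(Name, TraceId, ParentSpanId) ->
-    otters:start(Name, TraceId, ParentSpanId).
-
-span_tag(Span, Key, Value) ->
-    otters:tag(Span, Key, Value).
-
-span_log(Span, Text) ->
-    otters:log(Span, Text).
-
-span_finish(Span) ->
-    otters:finish(Span).
-```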
-
-## Configuration
-
-The `otters` library uses the application environment to store its configuration.
-It also has a facility to compile the filtering DSL into a beam module.
-The filtering DSL looks like the following: `<name>([<condition>]) -> <action>.`.
-The safety of the DSL compiler is unknown, so modifying tracing settings via configuration over HTTP wouldn't be possible.
-The otter-related section of the config, `tracing.filters`, would be protected by BLACKLIST_CONFIG_SECTIONS.
-The configuration of tracing would only be allowed from remsh or by modifying the ini file.
-The configuration for otter filters would be stored in couch_config as follows:
-```
-[tracing.filters]
-
-<name> = ([<condition>]) -> <action>.
-```
-
-## Tracing related HTTP headers
-
-The following headers would be supported on the request:
-- X-B3-ParentSpanId : 16 lower-hex characters
-- X-B3-TraceId      : 32 lower-hex characters
-- X-B3-SpanId       : 16 lower-hex characters
-- b3 : {TraceId}-{SpanId}-{SamplingState}-{ParentSpanId}
-  - the `SamplingState` would be ignored
-
-The following headers would be supported on the response:
-- X-B3-ParentSpanId : 16 lower-hex characters
-- X-B3-TraceId      : 32 lower-hex characters
-- X-B3-SpanId       : 16 lower-hex characters
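-
-As a sketch, the combined `b3` header could be parsed as follows (a
-hypothetical helper, not part of any existing module):
-
-```erlang
-%% Parse "b3: {TraceId}-{SpanId}-{SamplingState}-{ParentSpanId}".
-%% SamplingState and ParentSpanId are optional; SamplingState is ignored.
-parse_b3(Header) when is_binary(Header) ->
-    case binary:split(Header, <<"-">>, [global]) of
-        [TraceId, SpanId, _Sampling, ParentSpanId] ->
-            {ok, {TraceId, SpanId, ParentSpanId}};
-        [TraceId, SpanId, _Sampling] ->
-            {ok, {TraceId, SpanId, undefined}};
-        [TraceId, SpanId] ->
-            {ok, {TraceId, SpanId, undefined}};
-        _ ->
-            {error, invalid_b3_header}
-    end.
-```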
-
-## Conventions
-
-The conventions below are based on [conventions from opentracing](https://github.com/opentracing/specification/blob/master/semantic_conventions.md#standard-span-tags-and-log-fields).
-All tags are optional, since they are just recommendations from OpenTracing to hint visualization and filtering tools.
-
-### Span tags
-
-| Span tag name    | Type    | Notes and examples                                  |
-| ---------------- | ------- | --------------------------------------------------- |
-| component        | string  | couchdb.<app> (e.g. couchdb.chttpd, couchdb.fabric) |
-| db.instance      | string  | for fdb-layer would be fdb connection string        |
-| db.type          | string  | for fdb-layer would be fdb                          |
-| error            | bool    | `true` if operation failed                          |
-| http.method      | string  | HTTP method of the request for the associated Span  |
-| http.status_code | integer | HTTP response status code for the associated Span   |
-| http.url         | string  | sanitized URL of the request in URI format          |
-| span.kind        | string  | Either `client` or `server` (RPC roles).            |
-| user             | string  | Authenticated user name                             |
-| db.name          | string  | Name of the accessed database                       |
-| db.shard         | string  | Name of the accessed shard                          |
-| nonce            | string  | Nonce used for the request                          |
- 
-
-### Log fields
-
-| Span log field name | Type    | Notes and examples                          |
-| ------------------- | ------- | ------------------------------------------- |
-| error.kind          | string  | The "kind" of an error (error, exit, throw) |
-| message             | string  | human-readable, one-line message            |
-| stack               | string  | A stack trace (\n between lines)            |
-
-## Multicomponent traces
-
-CouchDB has a complex architecture. Request handling crosses layer and component boundaries.
-Every component or layer would start a new span. It *MUST* specify its parent span in order
-for visualization tools to work. The value of a TraceId *MUST* be included in every span start.
-The value of TraceId and SpanId *MAY* be passed to FDB when
-[foundationdb#2085](https://github.com/apple/foundationdb/issues/2085) is resolved.
-
-## Roadmap
-
-- initial implementation as described in this document
-- extend rexi to pass traceid and parentspanid
-- redo otter configuration
-- add tracing to server initiated jobs (compaction, replication)
-- rewrite `otters_conn_zipkin:send_buffer/0` to make it more robust
-- switch `otters_conn_zipkin` from `thrift` to `gRPC`
-
-
-# Advantages and Disadvantages
-
-## Drawbacks
-
-Specifically for the `otters` library, there are the following concerns:
-- safety of configuration mechanism
-- the robustness of the zipkin sender
-
-## Advantages
-
-- Ability to forward tracing events to an external system for further analysis
-- Low overhead
-- Structured logging for span logs
-- Link all events to the same parent trace id
-
-# Key Changes
-
-- New configuration section
-- New dependencies
-- Additional HTTP headers
-- Additional fields in some records
-
-## Applications and Modules affected
-
-- chttpd
-- couch_trace (new module)
-
-## HTTP API additions
-
-Support for following headers would be added:
-- X-B3-ParentSpanId
-- X-B3-TraceId
-- b3
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-The security risk of injecting a malicious payload into the ini config is mitigated by placing the section into BLACKLIST_CONFIG_SECTIONS.
-
-# References
-
-- [opentracing specification](https://github.com/opentracing/specification/blob/master/specification.md)
-- https://opentracing.io/
-- https://www.jaegertracing.io/docs/1.14/
-- https://zipkin.io
-- [opentracing conventions](https://github.com/opentracing/specification/blob/master/semantic_conventions.md) 
-
-
-# Acknowledgements
-
-[TIP]:  # ( Who helped you write this RFC? )
diff --git a/rfcs/012-fdb-reduce.md b/rfcs/012-fdb-reduce.md
deleted file mode 100644
index b8b01e4..0000000
--- a/rfcs/012-fdb-reduce.md
+++ /dev/null
@@ -1,1096 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Reduce indexes on FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-## Introduction
-
-This document describes three possible ways to support CouchDB's reduce functionality on top of FoundationDB.
-The main focus will be on a Skip List algorithm as it has the most potential to support all the required functionality.
-
-## Abstract
-
-Reduce indexes allow users of CouchDB to perform aggregations on a map index. These aggregations need to be stored in FoundationDB in a way that is efficient for updating the index on document updates and when retrieving results for different reduce group levels.
-Three options are initially listed, with a skip list approach selected as the most viable option. A process flow for building, retrieving and updating a skip list based reduce index is described. Finally, the data model for using this with FoundationDB is shown.
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-No new terminology at this point.
-
----
-
-## Detailed Description
-
-Reduce indexes allow users of CouchDB to perform aggregations based on the key/values emitted from the map function.
-Much of the power of the reduce functionality that CouchDB currently supports comes from the fact that internally each map/reduce index is stored in a b+tree whose non-leaf nodes contain aggregations of the results from their children, allowing for efficient retrieval of those values. It is difficult to replicate that behavior exactly using FoundationDB. Therefore, to implement reduce indexes in FoundationDB, three possible ways are considered.
-
-### Option 1 - On the fly reduce calculation
-
-The simplest implementation is to perform the reduce function on the fly when a reduce query is requested.
-The map index will be read and the reduce aggregation performed on the keys as they are fetched from FoundationDB.
-This is by far the easiest to implement, but query performance will degrade as the database and index grow, and could reach a point where the reduce query simply stops working.
-
-### Option 2 - Precompute of all group-levels
-
-Another option is to precompute all group levels for a reduce function and store them as key/values in FoundationDB.
-This makes querying fast, as the results are already calculated. The first difficulty comes when updating the index.
-For the built-in `_sum` and `_count` reduce functions, a single reduce calculation can be run and then applied to all group levels.
-For any custom reduce function, along with `_stats` (`min` and `max` specifically) and `_approx_count_distinct`, updating each group level would be more complex, as it requires reading all keys and running them through the reduce functions for all group levels before storing the results back in FoundationDB.
-
-Another issue is that any query using startkey/endkey can be expensive, as we would have to perform aggregations on the startkey and endkey key ranges. Similarly, `group_level = 0` queries with a startkey or endkey would require a full aggregation of the keys in that range, which again could be expensive.
-
-### Option 3 - Skip list implementation
-
-The final option is using a skip list. A skip list can be thought of as a layered linked list. The bottom layer contains all the elements in an index. Each layer up from that has a reduced (pun intended) number of elements in that list. See figure 1 for a simple skip list layout.
-
-![Figure 1: Example of a skip list](images/SkExample1.png)
-*Figure 1:* Example of a skip list
-
-A skip list would make it easier to query an index using startkey/endkey, and more efficient than option 2 to update an index for all types of reduce functions.
-
-### Skip list implementation
-
-This section does a deep dive into how a skip list can be used to create, query and update a reduce index.
-To explore these situations, we will have the following design document defined.
-
-```js
-    {
-        "_id": "_design/reduce-example"
-        "views: {
-            "example": {
-                "map": function (doc) {
-                    emit([doc.year, doc.month, doc.day], 1);
-                },
-
-                "reduce": "_count"
-            }
-        }
-    }
-
-```
-
-And it emits the following key/value results for the reduce function:
-
-```js
-    [2017, 03, 1] = 1
-    [2017, 04, 1] = 1
-    [2017, 04, 1] = 1
-    [2017, 04, 15] = 1
-    [2017, 05, 1] = 1
-
-    [2018, 03, 1] = 1
-    [2018, 04, 1] = 1
-    [2018, 05, 1] = 1
-
-    [2019, 03, 1] = 1
-    [2019, 04, 1] = 1
-    [2019, 05, 1] = 1
-```
-
-#### Create
-
-To build the skip list, all keys will be added to level 0. When multiple of the same keys are emitted, the values are re-reduced before being added to level 0. Then, at each level up, a reduced number of keys will be added. For each level above 0, if a key/value is not added to that level, then that key's value is aggregated with the previous node in that row. Therefore each key/value node in a level is an aggregation of its key/value in level 0 and any key/values from the previous level that were not added to this level.
-
-See figure 2 for an example of the listed keys added to a skip list.
-
-![figure 2: The emitted reduce keys added to a reduce skip list](images/SkExample2.png)
-*figure 2:* The emitted reduce keys added to a reduce skip list
-
-##### Skip levels and Level Distribution
-
-The number of skip list levels will be made configurable, and the best number of levels will be determined via performance testing.
-The algorithm to distribute keys across levels will be:
-
-```js
-const MAX_LEVELS = 6;
-const LEVEL_FAN_POW = 4; // 2^X per level or (1 / 2^X) less than previous level
-
-const hashCalc = (key, level) => {
-    const keyHash = hashCode(JSON.stringify(key));
-    const out = (keyHash & ((1 << (level * LEVEL_FAN_POW)) - 1));
-    if (out !== 0) {
-        return false;
-    }
-
-    return true;
-}
-```
-
-The `hashCode` function will hash the key to an integer. This allows for a consistent and predictable distribution across levels.
-The `LEVEL_FAN_POW` will also be configurable.
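-
-For example, with `LEVEL_FAN_POW = 4`, a key is placed on level `N` only when
-the lowest `4 * N` bits of its hash are zero, so each level keeps roughly 1/16
-of the keys of the level below: a million keys at level 0 shrink to about
-62,500 at level 1 and about 3,900 at level 2.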
-
-#### Query
-
-From figure 2 above, we can see that for a reduce query with `group = true`, level 0 will be used to return all the exact keys.
-For a query with `group_level = 0`, the highest level can be used.
-If a `group_level > 1` is set for a reduce query, we need to traverse the skip list and aggregate the results before returning them to the user.
-
-For example, take figure 2 with a reduce query of `group_level = 2`. We start at level 4 and traverse down to level 3, comparing Node 0 and Node [2018, 03, 1]. They are not the same key for `group_level = 2`, so we need to move to a lower level. We would perform the same action of comparing the current node with the next node to see if they are the same key, until we find matching nodes or we get to level 0. In this case, we would reach level 0 and return Node [2017, 03, 1]. At Node [...]
-
-![figure 3: Traversing flow to return results for `group_level = 2`](images/SkExample3.png)
-*Figure 3:* Traversing flow to return results for `group_level = 2`
-
-A query with startkey/endkey would follow a similar process. Start at the highest level, traversing across until we exceed the startkey, then move down until we find the startkey or the nearest node to it. Then follow the above process of traversing the skip list, returning all results until we reach a node that is greater than or equal to the endkey.
-
-#### Update
-
-To update the reduce index, we will use the same map id index that keeps track of which keys are associated with a document. When a document is deleted, the previous keys will be removed from level 0. If the reduce function is `_sum` or `_count`, an atomic update is then performed for all nodes above level 0 that would have included the values for the deleted keys.
-
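-As a sketch, the `_sum`/`_count` fast path could use FoundationDB's atomic
-`ADD` mutation, assuming the affected values are stored as little-endian
-integers (which `ADD` requires); `covering_node_key/4` is a hypothetical
-helper that finds the node at a level whose aggregate includes the deleted
-key:
-
-```erlang
-%% Atomically apply a delta (e.g. -1 per deleted key for _count) to the
-%% node on every level above 0 that aggregated the deleted key's value.
-apply_delta(Tx, ViewPrefix, ReduceKey, Delta) ->
-    lists:foreach(fun(Level) ->
-        %% Hypothetical: last key at Level less than or equal to ReduceKey.
-        NodeKey = covering_node_key(Tx, ViewPrefix, Level, ReduceKey),
-        erlfdb:add(Tx, NodeKey, Delta)
-    end, lists:seq(1, ?MAX_SKIP_LEVEL)).
-```
-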
-For reduce functions where we cannot perform an atomic update, the process for each level above level 0 is to fetch all the key/values in the level below the current node that are used to calculate the current node's aggregation value, re-reduce those values to calculate the new value for the node, and store the result back in FoundationDB.
-
-When updating a document, the initial delete process is followed to remove the existing keys that are no longer emitted for this document. The new keys are added at level 0. For levels above 0, the same distribution algorithm will be used to determine if the key/values are added to a level. If they are, then an aggregation of the nodes after this node but at the level below is performed to calculate the aggregation value stored for this node. The previous node's value is also recalculated [...]
-
-In the situation where multiple documents emit the same key, those keys are re-reduced before being added into FoundationDB.
-
-### Data model
-
-The data model for the skip list implementation is below. The value will contain the reduce value, along with the unencoded key that would be returned for a query.
-
-```erlang
-{<database>, ?DB_VIEWS, Sig, ?VIEW_REDUCE_SK_RANGE, ViewId, SkipLevel, ReduceKey} = {UnEncodedKey, Value}
-
-SkipLevel = 0..?MAX_SKIP_LEVEL
-
-```
-
-Each field is defined as:
-
-- `<database>` is the specific database namespace
-- `?DB_VIEWS` is views namespace.
-- `Sig` is the design documents View Signature
-- `?VIEW_REDUCE_SK_RANGE` is the reduce namespace
-- `ViewId` id of a view defined in the design document
-- `SkipLevel` is the skip level the key/value is being stored for
-- `ReduceKey` is the encoded emitted keys
-- `UnEncodedKey` the unencoded emitted keys
-- `Value` the reduce value for the emitted keys
-
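-As a sketch, writing one such row with `erlfdb`'s tuple encoding might look
-like this (the constant values here are illustrative stand-ins for the real
-namespace values):
-
-```erlang
-%% Illustrative constants; the real values live in couch_views' key space.
--define(DB_VIEWS, 4).
--define(VIEW_REDUCE_SK_RANGE, 2).
-
-write_row(Tx, DbPrefix, Sig, ViewId, SkipLevel, ReduceKey, UnEncodedKey, Value) ->
-    Key = erlfdb_tuple:pack(
-        {?DB_VIEWS, Sig, ?VIEW_REDUCE_SK_RANGE, ViewId, SkipLevel, ReduceKey},
-        DbPrefix),
-    Val = erlfdb_tuple:pack({UnEncodedKey, Value}),
-    erlfdb:set(Tx, Key, Val).
-```
-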
-## FoundationDB Skip list implementation
-
-In Appendix A is a full JavaScript implementation of the skip-list-based reduce index on top of FoundationDB.
-
-## Advantages
-
-- Skip lists can be used for built-in reduces and custom reduces.
-
-## Disadvantages
-
-- Because the levels are randomly generated and values aggregated, there will be an increased number of traversals of lower levels compared to using a B+tree.
-
-## Key Changes
-
-Instead of using a B+tree to store reduce aggregations, CouchDB's reduce functionality will be built on top of FoundationDB using a skip list like algorithm.
-
-## Applications and Modules affected
-
-The module `couch_views` will be modified to support building and querying reduce indexes.
-
-## HTTP API additions
-
-There won't be any additions to the HTTP API.
-
-## HTTP API deprecations
-
-There are no HTTP API deprecations
-
-## Security Considerations
-
-None have been identified.
-
-## References
-
-- [Wikipedia Skip List](https://en.wikipedia.org/wiki/Skip_list)
-- [Skip lists done right](http://ticki.github.io/blog/skip-lists-done-right/)
-- [FoundationDB Forum skip list suggestion](https://forums.foundationdb.org/t/couchdb-considering-rearchitecting-as-an-fdb-layer/1088/11)
-- [Initial mailing list discussion](https://lists.apache.org/thread.html/011caa9244b3378e7e137ea7b0f726d8e6a17009df738a81636cb273@%3Cdev.couchdb.apache.org%3E)
-
-## Acknowledgements
-
-Thanks to @rnewson, @alexmiller-apple, and @kocolosk for reviewing the RFC and
-the mailing list discussion.
-
-
-## Appendix A
-
-Below is a JavaScript implementation of a FoundationDB-based skip list. It can also be found in a [github repo](https://github.com/garrensmith/fdb-skiplist-reduce) for quicker cloning and testing. This implementation makes some assumptions:
-
-1. All keys are arrays of [Year, Month, Day]
-2. Only implements startkey/endkey
-3. No delete was implemented
-4. This is a basic implementation to make sure we get creating/updating and traversal correct. It does not cover edge cases or much error handling
-
-Some results that I determined while running this:
-
-1. Time to insert a key stayed the same even as the skiplist grew
-1. For smaller reduce indexes (under a million rows), it was better to have a lower `LEVEL_FAN_POW`. Otherwise, the majority of keys remained on level 0 and level 1, so querying could not make much use of the higher levels. However, insertions are then slightly slower.
-
-```js
-/* To run locally
-    npm install foundationdb
-    node skiplist.js
-*/
-
-
-const assert = require('assert');
-const util = require('util');
-const fdb = require('foundationdb');
-const ks = require('foundationdb').keySelector;
-
-// CONSTANTS
-const SHOULD_LOG = false;
-const PREFIX = 'skiplist';
-const MAX_LEVELS = 6;
-const LEVEL_FAN_POW = 1; // 2^X per level or (1 / 2^X) less than previous level
-const END = 0xFF;
-
-fdb.setAPIVersion(600); // Must be called before database is opened
-const db = fdb.openSync()
-  .at(PREFIX) // database prefix for all operations
-  .withKeyEncoding(fdb.encoders.tuple)
-  .withValueEncoding(fdb.encoders.json); // automatically encode & decode values using JSON
-
-// Data model
-// (level, key) = reduce_value
-
-
-const log = (...args) => {
-    if (!SHOULD_LOG) {
-        return;
-    }
-    console.log(...args);
-}
-
-// keep a basic stats of which levels were used for a query
-let stats;
-const resetStats = () => {
-    stats = {
-        "0": [],
-        "1": [],
-        "2": [],
-        "3": [],
-        "4": [],
-        "5": [],
-        "6": [],
-    };
-}
-
-// An initial simple set of kvs to insert and query to verify the algorithm
-const kvs = [
-    [[2017,3,1], 9],
-    [[2017,4,1], 7], 
-    [[2019,3,1], 4], // out of order check
-    [[2017,4,15], 6],
-    [[2018,4,1], 3],  
-    [[2017,5,1], 9],
-    [[2018,3,1], 6],
-    [[2018,4,1], 4], // duplicate check
-    [[2018,5,1], 7],
-    [[2019,4,1], 6],
-    [[2019,5,1], 7]
-  ];
-
-// UTILS
-
-const getRandom = (min, max) => {
-    min = Math.ceil(min);
-    max = Math.floor(max);
-    return Math.floor(Math.random() * (max - min)) + min; //The maximum is exclusive and the minimum is inclusive
-  }
-
-const getRandomKey = (min, max) => {
-    return [getRandom(min, max), getRandom(1, 12), getRandom(1, 30)];
-}
-
-// Very rough hash algorithm to convert any string to an integer
-function hashCode(s) {
-    for(var i = 0, h = 0; i < s.length; i++)
-        h = Math.imul(31, h) + s.charCodeAt(i) | 0;
-    return h;
-}
-
-// calculation to determine if key should be added to a level
-const hashCalc = (key, level, pow) => {
-    const keyHash = hashCode(JSON.stringify(key));
-    const out = (keyHash & ((1 << (level * pow)) - 1));
-    if (out !== 0) {
-        return false;
-    }
-
-    return true;
-}
-
-// Basic rereduce function
-// _sum but pretend its more complex
-const rereduce = (values) => {
-    return values.reduce((acc, val) => {
-        return acc + val;
-    }, 0);
-};
-
-// Takes all key/values and collates to group level and runs rereduce
-const collateRereduce = (acc, groupLevel) => {
-    const acc1 = acc.reduce((acc, kv) => {
-        const key = getGroupLevelKey(kv.key, groupLevel);
-
-        if (!acc[key]) {
-            acc[key] = {
-                key,
-                values: []
-            };
-        }
-
-        acc[key].values.push(kv.value);
-        return acc;
-    }, {});
-
-    return Object.values(acc1).reduce((acc, kv) => {
-        const values = kv.values;
-        const key = kv.key;
-        const result = rereduce(values);
-
-        acc.push({
-            key,
-            value: result
-        });
-
-        return acc;
-    }, []);
-};
-
-// KEY UTIL FUNCTIONS
-
-// convert key to binary
-const keyToBinary = (one) => {
-    let keyOne = one.key ? one.key : one;
-
-    if (!Array.isArray(keyOne)) {
-        keyOne = [keyOne];
-    }
-
-
-    return Buffer.from(keyOne);
-}
-
-// check keys are equal
-const keysEqual = (one, two) => {
-    if (one === null || two === null) {
-        return false;
-    }
-
-    const binOne = keyToBinary(one);
-    const binTwo = keyToBinary(two);
-
-    return binOne.compare(binTwo) === 0;
-}
-
-// Are keys equal at set group level
-const groupLevelEqual = (one, two, groupLevel) => {
-    if (one === null || two === null) {
-        return false
-    }
-    const levelOne = getGroupLevelKey(one.key, groupLevel);
-    const levelTwo = getGroupLevelKey(two.key, groupLevel);
-
-    return keysEqual(levelOne, levelTwo);
-};
-
-// is key two greater than key one?
-const keyGreater = (one, two) => {
-    if (!one || !two) {
-        return false;
-    }
-
-    const binOne = keyToBinary(one);
-    const binTwo = keyToBinary(two);
-
-    // key two comes after
-    return binOne.compare(binTwo) === -1;
-}
-
-// convert key to group level. e.g Key = [2019,2,5] and group_level = 2
-// returns [2019, 2]
-const getGroupLevelKey = (key, groupLevel) => {
-    if (groupLevel === 0) {
-        return null
-    }
-
-    if (!Array.isArray(key)) {
-        return key;
-    }
-
-    if (key.length <= groupLevel) {
-        return key;
-    }
-
-    return key.slice(0, groupLevel);
-};
-
-// FDB OPERATIONS
-
-// clear full range
-const clear = async () => {
-    await db.doTransaction(async tn => {
-        tn.clearRangeStartsWith([]);
-    });
-}
-
-// get value for key at level
-const getVal = async (tn, key, level) => {
-    return  await tn.get([level, key]);
-}
-
-// add kv to level
-const insertAtLevel = async (tn, key, value, level) => {
-    log('inserting', level, key, ':', value);
-    return await tn.set([level, key], value);
-};
-
-// get all kvs within start/end, exclusive of end key
-const getRange = async (tn, start, end, level) => {
-    const kvs = await tn.getRangeAll([level, start], [level, end]);
-
-    return kvs.map(([[_level, key], value]) => {
-        return {
-            key,
-            value
-        };
-    });
-};
-
-// get all kvs within start/end, inclusive of end
-const getRangeInclusive = async (tn, start, end, level) => {
-    const kvs = await tn.getRangeAll(
-        ks.firstGreaterOrEqual([level, start]), 
-        ks.firstGreaterThan([level, end])
-        );
-
-    return kvs.map(([[_level, key], value]) => {
-        return {
-            key,
-            value
-        };
-    });
-}
-
-// return kv in common format
-const getKV = (item) => {
-    const [key, value] = item.value;
-    return {
-        key: key[1],
-        value: value
-    };
-}
-
-// Get key after supplied key
-const getNext = async (tn, key, level) => {
-    const iter = await tn.snapshot().getRange(
-        ks.firstGreaterThan([level, key]),
-        [level, END],
-        {limit: 1}
-    )
-
-    const item = await iter.next();
-    if (item.done) {
-        return {
-            key: END,
-            value: 0
-        };
-    }
-
-    const kv = getKV(item);
-    tn.addReadConflictKey([level, kv.key]);
-    return kv;
-};
-
-// Get key after supplied key but doesn't look further than endkey
-const getKeyAfter = async (tn, key, level, endkey) => {
-    const _endkey = endkey ? endkey : END;
-    const iter = await tn.getRange(
-        ks.firstGreaterThan([level, key]),
-        ks.firstGreaterThan([level, _endkey]),
-        {limit: 1}
-    )
-    
-    const item = await iter.next();
-    if (item.done) {
-        return null;
-    }
-
-    return getKV(item);
-};
-
-// get kv before supplied key
-const getPrevious = async (tn, key, level) => {
-    const iter = await tn.snapshot().getRange(
-        ks.lastLessThan([level, key]),
-        ks.firstGreaterOrEqual([level, key]),
-        {limit: 1}
-    )
-
-    const item = await iter.next();
-    const kv = getKV(item);
-    tn.addReadConflictKey([level, kv.key]);
-    return kv;
-};
-
-// Get key at level or first one after key
-const getKeyOrNearest = async (tn, key, level, endkey) => {
-    const _endkey = endkey ? endkey : END;
-    const iter = await tn.getRange(
-        ks.firstGreaterOrEqual([level, key]),
-        ks.firstGreaterThan([level, _endkey]),
-        {limit: 1}
-    )
-    
-    const item = await iter.next();
-    if (item.done) {
-        return null;
-    }
-
-    return getKV(item);
-};
-
-// Gets the final key in the set group level
-const getGroupLevelEndKey = async (tn, groupLevel, level, startkey) => {
-    const groupLevelKey = getGroupLevelKey(startkey, groupLevel);
-    const end = groupLevelKey === null ? END : [...groupLevelKey, END];
-    const iter = await tn.getRange(
-        ks.firstGreaterThan([level, groupLevelKey]),
-        ks.firstGreaterOrEqual([level, end]),
-        {reverse: true, limit: 1}
-    )
-    
-    //TODO: add a conflict key
-    const item = await iter.next();
-    if (item.done) {
-        return null;
-    }
-
-    return getKV(item);
-};
-
-// Returns key for level or the first one before it
-const getKeyOrFirstBefore = async (tn, key, level) => {
-    const iter = await tn.getRange(
-        ks.lastLessThan([level, key]),
-        ks.firstGreaterThan([level, key]),
-        {limit: 1, reverse: true}
-    )
-    
-    //TODO: add a conflict key
-    const item = await iter.next();
-    if (item.done) {
-        return null;
-    }
-
-    return getKV(item);
-};
-
-// SKIP LIST OPERATIONS
-
-//setup skip list and insert the initial kvs
-const create = async () => {
-    await db.doTransaction(async tn => {
-        for(let level = 0; level <= MAX_LEVELS; level++) {
-            await insertAtLevel(tn, '0', 0, level);
-        }
-    });
-
-    log('setup done');
-    for (const [key, val] of kvs) {
-        await db.doTransaction(async tn => {
-            await insert(tn, key, val);
-        });
-    }
-};
-
-// inserts a larger amount of keys, 1000 keys per transaction
-const rawKeys = []
-const createLots = async () => {
-    const docsPerTx = 3000;
-    console.time('total insert');
-    for (let i = 0; i <= 30000; i+= docsPerTx) {
-        const kvs = [];
-        for (let k = 0; k <= docsPerTx; k++) {
-            const key = getRandomKey(2015, 2020);
-            const value = getRandom(1, 20);
-            rawKeys.push({key, value});
-            kvs.push([key, value]);
-        }
-        console.time('tx');
-        await db.doTransaction(async tn => {
-            for (const [key, value] of kvs) {
-                await insert(tn, key, value);
-            }
-        });
-        console.timeEnd('tx');
-        log(`inserted ${i} keys`);
-    }
-    console.timeEnd('total insert');
-}
-
-/* The insert algorithm
- Works as follows:
- Level 0:
- * Always insert,
- * if key already exists at level 0, then rereduce two values and insert
- At levels > 0
- * Get previous kv at level
- * If hashCalc is true, key should be inserted at level
- * So need to recalculate previous keys value,
- * Get range from level below from previous key to current key
- * Rereduce those kvs and update previous key's value
- * Then get next key after current key at level
- * Use that to get range from current key to next key at level below
- * Rereduce those values to create value for current key
- 
- * If hashCalc is false, key is not inserted at level
- * So rereduce previous key's value with current key's value and update previous kv
-*/
-const insert = async (tn, key, value) => {
-    let currentVal = value; // if this k/v has been stored before we need to update this value at level 0 to be used through the other levels
-    for(let level = 0; level <= MAX_LEVELS; level++) {
-        if (level === 0) {
-            const existing = await getVal(tn, key, level);
-            if (existing) {
-                currentVal = rereduce([existing, currentVal]);
-            }
-            await insertAtLevel(tn, key, currentVal, 0);
-            continue;
-        }
-        const previous = await getPrevious(tn, key, level);
-        log('Planning to insert at ', level, 'key', key, 'previous is', previous);
-        if (hashCalc(key, level, LEVEL_FAN_POW)) {
-            const lowerLevel = level - 1;
-            // update previous node
-            const newPrevRange = await getRange(tn, previous.key, key, lowerLevel);
-            log('prevRange', newPrevRange, 'prevKey', previous, 'key', key);
-            const prevValues = newPrevRange.map(kv => kv.value);
-            const newPrevValue = rereduce(prevValues)
-            if (newPrevValue !== previous.value) {
-                await insertAtLevel(tn, previous.key, newPrevValue, level);
-            }
-
-            // calculate new nodes values
-            const next = await getNext(tn, key, level);
-            const newRange = await getRange(tn, key, next.key, lowerLevel);
-            const newValues = newRange.map(kv => kv.value);
-            const newValue = rereduce(newValues);
-            log('inserting at level', level, 'key', key, 'after', next, 'range', newRange);
-            await insertAtLevel(tn, key, newValue, level);
-        } else {
-            const newValue = rereduce([previous.value, value]);
-            log('rereduce at', level, 'key', previous.key, 'new value', newValue, 'prev value', previous.value);
-            await insertAtLevel(tn, previous.key, newValue, level);
-        }
-    }
-};
-
-// A simple print that will show all keys at set levels and verify that the values at each level
-// sum up to the values at level = 0
-const print = async () => {
-    let total = 0;
-    await db.doTransaction(async tn => {
-        for(let level = 0; level <= MAX_LEVELS; level++) {
-            let levelTotal = 0;
-            const levelResults = await tn.getRangeAll([level, "0"], [level, END]);
-            const keys = levelResults.map(([[_, key], val]) => {
-                const a = {};
-                a[key] = val;
-                if (level === 0) {
-                    total += val;
-                }
-
-                levelTotal += val;
-                return a;
-            });
-
-            log(`Level ${level}`, keys);
-            assert.equal(levelTotal, total, `Level ${level} - level total ${levelTotal} values not equal to level 0 ${total}`);
-        }
-    });
-
-    return {
-        total
-    };
-};
-
-
-// Determines which level and the range the skiplist traversal can do next
-/* Works as follows:
-    * Get the final key for a group level from level 0 - ideally this algorithm looks to scan at the highest level possible,
-      and we need this group level endkey to know how far we can possibly scan
-    * If the group level endkey is greater than the endkey, clamp it to the endkey
-    * `levelRanges` is used to keep an array of possible ranges we could scan. Level 0 is always added
-    * In the for loop, start at level 0, look one level above, and see if the startkey exists in that level
-    * If it does, also find the group level endkey for that level; if that endkey is valid, add the range to `levelRanges`
-    * If the startkey is not in the level above, scan at the current level from the startkey to the nearest key in the level above;
-      this way we do a small scan at a lower level, and the next traversal can scan one level up
-*/
-const getNextRangeAndLevel = async (tn, groupLevel, level, startkey, endkey) => {
-    let groupEndkey = await getGroupLevelEndKey(tn, groupLevel, 0, startkey.key);
-    log('groupendkey', groupEndkey, 'start', startkey, 'end', endkey, keyGreater(endkey, groupEndkey));
-    if (keyGreater(endkey, groupEndkey)) {
-        groupEndkey = endkey;
-    }
-
-    // at end of this specific grouplevel, so have to do final scan at level 0
-    if (keysEqual(startkey, groupEndkey)) {
-        return [0, startkey, startkey];
-    }
-
-    const levelRanges = [{
-        level: 0,
-        start: startkey,
-        end: groupEndkey
-    }];
-    for (let i = 0; i < MAX_LEVELS; i++) {
-        log('next start', startkey, 'i', i);
-        // look 1 level above
-        let nearestLevelKey = await getKeyOrNearest(tn, startkey.key, i + 1, endkey.key);
-        log('nearest', nearestLevelKey, "level", i + 1, "start", startkey, "grouplevelequal", groupLevelEqual(startkey, nearestLevelKey, groupLevel));
-
-        if (keysEqual(nearestLevelKey, startkey)) {
-            const groupLevelEndKey = await getGroupLevelEndKey(tn, groupLevel, i + 1, nearestLevelKey.key);
-            log('CALCUP1', 'nearest', nearestLevelKey, 'after', groupLevelEndKey, 'level', i);
-            if (groupLevelEndKey !== null) {
-                if (keyGreater(endkey, groupLevelEndKey)) {
-                    log('grouplevel great than endkey', endkey, groupLevelEndKey);
-                    // exceeded the range at this level we can't go further
-                    break;
-                }
-                // end of grouplevel for set level have to use previous levels for read
-                if (keysEqual(nearestLevelKey, groupLevelEndKey)) {
-                    break;
-                }
-
-                levelRanges.push({
-                    level: i + 1,
-                    start: nearestLevelKey,
-                    end: groupLevelEndKey
-                });
-                continue;
-            }
-        } else if (nearestLevelKey !== null && groupLevelEqual(startkey, nearestLevelKey, groupLevel)) {
-            log('querying to nearest level up', startkey, nearestLevelKey);
-            return [i, startkey, nearestLevelKey];
-        } 
-
-        break;
-    }
-
-    
-    log('gone too far', JSON.stringify(levelRanges, null, ' '));
-    const out = levelRanges.pop();
-    return [out.level, out.start, out.end]
-};
-
-// Main algorithm to traverse the skip list
-/* Algorithm works as follows:
-    * calls getNextRangeAndLevel to determine what to scan
-    * Gets all values in that range for set level including endkey
-    * Final value in range is used as the next startkey
-    * Collates and Rereduces all values collected
-    * If there is no new startkey, or rangeEnd equals the endkey and we scanned at level 0, then we are done
-    * Otherwise start again at level 0 and continue traversal
-*/
-const traverse = async (tn, level, prevLevel, current, endkey, groupLevel, acc) => {
-    if (level < 0) {
-        throw new Error("gone too low");
-    }
-    const [rangeLevel, rangeStart, rangeEnd] = await getNextRangeAndLevel(tn, groupLevel, level, current, endkey);
-    log('traversing, level', rangeLevel, 'start', rangeStart, 'end', rangeEnd);
-
-    // simple stats to keep track of which levels are used the most
-    stats[rangeLevel].push([rangeStart.key, rangeEnd.key]);
-    const results = await getRangeInclusive(tn, rangeStart.key, rangeEnd.key, rangeLevel);
-    log('RESULTS', results, 'start', rangeStart.key, 'end', rangeEnd.key);
-    // test with rangeEnd always next startkey
-    let nextStartKey = results[results.length - 1];
-    let keyAfterStart = await getKeyAfter(tn, nextStartKey.key, rangeLevel, endkey.key);
-    log('checking', nextStartKey, keyAfterStart, groupLevelEqual(nextStartKey, keyAfterStart, groupLevel));
-
-    const useableResults = results.slice(0, results.length -1);
-    acc = [...acc, ...useableResults];
-    if (rangeLevel === 0 && !groupLevelEqual(nextStartKey, keyAfterStart, groupLevel)) {
-        acc.push(nextStartKey);
-        log('collating and reducing', acc);
-        const reducedResults = collateRereduce(acc, groupLevel);
-        acc = reducedResults;
-        nextStartKey = await getKeyAfter(tn, nextStartKey.key, rangeLevel, endkey.key);
-        //should stream results for a common group at this point
-    }
-
-    // Reached the end of the query, return results
-    if ((keysEqual(rangeEnd, endkey) || nextStartKey === null) && rangeLevel === 0) {
-        return acc;
-    }
-
-    log('moving next traversal', rangeLevel, 'newStart', nextStartKey, acc);
-    return traverse(tn, 0, rangeLevel, nextStartKey, endkey, groupLevel, acc);
-}
-
-// simple formatter to mimic CouchDb response
-const formatResult = (results) => {
-    return {
-        rows: results
-    };
-};
-
-
-// query function to set correct startkey/endkey and call correct query algorithm
-const query = async (opts) => {
-    resetStats();
-    return await db.doTransaction(async tn => {
-        let endkey = {key: END, value: 0};
-        let startkey = {key: '0', value: 0};
-
-        if (opts.startkey) {
-            startkey = await getKeyOrNearest(tn, opts.startkey, 0);
-            if (!startkey) {
-                return false; //startkey out of range;
-            }
-            log('startkey', opts.startkey, startkey);
-        }
-
-        if (opts.endkey) {
-            endkey = await getKeyOrFirstBefore(tn, opts.endkey, 0);
-            log('endkey', opts.endkey, endkey);
-        }
-
-        if (opts.group) {
-            const results = await getRangeInclusive(tn, startkey.key, endkey.key, 0);
-            return formatResult(results);
-        }
-
-        if (opts.group_level === 0 && !opts.startkey && !opts.endkey) {
-                const results = await getRange(tn, '0', END, MAX_LEVELS);
-                if (results.length > 1) {
-                    const vals = results.map(kv => kv.value);
-                    const total = rereduce(vals);
-                    return formatResult([{
-                        key: null,
-                        value: total
-                    }]);
-                }
-
-                return formatResult([{
-                    key: null,
-                    value: results[0].value
-                }]);
-        }
-
-
-        const results = await traverse(tn, 0, 0, startkey, endkey, opts.group_level, []);
-        console.log('query stats', util.inspect(stats, {depth: null}));
-        return formatResult(results);
-    });
-};
-
-
-// smaller queries with the initial kvs added to the skiplist
-// this is used to verify the accuracy of the insert and query
-const simpleQueries = async () => {
-    let result = {};
-    result = await query({group_level: 0});
-    assert.deepEqual(result, {
-        rows: [{
-            key: null,
-            value: 68
-        }]
-    });
-
-    result = await query({group_level:0, startkey: [2018, 3, 2]});
-    assert.deepEqual(result, {
-        rows: [{
-            key: null,
-            value: 31
-        }]
-    });
-
-    result = await query({
-        group_level:0,
-        startkey: [2018, 3, 2],
-        endkey: [2019, 5, 1]
-    });
-    assert.deepEqual(result, {
-        rows: [{
-            key: null,
-            value: 31
-        }]
-    });
-
-    result = await query({
-        group_level: 0,
-        startkey: [2018, 03, 2],
-        endkey: [2019, 03, 2],
-
-    })
-
-    assert.deepEqual(result, {
-        rows: [{
-            key: null,
-            value: 18
-        }]
-    });
-
-    result = await query({
-        group_level: 1,
-        startkey: [2017, 4, 1],
-        endkey: [2018, 3, 1],
-    })
-
-    assert.deepEqual(result, {
-        rows: [
-        {
-            key: [2017],
-            value: 22
-        },
-        {
-            key: [2018],
-            value: 6
-        }
-    ]
-    });
-
-    result = await query({
-        group_level: 1,
-        startkey: [2017, 4, 1],
-        endkey: [2019, 03, 2],
-
-    })
-
-    assert.deepEqual(result, {
-        rows: [
-        {
-            key: [2017],
-            value: 22
-        },
-        {
-            key: [2018],
-            value: 20
-        },
-        {
-            key: [2019],
-            value: 4
-        }
-    ]
-    });
-
-    result = await query({
-        group_level: 1,
-        startkey: [2017, 4, 1],
-        endkey: [2019, 05, 1],
-
-    })
-
-    assert.deepEqual(result, {
-        rows: [
-        {
-            key: [2017],
-            value: 22
-        },
-        {
-            key: [2018],
-            value: 20
-        },
-        {
-            key: [2019],
-            value: 17
-        }
-    ]
-    });
-
-    result = await query({
-        group: true,
-        startkey: [2018, 5, 1],
-        endkey: [2019, 4, 1],
-    });
-
-    assert.deepEqual(result, {rows: [
-        {key: [2018,5,1], value: 7},
-        {key: [2019,3,1], value: 4},
-        {key: [2019,4,1], value: 6}
-    ]})
-    log('SIMPLE DONE');
-};
-
-// Fetch all level 0 kvs for a query and produce the correct result
-const queryLevel0 = async (opts) => {
-    return await db.doTransaction(async tn => {
-        let endkey = {key: END, value: 0};
-        let startkey = {key: '0', value: 0};
-
-        if (opts.startkey) {
-            startkey = await getKeyOrNearest(tn, opts.startkey, 0);
-        }
-
-        if (opts.endkey) {
-            endkey = await getKeyOrFirstBefore(tn, opts.endkey, 0);
-        }
-        const results = await getRangeInclusive(tn, startkey.key, endkey.key, 0);
-        const acc1 = collateRereduce(results, opts.group_level); 
-        return formatResult(acc1);
-    });
-}
-
-// Perform a full range scan on the skip list and compare the performance versus 
-// just reading from level 0
-const largeQueries = async () => {
-    let result;
-    const [startkey, endkey] = await db.doTransaction(async tn => {
-        const start = await getKeyAfter(tn, '0', 0);
-        const end = await getPrevious(tn, END, 0);
-
-        return [start.key, end.key];
-    });
-
-    for (let i = 0; i < 10; i++) {
-        const opts = {
-            group_level: 1,
-            startkey,
-            endkey
-        };
-        console.log('range', startkey, endkey);
-        console.time('query');
-        result = await query(opts);
-        console.timeEnd('query');
-
-        console.time('level0');
-        const level1Result = await queryLevel0(opts);
-        console.timeEnd('level0');
-        assert.deepEqual(result, level1Result);
-    }
-};
-
-
-// run function
-const run = async () => {
-    await clear();
-    await create();
-    await print();
-    await simpleQueries();
-    await createLots();
-    await print();
-    await largeQueries();
-};
-
-run();
-
-```
\ No newline at end of file
diff --git a/rfcs/013-node-types.md b/rfcs/013-node-types.md
deleted file mode 100644
index 9c811e2..0000000
--- a/rfcs/013-node-types.md
+++ /dev/null
@@ -1,143 +0,0 @@
----
-name: Node Types
-about: Introduce heterogeneous node types to CouchDB 4
-title: 'Node Types'
-labels: rfc, discussion
-assignees: ''
-
----
-
-# Introduction
-
-This RFC proposes the ability to have different node types in CouchDB 4+. This
-would improve performance and allow for a more efficient use of resources.
-
-## Abstract
-
-Previously, in CouchDB 2 and 3, cluster functionality was uniformly distributed
-amongst the nodes. Any node could accept HTTP requests, run replication jobs
-and build secondary indices. With the FDB-based topology, CRUD operations have
-lower resource needs and so it could be useful to have a heterogeneous
-topology, where for example, CRUD operations run on lower capacity nodes, and a
-few higher capacity nodes handle replication or indexing jobs.
-
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-*node type* : A label used to designate a subset of CouchDB functionality.
-
----
-
-# Detailed Description
-
-## Node Types
-
-A node type is a description of some internal CouchDB functionality. These
-are the initially defined node types:
-
- * `api_frontend` : Indicates this node can accept HTTP API requests.
- * `view_indexing` : Indicates this node can build map/reduce view indices.
- * `search_indexing` : Indicates this node can build search indices.
- * `replication` : Indicates this node can run replication jobs.
-
-Users can configure CouchDB nodes with any combination of those types.
-
-## Configuration
-
-Configuration MAY be specified via the Erlang application environment or OS
-environment variables, with OS environment variables taking higher precedence.
-By default, if a type is not configured in either of those places, it defaults
-to `true`.
-
-### Erlang Application Environment Configuration
-
-Configuration MUST be specified for the `fabric` application, under the
-`node_types` key. The value MUST be a proplist which looks like `[{$type, true |
-false}, ...]`. For example, the `vm.args` file MAY be used as follows:
-
-```
--fabric node_types '[{api_frontend, false}, {replication, true}]'
-
-```
-
-### OS Environment Configuration
-
-Node types MAY be set via environment variables using the `COUCHDB_NODE_TYPE_`
-prefix. The prefix SHOULD be followed by the type label. If the value of the
-variable is `false`, the indicated functionality will be disabled on that
-node. Any other value indicates `true`.
-
-Example:
-
-`COUCHDB_NODE_TYPE_API_FRONTEND=false COUCHDB_NODE_TYPE_VIEW_INDEXING=true ...`
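-
-A sketch of the lookup logic with this precedence (the module and function
-names are illustrative):
-
-```erlang
--module(fabric2_node_types).
--export([is_type/1]).
-
-%% OS environment variables win over the Erlang application environment;
-%% if neither is set, every node type defaults to true.
-is_type(Type) when is_atom(Type) ->
-    EnvVar = "COUCHDB_NODE_TYPE_" ++ string:uppercase(atom_to_list(Type)),
-    case os:getenv(EnvVar) of
-        "false" ->
-            false;
-        Val when is_list(Val) ->
-            true;
-        false ->
-            Types = application:get_env(fabric, node_types, []),
-            proplists:get_value(Type, Types, true)
-    end.
-```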
-
-## Implementation
-
-Implementation should be minimally invasive, at least for the node types listed
-above.
-
- * `api_frontend` would enable the `chttpd` application, or its top level
-   supervisor.
-
- * All background tasks in FDB are executed via the `couch_jobs` framework. The
-top level application supervisors typically have a separate `gen_server` in
-charge of accepting jobs and executing them. The implementation then would be
-as simple as having a `case` statement around the worker's `start_link()`
-function, as sketched below.
-
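-A sketch of such a guard, reusing the hypothetical `fabric2_node_types:is_type/1`
-helper from the configuration sketch above:
-
-```erlang
-start_link() ->
-    case fabric2_node_types:is_type(replication) of
-        true ->
-            gen_server:start_link({local, ?MODULE}, ?MODULE, [], []);
-        false ->
-            %% `ignore` tells the supervisor to skip this child entirely.
-            ignore
-    end.
-```
-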
-# Advantages and Disadvantages
-
-## Disadvantages
-
- - Increased configuration-state complexity
-
-## Advantages
-
- - Ability to utilize hardware resources better
- - Possibly better security by running indexing and replication jobs in an
-   isolated environment inaccessible from the outside
-
-# Key Changes
-
- - Heterogeneous node types
- - New configuration section
- - New configuration environment variables
-
-## Applications and Modules Affected
-
- - chttpd
- - fabric
- - couch_views
- - couch_jobs
- - couch_replicator
- - mango
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-N/A
-
-# References
-
-[1] https://github.com/apache/couchdb/issues/1338
-
-[2] https://github.com/apache/couchdb-documentation/blob/master/rfcs/007-background-jobs.md
-
-# Acknowledgments
-
-@kocolosk
-@mikerhodes
diff --git a/rfcs/015-background-index-building.md b/rfcs/015-background-index-building.md
deleted file mode 100644
index 3aa42f0..0000000
--- a/rfcs/015-background-index-building.md
+++ /dev/null
@@ -1,131 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Background index building'
-labels: rfc, discussion
-assignees: ''
-
----
-
-# Introduction
-
-This document describes the design for the background index builder in CouchDB 4.
-
-## Abstract
-
-The background index builder monitors databases for changes and then kicks off
-asynchronous index updates. It is also responsible for removing stale indexing
-data.
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
-"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
-interpreted as described in [RFC
-2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
----
-
-# Detailed Description
-
-The two main components of the background index builder are:
- 1) The notification mechanism
- 2) Index building behavior API and registration facility
-
-The notification mechanism monitors databases for updates, while the secondary
-index applications register with the background indexer and provide an
-implementation of the index building API.
-
-## Database Updates Notifications
-
-After each document update transaction finishes, the background indexer is
-notified via a callback. The indexer then bumps the timestamp for that database
-in a set of sharded ETS tables. Each sharded ETS table has an associated
-background process which periodically removes entries from there and calls the
-index building API functions for each registered indexing backend.
-
-In addition to building indices, the background index builder also cleans up
-stale index data. This is index data left behind after design documents have
-been updated or deleted and the view signatures have changed.
-
-Background index building and cleaning may be enabled or disabled with
-configuration options. There is also a configurable delay during which db
-updates would accumulate for each database. This is used to avoid re-scheduling
-`couch_jobs` too often.
-
-## Background Index Building Behavior
-
-Unlike CouchDB 3 (`ken`), the background index builder in CouchDB 4 doesn't
-have centralized knowledge of all the possible secondary indices. Instead, each
-secondary indexing application may register with the background index builder
-and provide a set of callbacks implementing background index building for their
-particular index types.
-
-
-Background index building behavior is a standard Erlang/OTP behavior defined
-as:
-
-```
--callback build_indices(Db :: map(), DDocs :: list(#doc{})) ->
-    [{ok, JobId::binary()} | {error, any()}].
-
--callback cleanup_indices(Db :: map(), DDocs :: list(#doc{})) ->
-    [ok | {error, any()}].
-```
-
-Each indexing application may register with the index builder by using the
-`fabric2_index:register(Module)` function. When it registers, it must provide
-an implementation of that behavior in that module, as sketched below.
-
- * `build_indices/2`: must inspect all the passed-in design doc bodies and
-trigger asynchronous index updates for all the views that module is responsible
-for.
-
- * `cleanup_indices/2`: must clean up all the stale indexing data associated
-with all the views in the design docs passed in as an argument.
-
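-As a sketch, an indexing application might plug in as follows; the module
-name and the job-triggering internals are illustrative:
-
-```erlang
--module(my_index_app).
--behaviour(fabric2_index).
-
--export([build_indices/2, cleanup_indices/2]).
-
-build_indices(Db, DDocs) ->
-    %% Trigger an asynchronous, couch_jobs-backed build per design doc.
-    [enqueue_build(Db, DDoc) || DDoc <- DDocs].
-
-cleanup_indices(Db, DDocs) ->
-    %% Remove index data for view signatures that are no longer referenced.
-    [remove_stale(Db, DDoc) || DDoc <- DDocs].
-
-%% App-specific internals, stubbed for illustration.
-enqueue_build(_Db, _DDoc) ->
-    {ok, <<"job-id">>}.
-
-remove_stale(_Db, _DDoc) ->
-    ok.
-```
-
-The application would then call `fabric2_index:register(my_index_app)` once at
-startup.
-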
-# Advantages and Disadvantages
-
- * The main advantage is simplicity: rely on node-local updates and the fact that
-   all indexing is currently backed by `couch_jobs` jobs, which handle global
-   locking and coordination.
-
- * The main disadvantage is also simplicity: there is no concept of priority to
-   allow users to build some indices before others.
-
-# Key Changes
-
-The configuration format has changed. Instead of configuring background index
-building in the `[ken]` section, it is now configured in the `[fabric]` config
-section. Otherwise, there are no external API changes.
-
-## Applications and Modules affected
-
- * fabric2_index
- * fabric2_db
- * couch_views
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-None
-
-# References
-
-[fabric2_index](https://github.com/apache/couchdb/blob/prototype/fdb-layer/src/fabric/src/fabric2_index.erl)
-[ken](https://github.com/apache/couchdb/tree/master/src/ken)
-
-# Co-authors
-
- * @davisp
-
-# Acknowledgements
-
- * @davisp
diff --git a/rfcs/016-fdb-replicator.md b/rfcs/016-fdb-replicator.md
deleted file mode 100644
index a10ea6e..0000000
--- a/rfcs/016-fdb-replicator.md
+++ /dev/null
@@ -1,384 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Replicator Implementation On FDB'
-labels: rfc, discussion
-assignees: 'vatamane@apache.org'
-
----
-
-# Introduction
-
-This document describes the design of the replicator application for CouchDB
-4.x. The replicator will rely on `couch_jobs` for centralized scheduling and
-monitoring of replication jobs.
-
-## Abstract
-
-Replication jobs can be created from documents in `_replicator` databases, or
-by `POST`-ing requests to the HTTP `/_replicate` endpoint. Previously, in
-CouchDB <= 3.x, replication jobs were mapped to individual cluster nodes and a
-scheduler component would run up to `max_jobs` number of jobs at a time on each
-node. The new design proposes using `couch_jobs`, as described in the
-[Background Jobs
-RFC](https://github.com/apache/couchdb-documentation/blob/master/rfcs/007-background-jobs.md),
-to have a central, FDB-based queue of replication jobs. `couch_jobs`
-application will manage job scheduling and coordination. The new design also
-proposes using heterogeneous node types as defined in the [Node Types
-RFC](https://github.com/apache/couchdb-documentation/blob/master/rfcs/013-node-types.md)
-such that replication jobs will be created only on `api_frontend` nodes and run
-only on `replication` nodes.
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
-"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
-interpreted as described in [RFC
-2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-`_replicator` databases : A database that is either named `_replicator` or ends
-with the `/_replicator` suffix.
-
-`transient` replications : Replication jobs created by `POST`-ing to the
-`/_replicate` endpoint.
-
-`persistent` replications : Replication jobs defined in a document in a
-`_replicator` database.
-
-`continuous` replications : Replication jobs created with the `"continuous":
-true` parameter. These jobs will try to run continuously until the user removes
-them. They may be temporarily paused to allow other jobs to make progress.
-
-`one-shot` replications : Replication jobs which are not `continuous`. If the
-`"continuous":true` parameter is not specified, by default, replication jobs
-will be `one-shot`. These jobs will try to run until they reach the end of the
-changes feed, then stop.
-
-`api_frontend node` : Database node which has the `api_frontend` type set to
-`true` as described in
-[RFC](https://github.com/apache/couchdb-documentation/blob/master/rfcs/013-node-types.md).
-Replication jobs can only be created on these nodes.
-
-`replication node` : Database node which has the `replication` type set to
-`true` as described in
-[RFC](https://github.com/apache/couchdb-documentation/blob/master/rfcs/013-node-types.md).
-Replication jobs can only be run on these nodes.
-
-`filtered` replications : Replications with a user-defined filter on the source
-endpoint to filter its changes feed.
-
-`replication_id` : An ID defined by replication jobs, which is a hash of
-replication parameters that affect the result of the replication. These may
-include source and target endpoint URLs, as well as a filter function specified
-in a design document on the source endpoint.
-
-`job_id` : A replication job ID derived from the database and document IDs for
-persistent replications, and from source, target endpoint, user name and some
-options for transient replications. Computing a `job_id`, unlike a
-`replication_id`, doesn't require making any network requests. A filtered
-replication with a given `job_id` may, during its lifetime, change its
-`replication_id` multiple times when the filter contents change on the source.
-
-`max_jobs` : Configuration parameter which specifies up to how many replication
-jobs to run on each `replication` node.
-
-`max_churn` : Configuration parameter which specifies a limit of how many new
-jobs to spawn during each rescheduling interval.
-
-`min_backoff_penalty` : Configuration parameter specifying the minimum (the
-base) penalty applied to jobs which crash repeatedly.
-
-`max_backoff_penalty` : Configuration parameter specifying the maximum penalty
-applied to jobs which crash repeatedly.
-
----
-
-# Detailed Description
-
-Replication job creation and scheduling works roughly as follows:
-
- 1) `Persistent` and `transient` jobs both start by creating or updating a
- `couch_jobs` record in a separate replication key-space on `api_frontend`
- nodes. Persistent jobs are driven by the `couch_epi` callback mechanism,
- which notifies the `couch_replicator` application when documents in
- `_replicator` DBs
- are updated, or when `_replicator` DBs are created and deleted. Transient jobs
- are created from the `_replicate` HTTP handler directly. Newly created jobs
- are in a `pending` state.
-
- 2) Each `replication` node spawns some acceptor processes which wait in
- `couch_jobs:accept/2` call for jobs. It will accept only jobs which are
- scheduled to run at a time less than or equal to the current time.
-
- 3) After a job is accepted, its state is updated to `running`, and then a
- gen_server process monitoring these replication jobs will spawn another
- acceptor. That happens until the `max_jobs` limit is reached.
-
- 4) The same monitoring gen_server will periodically check if there are any
- pending jobs in the queue and, if there are, spawn up to some `max_churn`
- number of new acceptors. These acceptors may start new jobs and, if they do,
- for each one of them, the oldest running job will be stopped and re-enqueued
- as `pending`. This largely follows the logic from the replication scheduler
- in CouchDB <= 3.x, except that it uses `couch_jobs` as the central queuing
- and scheduling mechanism.
-
- 5) After the job is marked as `running`, it computes its `replication_id`,
- initializes an internal replication state record from the job's data object, and
- starts replicating. Underneath this level the logic is identical to what's
- already happening in CouchDB <= 3.x and so it is not described further in this
- document.
-
- 6) As jobs run, they periodically checkpoint, and when they do that, they also
- recompute their `replication_id`. In the case of filtered replications the
- `replication_id` may change, and if so, that job is stopped and re-enqueued as
- `pending`. Also, during checkpointing the job's data value is updated with
- stats such that the job stays active and doesn't get re-enqueued by the
- `couch_jobs` activity monitor.
-
- 7) If the job crashes, it will reschedule itself in `gen_server:terminate/2`
- via a `couch_jobs:resubmit/3` call to run again at some future time, defined
- roughly as `now + min(min_backoff_penalty * 2^consecutive_errors,
- max_backoff_penalty)` (the maximum penalty caps the exponential backoff; see
- the sketch after this list). If a job starts and successfully runs for some
- predefined period of time without crashing, it is considered to be `"healed"`
- and its `consecutive_errors` count is reset to 0.
-
- 8) If the node where a replication job runs crashes, or the job is manually
- killed via `exit(Pid, kill)`, the `couch_jobs` activity monitor will
- automatically re-enqueue the job as `pending`.
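-
-A minimal Erlang sketch of the accept and backoff logic from steps 2 and 7
-(`couch_jobs:accept/2` and `couch_jobs:resubmit/3` are named above; the
-`max_sched_time` option, the `?REP_JOB_TYPE` macro and the helper names are
-assumptions made for illustration):
-
-```erlang
-%% Step 2: accept only jobs scheduled to run at or before the current time.
-accept_job() ->
-    Opts = #{max_sched_time => erlang:system_time(second)},
-    couch_jobs:accept(?REP_JOB_TYPE, Opts).
-
-%% Step 7: exponential backoff capped at max_backoff_penalty.
-next_run_time(ConsecutiveErrors, MinPenalty, MaxPenalty) ->
-    Penalty = min(MinPenalty * (1 bsl ConsecutiveErrors), MaxPenalty),
-    erlang:system_time(second) + Penalty.
-```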
-
-## Replicator Job States
-
-### Description
-
-The set of replication job states is defined as:
-
- * `pending` : A job is marked as `pending` in these cases:
-    - As soon as a job is created from an `api_frontend` node
-    - When it is stopped to let other replication jobs run
-    - When a filtered replication's `replication_id` changes
-
- * `running` : Set when a job is accepted by the `couch_jobs:accept/2`
-   call. This generally means the job is actually running on a node;
-   however, in cases when a node crashes, the job may show as
-   `running` on that node until the `couch_jobs` activity monitor
-   re-enqueues the job and it starts running on another node.
-
- * `crashing` : The job was running, but then crashed with an intermittent
-   error. The job's data has an error count which is incremented, and then a
-   backoff penalty is computed and the job is rescheduled to try again at some
-   point in the future.
-
- * `completed` : One-shot replications which have completed.
-
- * `failed` : This can happen when:
-    - A replication job could not be parsed from a replication document. For
-      example, if the user has not specified a `"source"` field.
-    - A transient replication job crashes. Transient jobs don't get rescheduled
-      to run again after they crash.
-    - There already is another persistent replication job running or pending
-      with the same `replication_id`.
-
-### State Differences From CouchDB <= 3.x
-
-The set of states is slightly different from the one in CouchDB <= 3.x. There
-are now fewer states, as some of them have been combined:
-
- * `initializing` was combined with `pending`
-
- * `error` was combined with `crashing`
-
-### Mapping Between couch_jobs States and Replication States
-
-The `couch_jobs` application has its own set of state definitions, and they
-map to replicator states like so:
-
- | Replicator States | `couch_jobs` States |
- | ----------------- | ------------------- |
- | pending           | pending             |
- | running           | running             |
- | crashing          | pending             |
- | completed         | finished            |
- | failed            | finished            |
-
-### State Transition Diagram
-
-Jobs start in the `pending` state after either a `_replicator` db doc
-update or a POST to the `/_replicate` endpoint. Continuous jobs will
-normally toggle between the `pending` and `running` states. One-shot jobs
-may toggle between `pending` and `running` a few times and then end up
-in `completed`.
-
-```
-_replicator doc       +-------+
-POST /_replicate ---->+pending|
-                      +-------+
-                          ^
-                          |
-                          |
-                          v
-                      +---+---+      +--------+
-            +---------+running+<---->|crashing|
-            |         +---+---+      +--------+
-            |             |
-            |             |
-            v             v
-        +------+     +---------+
-        |failed|     |completed|
-        +------+     +---------+
-```
-
-
-## Replication ID Collisions
-
-Multiple replication jobs may specify replications which map to the same
-`replication_id`. To handle these collisions there is an FDB subspace `(...,
-LayerPrefix, ?REPLICATION_IDS, replication_id) -> job_id` to keep track of
-them. After the `replication_id` is computed, each replication job checks if
-there is already another job pending or running with the same `replication_id`.
-If the other job is transient, then the current job will reschedule itself as
-`crashing`. If the other job is persistent, the current job will fail
-permanently as `failed`.
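-
-An illustrative sketch of that decision logic (`lookup_replication_id/2` and
-`is_transient/1` are hypothetical stand-ins for a read from the
-`?REPLICATION_IDS` subspace and a check of the other job's data):
-
-```erlang
-check_replication_id(Tx, RepId, JobId) ->
-    case lookup_replication_id(Tx, RepId) of
-        not_found ->
-            ok;                     % no collision, claim it and run
-        JobId ->
-            ok;                     % our own existing claim
-        OtherJobId ->
-            case is_transient(OtherJobId) of
-                true  -> {reschedule, crashing};  % retry later with backoff
-                false -> {halt, failed}           % fail permanently
-            end
-    end.
-```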
-
-## Replication Parameter Validation
-
-`_replicator` documents in CouchDB <= 3.x were parsed and validated in a
-two-step process:
-
-  1) In a validate-doc-update (VDU) JavaScript function from a programmatically
-  inserted `_design` document. This validation happened when the document was
-  updated, and performed some rough checks on field names and value types. If
-  this validation failed, the document update operation was rejected.
-
-  2) Inside the replicator's Erlang code, when the document was translated to
-  an internal record used by the replication application. This validation was
-  more thorough but didn't have very friendly error messages. If validation
-  failed here, the job would be marked as `failed`.
-
-For CouchDB 4.x the proposal is to use only the Erlang parser. It would be
-called from the `before_doc_update` callback, which runs before every document
-update. If validation fails there, the document update operation is rejected.
-This should reduce code duplication and also provide better feedback to users
-directly when they update `_replicator` documents.
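-
-A sketch of how that validation could hook in (assuming the 3.x-style
-three-argument callback shape; the `parse_rep_doc/1` function name is an
-assumption, though `couch_replicator_parse` is listed under affected modules
-below):
-
-```erlang
-before_doc_update(#doc{} = Doc, _Db, replicated_changes) ->
-    %% Don't re-validate documents arriving via internal replication.
-    Doc;
-before_doc_update(#doc{} = Doc, _Db, _UpdateType) ->
-    case couch_replicator_parse:parse_rep_doc(Doc) of
-        {ok, _Rep} -> Doc;
-        {error, Reason} -> throw({forbidden, Reason})
-    end.
-```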
-
-## Transient Job Behavior
-
-In CouchDB <= 3.x transient replication jobs ran in memory on a particular
-node in the cluster. If the node where the replication job ran crashed, the
-job would simply disappear without a trace. It was up to the user to
-periodically monitor the job status and re-create the job. In the current
-design, `transient` jobs are persisted to FDB as `couch_jobs` records, and so
-will survive node restarts. Also, transient jobs used to disappear immediately
-after they completed or failed. This design proposes keeping them around for a
-configurable amount of time to allow users to retrieve their status via the
-`_scheduler/jobs/$id` API.
-
-## Monitoring Endpoints
-
-The `_active_tasks`, `_scheduler/jobs` and `_scheduler/docs` endpoints are
-handled by traversing the replication jobs' data using a new
-`couch_jobs:fold_jobs/4` API function to retrieve each job's data. The
-`_active_tasks` implementation already works that way, and the `_scheduler/*`
-endpoints will work similarly.
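-
-For example, gathering task statuses might look roughly like the sketch below
-(only `couch_jobs:fold_jobs/4` is named above; the fold fun's argument list,
-`?REP_JOB_TYPE` and `format_task/3` are assumptions):
-
-```erlang
-list_tasks(Tx) ->
-    couch_jobs:fold_jobs(Tx, ?REP_JOB_TYPE,
-        fun(_Tx, JobId, State, Data, Acc) ->
-            [format_task(JobId, State, Data) | Acc]
-        end, []).
-```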
-
-## Replication Documents Not Updated For Transient Errors
-
-Configuration
-[option](https://docs.couchdb.org/en/latest/replication/replicator.html?highlight=update_docs#compatibility-mode)
-`[replicator] update_docs = false` was introduced with the scheduling
-replicator in a 2.x release. It controls whether to update replication
-documents with transient states like `triggered` and `error`. It defaulted to
-`false` and was mainly for compatibility with older monitoring user scripts.
-That behavior now becomes hard-coded such that replication documents are only
-updated with the terminal states `failed` and `completed`. Users should use
-the `_scheduler/docs` API to check for completion status instead.
-
-
-# Advantages and Disadvantages
-
-Advantages:
-
- * Simplicity: re-using `couch_jobs` means having a lot less code to maintain
-   in `couch_replicator`. In the draft implementation there are about 3000
-   lines of code saved compared to the replicator application in CouchDB 3.x.
-
- * Simpler endpoint and monitoring implementation
-
- * Fewer replication job states to keep track of
-
- * Transient replications can survive node crashes and restarts
-
- * Simplified and improved validation logic
-
- * Using node types allows tightening firewall rules such that only
-   `replication` nodes are the ones which may make arbitrary requests outside
-   the cluster, and `api_frontend` nodes are the only ones that may accept
-   incoming connections.
-
-Disadvantages:
-
- * Behavior changes for transient jobs
-
- * A centralized job queue might mean handling some number of conflicts
-   generated in the FDB backend when jobs are accepted. These are mitigated
-   using the
-   `startup_jitter` configuration parameter and a configurable number of max
-   acceptors per node.
-
- * In monitoring API responses, `running` job state might not immediately
-   reflect the running process state on the replication node. If the node
-   crashes, it might take up to a minute or two until the job is re-enqueued by
-   the `couch_jobs` activity monitor.
-
-# Key Changes
-
- * Behavior changes for transient jobs
-
- * A delay in `running` state as reflected in monitoring API responses
-
- * `[replicator] update_docs = false` configuration option becomes hard-coded
-
-## Applications and Modules affected
-
- * couch_jobs : New APIs to fold jobs and get a pending job count estimate
-
- * fabric2_db : Adding EPI db create/delete callbacks
-
- * couch_replicator :
-    - Remove `couch_replicator_scheduler*` modules
-    - Remove `couch_replicator_doc_processor_*` modules
-    - `couch_replicator` : job creation and a general API entry-point for
-      couch_replicator.
-    - `couch_replicator_job` : runs each replication job
-    - `couch_replicator_job_server` : replication job monitoring gen_server
-    - `couch_replicator_parse` : parses replication document and HTTP
-      `_replicate` POST bodies
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-The ability to confine replication jobs to run on `replication` nodes improves
-the security posture. It is possible to set up firewall rules which allow
-egress traffic only from those nodes.
-
-# References
-
-* [Background Jobs RFC](https://github.com/apache/couchdb-documentation/blob/master/rfcs/007-background-jobs.md)
-
-* [Node Types RFC](https://github.com/apache/couchdb-documentation/blob/master/rfcs/013-node-types.md)
-
-* [CouchDB 3.x replicator implementation](https://github.com/apache/couchdb/blob/3.x/src/couch_replicator/README.md)
-
-# Co-authors
-
- * @davisp
-
-# Acknowledgements
-
- * @davisp
diff --git a/rfcs/images/SkExample1.png b/rfcs/images/SkExample1.png
deleted file mode 100644
index cb3abb1..0000000
Binary files a/rfcs/images/SkExample1.png and /dev/null differ
diff --git a/rfcs/images/SkExample2.png b/rfcs/images/SkExample2.png
deleted file mode 100644
index 86d4506..0000000
Binary files a/rfcs/images/SkExample2.png and /dev/null differ
diff --git a/rfcs/images/SkExample3.png b/rfcs/images/SkExample3.png
deleted file mode 100644
index e47e216..0000000
Binary files a/rfcs/images/SkExample3.png and /dev/null differ
diff --git a/rfcs/template.md b/rfcs/template.md
deleted file mode 100644
index 08bd054..0000000
--- a/rfcs/template.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: ''
-labels: rfc, discussion
-assignees: ''
-
----
-
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-## Abstract
-
-[NOTE]: # ( Provide a 1-to-3 paragraph overview of the requested change. )
-[NOTE]: # ( Describe what problem you are solving, and the general approach. )
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-[TIP]:  # ( Provide a list of any unique terms or acronyms, and their definitions here.)
-
----
-
-# Detailed Description
-
-[NOTE]: # ( Describe the solution being proposed in greater detail. )
-[NOTE]: # ( Assume your audience has knowledge of, but not necessarily familiarity )
-[NOTE]: # ( with, the CouchDB internals. Provide enough context so that the reader )
-[NOTE]: # ( can make an informed decision about the proposal. )
-
-[TIP]:  # ( Artwork may be attached to the submission and linked as necessary. )
-[TIP]:  # ( ASCII artwork can also be included in code blocks, if desired. )
-
-# Advantages and Disadvantages
-
-[NOTE]: # ( Briefly, list the benefits and drawbacks that would be realized should )
-[NOTE]: # ( the proposal be accepted for inclusion into Apache CouchDB. )
-
-# Key Changes
-
-[TIP]: # ( If the changes will affect how a user interacts with CouchDB, explain. )
-
-## Applications and Modules affected
-
-[NOTE]: # ( List the OTP applications or functional modules in CouchDB affected by the proposal. )
-
-## HTTP API additions
-
-[NOTE]: # ( Provide *exact* detail on each new API endpoint, including: )
-[NOTE]: # (   HTTP methods [HEAD, GET, PUT, POST, DELETE, etc.] )
-[NOTE]: # (   Synopsis of functionality )
-[NOTE]: # (   Headers and parameters accepted )
-[NOTE]: # (   JSON in [if a PUT or POST type] )
-[NOTE]: # (   JSON out )
-[NOTE]: # (   Valid status codes and their definitions )
-[NOTE]: # (   A proposed Request and Response block )
-
-## HTTP API deprecations
-
-[NOTE]: # ( Provide *exact* detail on the API endpoints to be deprecated. )
-[NOTE]: # ( If these endpoints are replaced by new endpoints, list those as well. )
-[NOTE]: # ( State the proposed version in which the deprecation and removal will occur. )
-
-# Security Considerations
-
-[NOTE]: # ( Include any impact to the security of CouchDB here. )
-
-# References
-
-[TIP]:  # ( Include any references to CouchDB documentation, mailing list discussion, )
-[TIP]:  # ( external standards or other links here. )
-
-# Acknowledgements
-
-[TIP]:  # ( Who helped you write this RFC? )
diff --git a/src/about.rst b/src/about.rst
deleted file mode 100644
index a3bfc50..0000000
--- a/src/about.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _about:
-
-===========================
-About CouchDB Documentation
-===========================
-
-License
-=======
-
-.. literalinclude:: ../LICENSE
-    :language: none
-    :lines: 1-202
diff --git a/src/api/basics.rst b/src/api/basics.rst
deleted file mode 100644
index 60c76a5..0000000
--- a/src/api/basics.rst
+++ /dev/null
@@ -1,589 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/basics:
-
-==========
-API Basics
-==========
-
-The CouchDB API is the primary method of interfacing to a CouchDB instance.
-Requests are made using HTTP and are used to retrieve information from the
-database, store new data, and query and format the information stored within
-the documents.
-
-Requests to the API can be categorised by the different areas of the CouchDB
-system that you are accessing, and the HTTP method used to send the request.
-Different methods imply different operations, for example retrieval of
-information from the database is typically handled by the ``GET`` operation,
-while updates are handled by either a ``POST`` or ``PUT`` request. There are
-some differences between the information that must be supplied for the
-different methods. For a guide to the basic HTTP methods and request structure,
-see :ref:`api/format`.
-
-For nearly all operations, the submitted data and the returned data structure
-are defined within a JavaScript Object Notation (JSON) object. Basic
-information on the content and data types for JSON is provided in :ref:`json`.
-
-Errors when accessing the CouchDB API are reported using standard HTTP Status
-Codes. A guide to the generic codes returned by CouchDB is provided in
-:ref:`errors`.
-
-When accessing specific areas of the CouchDB API, specific information and
-examples on the HTTP methods and request, JSON structures, and error codes are
-provided.
-
-.. _api/format:
-
-Request Format and Responses
-============================
-
-CouchDB supports the following HTTP request methods:
-
-- ``GET``
-
-  Request the specified item. As with normal HTTP requests, the format of the
-  URL defines what is returned. With CouchDB this can include static items,
-  database documents, and configuration and statistical information. In most
-  cases the information is returned in the form of a JSON document.
-
-- ``HEAD``
-
-  The ``HEAD`` method is used to get the HTTP header of a ``GET`` request
-  without the body of the response.
-
-- ``POST``
-
-  Upload data. Within CouchDB ``POST`` is used to set values, including
-  uploading documents, setting document values, and starting certain
-  administration commands.
-
-- ``PUT``
-
-  Used to put a specified resource. In CouchDB ``PUT`` is used to create new
-  objects, including databases, documents, views and design documents.
-
-- ``DELETE``
-
-  Deletes the specified resource, including documents, views, and design
-  documents.
-
-- ``COPY``
-
-  A special method that can be used to copy documents and objects.
-
-If you use an unsupported HTTP request type with a URL that does not support
-the specified type, then a ``405 - Method Not Allowed`` will be returned,
-listing the supported HTTP methods. For example:
-
-.. code-block:: javascript
-
-    {
-        "error":"method_not_allowed",
-        "reason":"Only GET,HEAD allowed"
-    }
-
-HTTP Headers
-============
-
-Because CouchDB uses HTTP for all communication, you need to ensure that the
-correct HTTP headers are supplied (and processed on retrieval) so that you get
-the right format and encoding. Different environments and clients will be more
-or less strict on the effect of these HTTP headers (especially when not
-present). Where possible you should be as specific as possible.
-
-Request Headers
----------------
-
-- ``Accept``
-
-  Specifies the list of accepted data types to be returned by the server (i.e.
-  that are accepted/understandable by the client). The format should be a list
-  of one or more MIME types, separated by commas.
-
-  For the majority of requests the definition should be for JSON data
-  (``application/json``). For attachments you can either specify the MIME type
-  explicitly, or use ``*/*`` to specify that all file types are supported. If
-  the ``Accept`` header is not supplied, then the ``*/*`` MIME type is assumed
-  (i.e. client accepts all formats).
-
-  The use of ``Accept`` in queries for CouchDB is not required, but is highly
-  recommended as it helps to ensure that the data returned can be processed by
-  the client.
-
-  If you specify a data type using the ``Accept`` header, CouchDB will honor
-  the specified type in the ``Content-type`` header field returned. For
-  example, if you explicitly request ``application/json`` in the ``Accept``
-  header of a request, the returned ``Content-type`` field will be
-  ``application/json``.
-
-  For example, when sending a request without an explicit ``Accept`` header, or
-  when specifying ``*/*``:
-
-  .. code-block:: http
-
-      GET /recipes HTTP/1.1
-      Host: couchdb:5984
-      Accept: */*
-
-  The returned headers are:
-
-  .. code-block:: http
-
-      HTTP/1.1 200 OK
-      Server: CouchDB (Erlang/OTP)
-      Date: Thu, 13 Jan 2011 13:39:34 GMT
-      Content-Type: text/plain;charset=utf-8
-      Content-Length: 227
-      Cache-Control: must-revalidate
-
-  .. Note::
-      The returned content type is ``text/plain`` even though the information
-      returned by the request is in JSON format.
-
-  Explicitly specifying the ``Accept`` header:
-
-  .. code-block:: http
-
-      GET /recipes HTTP/1.1
-      Host: couchdb:5984
-      Accept: application/json
-
-  The headers returned include the ``application/json`` content type:
-
-  .. code-block:: http
-
-      HTTP/1.1 200 OK
-      Server: CouchDB (Erlang/OTP)
-      Date: Thu, 13 Jan 2013 13:40:11 GMT
-      Content-Type: application/json
-      Content-Length: 227
-      Cache-Control: must-revalidate
-
-- ``Content-type``
-
-  Specifies the content type of the information being supplied within the
-  request. The specification uses MIME type specifications. For the majority of
-  requests this will be JSON (``application/json``). For some settings the MIME
-  type will be plain text. When uploading attachments it should be the
-  corresponding MIME type for the attachment or binary
-  (``application/octet-stream``).
-
-  The use of the ``Content-type`` on a request is highly recommended.
-
-Response Headers
-----------------
-
-Response headers are returned by the server when sending back content and
-include a number of different header fields, many of which are standard HTTP
-response headers and have no significance to CouchDB operation. The response
-headers important to CouchDB are listed below.
-
-- ``Cache-control``
-
-  The cache control HTTP response header provides a suggestion for client
-  caching mechanisms on how to treat the returned information. CouchDB
-  typically returns ``must-revalidate``, which indicates that the
-  information should be revalidated if possible. This is used to ensure that
-  the dynamic nature of the content is correctly updated.
-
-- ``Content-length``
-
-  The length (in bytes) of the returned content.
-
-- ``Content-type``
-
-  Specifies the MIME type of the returned data. For most requests, the returned
-  MIME type is ``text/plain``. All text is encoded in Unicode (UTF-8), and this
-  is explicitly stated in the returned ``Content-type``, as
-  ``text/plain;charset=utf-8``.
-
-- ``Etag``
-
-  The ``Etag`` HTTP header field is used to show the revision for a document,
-  or a view.
-
-  ETags have been assigned to a map/reduce group (the collection of views in a
-  single design document). Any change to any of the indexes for those views
-  would generate a new ETag for all view URLs in a single design doc, even if
-  that specific view's results had not changed.
-
-  Each ``_view`` URL has its own ETag which only gets updated when changes are
-  made to the database that affect that index. If the index for that specific
-  view does not change, that view keeps its original ETag (therefore
-  sending back ``304 - Not Modified`` more often; see the example after this
-  list).
-
-- ``Transfer-Encoding``
-
-  If the response uses an encoding, then it is specified in this header field.
-
-  ``Transfer-Encoding: chunked`` means that the response is sent in parts, a
-  method known as `chunked transfer encoding`_. This is used when CouchDB does
-  not know beforehand the size of the data it will send (for example,
-  the :ref:`changes feed <changes>`).
-
-.. _chunked transfer encoding:
-    https://en.wikipedia.org/wiki/Chunked_transfer_encoding
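-
-For example, a client can replay a request with the previously returned ETag
-to avoid re-downloading an unchanged view result. A minimal sketch using
-Erlang's bundled ``httpc`` client (the URL is illustrative):
-
-.. code-block:: erlang
-
-    {ok, _} = application:ensure_all_started(inets),
-    Url = "http://localhost:5984/db/_design/ddoc/_view/by_name",
-    {ok, {{_, 200, _}, Headers, _Body}} =
-        httpc:request(get, {Url, []}, [], []),
-    Etag = proplists:get_value("etag", Headers),
-    %% Returns 304 - Not Modified while the view index is unchanged.
-    {ok, {{_, Status, _}, _, _}} =
-        httpc:request(get, {Url, [{"If-None-Match", Etag}]}, [], []).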
-
-.. _json:
-
-JSON Basics
-===========
-
-The majority of requests and responses to CouchDB use the JavaScript Object
-Notation (JSON) for formatting the content and structure of the data and
-responses.
-
-JSON is used because it is the simplest and easiest solution for working with
-data within a web browser, as JSON structures can be evaluated and used as
-JavaScript objects within the web browser environment. JSON also integrates
-with the server-side JavaScript used within CouchDB.
-
-JSON supports the same basic types as JavaScript. These are:
-
-- Array - a list of values enclosed in square brackets. For example:
-
-  .. code-block:: javascript
-
-      ["one", "two", "three"]
-
-- Boolean - a ``true`` or ``false`` value. You can use these values directly.
-  For example:
-
-  .. code-block:: javascript
-
-      { "value": true}
-
-- Number - an integer or floating-point number.
-
-- Object - a set of key/value pairs (i.e. an associative array, or hash). The
-  key must be a string, but the value can be any of the supported JSON values.
-  For example:
-
-  .. code-block:: javascript
-
-      {
-          "servings" : 4,
-          "subtitle" : "Easy to make in advance, and then cook when ready",
-          "cooktime" : 60,
-          "title" : "Chicken Coriander"
-      }
-
-  In CouchDB, the JSON object is used to represent a variety of structures,
-  including the main CouchDB document.
-
-- String - this should be enclosed by double-quotes and supports Unicode
-  characters and backslash escaping. For example:
-
-  .. code-block:: javascript
-
-      "A String"
-
-Parsing JSON into a JavaScript object is supported through the ``JSON.parse()``
-function in JavaScript, or through various libraries that will perform the
-parsing of the content into a JavaScript object for you. Libraries for parsing
-and generating JSON are available in many languages, including Perl, Python,
-Ruby, Erlang and others.
-
-.. warning::
-    Care should be taken to ensure that your JSON structures are valid;
-    invalid structures will cause CouchDB to return an HTTP status code of 500
-    (server error).
-
-.. _json/numbers:
-
-Number Handling
----------------
-
-Developers and users new to how computers handle numbers are often surprised
-when a number stored in JSON format is not returned as exactly the same
-number when compared character by character.
-
-Any numbers defined in JSON that contain a decimal point or exponent will be
-passed through the Erlang VM's idea of the "double" data type. Any numbers that
-are used in views will pass through the view server's idea of a number (the
-common JavaScript case means even integers pass through a double due to
-JavaScript's definition of a number).
-
-Consider this document that we write to CouchDB:
-
-.. code-block:: javascript
-
-    {
-        "_id":"30b3b38cdbd9e3a587de9b8122000cff",
-        "number": 1.1
-    }
-
-Now let’s read that document back from CouchDB:
-
-.. code-block:: javascript
-
-    {
-        "_id":"30b3b38cdbd9e3a587de9b8122000cff",
-        "_rev":"1-f065cee7c3fd93aa50f6c97acde93030",
-        "number":1.1000000000000000888
-    }
-
-What happens is that CouchDB changes the textual representation of the
-result of decoding what it was given into some numerical format. In most
-cases this is an `IEEE 754`_ double precision floating point number, which
-is exactly what almost all other languages use as well.
-
-.. _IEEE 754: https://en.wikipedia.org/wiki/IEEE_754-2008
-
-What Erlang does a bit differently than other languages is that it does not
-attempt to pretty print the resulting output to use the shortest number of
-characters. For instance, this is why we have this relationship:
-
-.. code-block:: erlang
-
-    ejson:encode(ejson:decode(<<"1.1">>)).
-    <<"1.1000000000000000888">>
-
-What can be confusing here is that internally those two formats decode into the
-same IEEE-754 representation. And more importantly, it will decode into a
-fairly close representation when passed through all major parsers that we know
-about.
-
-While we've only been discussing cases where the textual representation
-changes, another important case is when an input value contains more precision
-than can actually be represented in a double. (You could argue that this case
-is actually "losing" data if you don't accept that numbers are stored in
-doubles.)
-
-Here's a log for a couple of the more common JSON libraries that happen to be
-on the author's machine:
-
-Ejson (CouchDB's current parser) at CouchDB sha 168a663b::
-
-    $ ./utils/run -i
-    Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:2:2] [rq:2]
-    [async-threads:4] [hipe] [kernel-poll:true]
-
-    Eshell V5.8.5  (abort with ^G)
-    1> ejson:encode(ejson:decode(<<"1.01234567890123456789012345678901234567890">>)).
-    <<"1.0123456789012346135">>
-    2> F = ejson:encode(ejson:decode(<<"1.01234567890123456789012345678901234567890">>)).
-    <<"1.0123456789012346135">>
-    3> ejson:encode(ejson:decode(F)).
-    <<"1.0123456789012346135">>
-
-Node::
-
-    $ node -v
-    v0.6.15
-    $ node
-    JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    '1.0123456789012346'
-    var f = JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    undefined
-    JSON.stringify(JSON.parse(f))
-    '1.0123456789012346'
-
-Python::
-
-    $ python
-    Python 2.7.2 (default, Jun 20 2012, 16:23:33)
-    [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
-    Type "help", "copyright", "credits" or "license" for more information.
-    import json
-    json.dumps(json.loads("1.01234567890123456789012345678901234567890"))
-    '1.0123456789012346'
-    f = json.dumps(json.loads("1.01234567890123456789012345678901234567890"))
-    json.dumps(json.loads(f))
-    '1.0123456789012346'
-
-Ruby::
-
-    $ irb --version
-    irb 0.9.5(05/04/13)
-    require 'JSON'
-    => true
-    JSON.dump(JSON.load("[1.01234567890123456789012345678901234567890]"))
-    => "[1.01234567890123]"
-    f = JSON.dump(JSON.load("[1.01234567890123456789012345678901234567890]"))
-    => "[1.01234567890123]"
-    JSON.dump(JSON.load(f))
-    => "[1.01234567890123]"
-
-.. note::
-    A small aside on Ruby: it requires a top-level object or array, so the
-    value was wrapped in an array. This doesn't affect the result of parsing
-    the number, though.
-
-Spidermonkey::
-
-    $ js -h 2>&1 | head -n 1
-    JavaScript-C 1.8.5 2011-03-31
-    $ js
-    js> JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    "1.0123456789012346"
-    js> var f = JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    js> JSON.stringify(JSON.parse(f))
-    "1.0123456789012346"
-
-As you can see, they all behave much the same, except that Ruby actually
-does appear to lose some precision relative to the other libraries.
-
-The astute observer will notice that ejson (the CouchDB JSON library) reported
-an extra three digits. While it's tempting to think that this is due to some
-internal difference, it's just a more specific case of the 1.1 input as
-described above.
-
-The important point to realize here is that a double can only hold a finite
-number of values. What we're doing here is generating a string that, when
-passed through the "standard" floating point parsing algorithms (i.e.,
-``strtod``), will result in the same bit pattern in memory as we started with.
-Or, put slightly differently, the bytes in a JSON serialized number are chosen
-such that they refer to a single specific value that a double can represent.
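-
-As an aside, newer Erlang/OTP releases (24 and later) can print this shortest
-round-trip form themselves, which illustrates the goal of such an algorithm:
-
-.. code-block:: erlang
-
-    1> F = binary_to_float(<<"1.1">>).
-    1.1
-    2> float_to_binary(F, [short]).
-    <<"1.1">>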
-
-The important point to understand is that we're mapping from one infinite set
-onto a finite set. An easy way to see this is by reflecting on this::
-
-    1.0 == 1.00 == 1.000 == 1.(infinite zeros)
-
-Obviously a computer can't hold infinite bytes so we have to decimate our
-infinitely sized set to a finite set that can be represented concisely.
-
-The game that other JSON libraries are playing is merely:
-
-"How few characters do I have to use to select this specific value for a
-double"
-
-And that game has lots and lots of subtle details that are difficult to
-duplicate in C without a significant amount of effort (it took Python over a
-year to get it sorted with their fancy build systems that automatically run on
-a number of different architectures).
-
-Hopefully we've shown that CouchDB is not doing anything "funky" by changing
-input. It's behaving the same as any other common JSON library does; it's just
-not pretty-printing its output.
-
-On the other hand, if you actually are in a position where an IEEE-754 double
-is not a satisfactory data type for your numbers, then the answer, as has been
-stated, is to not pass your numbers through this representation. In JSON this
-is accomplished by encoding them as a string or by using integer types
-(although integer types can still bite you if you use a platform that has a
-different integer representation than normal, i.e., JavaScript).
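-
-For example, a short sketch of the string-encoding approach using the
-``ejson`` module shown earlier (the document contents are illustrative):
-
-.. code-block:: erlang
-
-    %% Store the exact textual value as a string, not a number.
-    Doc = {[{<<"_id">>, <<"invoice-1">>},
-            {<<"amount">>, <<"1.01234567890123456789">>}]},
-    ejson:encode(Doc).
-    %% The "amount" value round-trips byte-for-byte because it is a string.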
-
-Further information can be found easily, including the
-`Floating Point Guide`_, and  `David Goldberg's Reference`_.
-
-.. _Floating Point Guide: http://floating-point-gui.de/
-.. _David Goldberg's Reference: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
-
-Also, if anyone is really interested in changing this behavior, we're all ears
-for contributions to `jiffy`_ (which is theoretically going to replace ejson
-when we get around to updating the build system). The places we've looked for
-inspiration are Tcl and Python. If you know a decent implementation of this
-float printing algorithm, give us a holler.
-
-.. _jiffy: https://github.com/davisp/jiffy
-
-.. _errors:
-
-HTTP Status Codes
-=================
-
-With the interface to CouchDB working through HTTP, error codes and statuses
-are reported using a combination of the HTTP status code number, and
-corresponding data in the body of the response data.
-
-A list of the error codes returned by CouchDB, and generic descriptions of the
-related errors are provided below. The meaning of different status codes for
-specific request types are provided in the corresponding API call reference.
-
-- ``200 - OK``
-
-  Request completed successfully.
-
-- ``201 - Created``
-
-  Document created successfully.
-
-- ``202 - Accepted``
-
-  Request has been accepted, but the corresponding operation may not have
-  completed. This is used for background operations, such as database
-  compaction.
-
-- ``304 - Not Modified``
-
-  The additional content requested has not been modified. This is used with the
-  ETag system to identify the version of information returned.
-
-- ``400 - Bad Request``
-
-  Bad request structure. The error can indicate an error with the request URL,
-  path or headers. Differences in the supplied MD5 hash and content also
-  trigger this error, as this may indicate message corruption.
-
-- ``401 - Unauthorized``
-
-  The item requested was not available using the supplied authorization, or
-  authorization was not supplied.
-
-- ``403 - Forbidden``
-
-  The requested item or operation is forbidden.
-
-- ``404 - Not Found``
-
-  The requested content could not be found. The content will include further
-  information, as a JSON object, if available. The structure will contain two
-  keys, ``error`` and ``reason``. For example:
-
-  .. code-block:: javascript
-
-      {"error":"not_found","reason":"no_db_file"}
-
-- ``405 - Method Not Allowed``
-
-  A request was made using an invalid HTTP request type for the URL requested.
-  For example, you have requested a ``PUT`` when a ``POST`` is required. Errors
-  of this type can also be triggered by invalid URL strings.
-
-- ``406 - Not Acceptable``
-
-  The requested content type is not supported by the server.
-
-- ``409 - Conflict``
-
-  Request resulted in an update conflict.
-
-- ``412 - Precondition Failed``
-
-  The request headers from the client and the capabilities of the server do not
-  match.
-
-- ``413 - Request Entity Too Large``
-
-  A document exceeds the configured :config:option:`couchdb/max_document_size`
-  value or the entire request exceeds the
-  :config:option:`httpd/max_http_request_size` value.
-
-- ``415 - Unsupported Media Type``
-
-  The content type of the information being requested or submitted is not
-  among the content types the server supports.
-
-- ``416 - Requested Range Not Satisfiable``
-
-  The range specified in the request header cannot be satisfied by the server.
-
-- ``417 - Expectation Failed``
-
-  When sending documents in bulk, the bulk load operation failed.
-
-- ``500 - Internal Server Error``
-
-  The request was invalid, either because the supplied JSON was invalid, or
-  invalid information was supplied as part of the request.
diff --git a/src/api/database/bulk-api.rst b/src/api/database/bulk-api.rst
deleted file mode 100644
index 14e72ba..0000000
--- a/src/api/database/bulk-api.rst
+++ /dev/null
@@ -1,1009 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db/all_docs:
-
-===================
-``/{db}/_all_docs``
-===================
-
-.. http:get:: /{db}/_all_docs
-    :synopsis: Returns a built-in view of all documents in this database
-
-    Executes the built-in `_all_docs` :ref:`view <views>`, returning all of the
-    documents in the database.  With the exception of the URL parameters
-    (described below), this endpoint works identically to any other view. Refer
-    to the :ref:`view endpoint <api/ddoc/view>` documentation for a complete
-    description of the available query parameters and the format of the returned
-    data.
-
-    :param db: Database name
-    :<header Content-Type: :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-    :code 200: Request completed successfully
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_all_docs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 16:22:56 GMT
-        ETag: "1W2DJUZFZSZD9K78UFA3GZWB4"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "offset": 0,
-            "rows": [
-                {
-                    "id": "16e458537602f5ef2a710089dffd9453",
-                    "key": "16e458537602f5ef2a710089dffd9453",
-                    "value": {
-                        "rev": "1-967a00dff5e02add41819138abb3284d"
-                    }
-                },
-                {
-                    "id": "a4c51cdfa2069f3e905c431114001aff",
-                    "key": "a4c51cdfa2069f3e905c431114001aff",
-                    "value": {
-                        "rev": "1-967a00dff5e02add41819138abb3284d"
-                    }
-                },
-                {
-                    "id": "a4c51cdfa2069f3e905c4311140034aa",
-                    "key": "a4c51cdfa2069f3e905c4311140034aa",
-                    "value": {
-                        "rev": "5-6182c9c954200ab5e3c6bd5e76a1549f"
-                    }
-                },
-                {
-                    "id": "a4c51cdfa2069f3e905c431114003597",
-                    "key": "a4c51cdfa2069f3e905c431114003597",
-                    "value": {
-                        "rev": "2-7051cbe5c8faecd085a3fa619e6e6337"
-                    }
-                },
-                {
-                    "id": "f4ca7773ddea715afebc4b4b15d4f0b3",
-                    "key": "f4ca7773ddea715afebc4b4b15d4f0b3",
-                    "value": {
-                        "rev": "2-7051cbe5c8faecd085a3fa619e6e6337"
-                    }
-                }
-            ],
-            "total_rows": 5
-        }
-
-.. http:post:: /{db}/_all_docs
-    :synopsis: Returns a built-in view of all documents in this database
-
-    :method:`POST` `_all_docs` functionality supports identical parameters and behavior
-    as specified in the :get:`/{db}/_all_docs` API but allows for the query string
-    parameters to be supplied as keys in a JSON object in the body of the `POST` request.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_all_docs HTTP/1.1
-        Accept: application/json
-        Content-Length: 70
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "keys" : [
-                "Zingylemontart",
-                "Yogurtraita"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: javascript
-
-        {
-            "total_rows" : 2666,
-            "rows" : [
-                {
-                    "value" : {
-                        "rev" : "1-a3544d296de19e6f5b932ea77d886942"
-                    },
-                    "id" : "Zingylemontart",
-                    "key" : "Zingylemontart"
-                },
-                {
-                    "value" : {
-                        "rev" : "1-91635098bfe7d40197a1b98d7ee085fc"
-                    },
-                    "id" : "Yogurtraita",
-                    "key" : "Yogurtraita"
-                }
-            ],
-            "offset" : 0
-        }
-
-.. _api/db/design_docs:
-
-======================
-``/{db}/_design_docs``
-======================
-
-.. versionadded:: 2.2
-
-.. http:get:: /{db}/_design_docs
-    :synopsis: Returns a built-in view of all design documents in this database
-
-    Returns a JSON structure of all of the design documents in a given
-    database. The information is returned as a JSON structure containing meta
-    information about the return structure, including a list of all design
-    documents and basic contents, consisting of the ID, revision and key. The
-    key is the design document's ``_id``.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :query boolean conflicts: Includes `conflicts` information in response.
-      Ignored if `include_docs` isn't ``true``. Default is ``false``.
-    :query boolean descending: Return the design documents in descending
-      order by key. Default is ``false``.
-    :query string endkey: Stop returning records when the specified key is
-      reached. *Optional*.
-    :query string end_key: Alias for `endkey` param.
-    :query string endkey_docid: Stop returning records when the specified
-        design document ID is reached. *Optional*.
-    :query string end_key_doc_id: Alias for `endkey_docid` param.
-    :query boolean include_docs: Include the full content of the design
-      documents in the return. Default is ``false``.
-    :query boolean inclusive_end: Specifies whether the specified end key
-      should be included in the result. Default is ``true``.
-    :query string key: Return only design documents that match the specified
-      key. *Optional*.
-    :query string keys: Return only design documents that match the specified
-      keys. *Optional*.
-    :query number limit: Limit the number of the returned design documents to
-      the specified number. *Optional*.
-    :query number skip: Skip this number of records before starting to return
-      the results. Default is ``0``.
-    :query string startkey: Return records starting with the specified key.
-      *Optional*.
-    :query string start_key: Alias for `startkey` param.
-    :query string startkey_docid: Return records starting with the specified
-      design document ID. *Optional*.
-    :query string start_key_doc_id: Alias for `startkey_docid` param.
-    :query boolean update_seq: Response includes an ``update_seq`` value
-      indicating which sequence id of the underlying database the view
-      reflects. Default is ``false``.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Response signature
-    :>json number offset: Offset where the design document list started
-    :>json array rows: Array of view row objects. By default the information
-      returned contains only the design document ID and revision.
-    :>json number total_rows: Number of design documents in the database. Note
-      that this is not the number of rows returned in the actual query.
-    :>json number update_seq: Current update sequence for the database
-    :code 200: Request completed successfully
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_design_docs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 23 Dec 2017 16:22:56 GMT
-        ETag: "1W2DJUZFZSZD9K78UFA3GZWB4"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "offset": 0,
-            "rows": [
-                {
-                    "id": "_design/ddoc01",
-                    "key": "_design/ddoc01",
-                    "value": {
-                        "rev": "1-7407569d54af5bc94c266e70cbf8a180"
-                    }
-                },
-                {
-                    "id": "_design/ddoc02",
-                    "key": "_design/ddoc02",
-                    "value": {
-                        "rev": "1-d942f0ce01647aa0f46518b213b5628e"
-                    }
-                },
-                {
-                    "id": "_design/ddoc03",
-                    "key": "_design/ddoc03",
-                    "value": {
-                        "rev": "1-721fead6e6c8d811a225d5a62d08dfd0"
-                    }
-                },
-                {
-                    "id": "_design/ddoc04",
-                    "key": "_design/ddoc04",
-                    "value": {
-                        "rev": "1-32c76b46ca61351c75a84fbcbceece2f"
-                    }
-                },
-                {
-                    "id": "_design/ddoc05",
-                    "key": "_design/ddoc05",
-                    "value": {
-                        "rev": "1-af856babf9cf746b48ae999645f9541e"
-                    }
-                }
-            ],
-            "total_rows": 5
-        }
-
-.. http:post:: /{db}/_design_docs
-    :synopsis: Returns a built-in view of all design documents in this database
-
-    :method:`POST` `_design_docs` functionality supports identical parameters and behavior
-    as specified in the :get:`/{db}/_design_docs` API but allows for the query string
-    parameters to be supplied as keys in a JSON object in the body of the `POST` request.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_design_docs HTTP/1.1
-        Accept: application/json
-        Content-Length: 70
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "keys" : [
-                "_design/ddoc02",
-                "_design/ddoc05"
-            ]
-        }
-
-    The returned JSON is the all documents structure, but with only the
-    selected keys in the output:
-
-    .. code-block:: javascript
-
-        {
-            "total_rows" : 5,
-            "rows" : [
-                {
-                    "value" : {
-                        "rev" : "1-d942f0ce01647aa0f46518b213b5628e"
-                    },
-                    "id" : "_design/ddoc02",
-                    "key" : "_design/ddoc02"
-                },
-                {
-                    "value" : {
-                        "rev" : "1-af856babf9cf746b48ae999645f9541e"
-                    },
-                    "id" : "_design/ddoc05",
-                    "key" : "_design/ddoc05"
-                }
-            ],
-            "offset" : 0
-        }
-
-Sending multiple queries to a database
-======================================
-
-.. versionadded:: 2.2
-
-.. http:post:: /{db}/_all_docs/queries
-    :synopsis: Returns results for the specified queries
-
-    Executes multiple specified built-in view queries of all documents in this
-    database. This enables you to request multiple queries in a single
-    request, in place of multiple :post:`/{db}/_all_docs` requests.
-
-    :param db: Database name
-
-    :<header Content-Type: - :mimetype:`application/json`
-    :<header Accept: - :mimetype:`application/json`
-
-    :<json queries: An array of query objects with fields for the
-        parameters of each individual view query to be executed. The field names
-        and their meaning are the same as the query parameters of a
-        regular :ref:`_all_docs request <api/db/all_docs>`.
-
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Response signature
-    :>header Transfer-Encoding: ``chunked``
-
-    :>json array results: An array of result objects - one for each query. Each
-        result object contains the same fields as the response to a regular
-        :ref:`_all_docs request <api/db/all_docs>`.
-
-    :code 200: Request completed successfully
-    :code 400: Invalid request
-    :code 401: Read permission required
-    :code 404: Specified database is missing
-    :code 500: Query execution error
-
-**Request**:
-
-.. code-block:: http
-
-    POST /db/_all_docs/queries HTTP/1.1
-    Content-Type: application/json
-    Accept: application/json
-    Host: localhost:5984
-
-    {
-        "queries": [
-            {
-                "keys": [
-                    "meatballs",
-                    "spaghetti"
-                ]
-            },
-            {
-                "limit": 3,
-                "skip": 2
-            }
-        ]
-    }
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Wed, 20 Dec 2017 11:17:07 GMT
-    ETag: "1H8RGBCK3ABY6ACDM7ZSC30QK"
-    Server: CouchDB (Erlang/OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "results" : [
-            {
-                "rows": [
-                    {
-                        "id": "meatballs",
-                        "key": "meatballs",
-                        "value": 1
-                    },
-                    {
-                        "id": "spaghetti",
-                        "key": "spaghetti",
-                        "value": 1
-                    }
-                ],
-                "total_rows": 3
-            },
-            {
-                "offset" : 2,
-                "rows" : [
-                    {
-                        "id" : "Adukiandorangecasserole-microwave",
-                        "key" : "Aduki and orange casserole - microwave",
-                        "value" : [
-                            null,
-                            "Aduki and orange casserole - microwave"
-                        ]
-                    },
-                    {
-                        "id" : "Aioli-garlicmayonnaise",
-                        "key" : "Aioli - garlic mayonnaise",
-                        "value" : [
-                            null,
-                            "Aioli - garlic mayonnaise"
-                        ]
-                    },
-                    {
-                        "id" : "Alabamapeanutchicken",
-                        "key" : "Alabama peanut chicken",
-                        "value" : [
-                            null,
-                            "Alabama peanut chicken"
-                        ]
-                    }
-                ],
-                "total_rows" : 2667
-            }
-        ]
-    }
-
-.. note::
-    Multiple queries are also supported in ``/db/_local_docs/queries`` and
-    ``/db/_design_docs/queries`` (similar to ``/db/_all_docs/queries``).
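-
-For example, a query against the design documents index (a sketch reusing the
-pagination parameters shown above) looks like:
-
-.. code-block:: http
-
-    POST /db/_design_docs/queries HTTP/1.1
-    Content-Type: application/json
-    Accept: application/json
-    Host: localhost:5984
-
-    {
-        "queries": [
-            {
-                "limit": 3,
-                "skip": 2
-            }
-        ]
-    }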
-
-.. _api/db/bulk_get:
-
-===================
-``/{db}/_bulk_get``
-===================
-
-.. http:post:: /{db}/_bulk_get
-    :synopsis: Fetches several documents at the given revisions
-
-    This method can be called to query several documents in bulk. It is well
-    suited for fetching specific revisions of documents, as replicators do,
-    or for retrieving the revision history of documents.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`multipart/related`
-                     - :mimetype:`multipart/mixed`
-    :<header Content-Type: :mimetype:`application/json`
-    :query boolean revs: Include the revision history in the response
-    :<json array docs: List of document objects, with ``id``, and optionally
-      ``rev`` and ``atts_since``
-    :>header Content-Type: - :mimetype:`application/json`
-    :>json array results: an array of results for each requested document/rev
-      pair. The ``id`` key lists the requested document ID; ``docs`` contains
-      a single-item array of objects, each of which has either an ``error``
-      key and value describing the error, or an ``ok`` key and the associated
-      value of the requested document, with an additional ``_revisions``
-      property that lists the parent revisions if ``revs=true``.
-    :code 200: Request completed successfully
-    :code 400: The request provided invalid JSON data or invalid query parameter
-    :code 401: Read permission required
-    :code 404: Invalid database name
-    :code 415: Bad :header:`Content-Type` value
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_bulk_get?revs=true HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "docs": [
-                {
-                    "id": "foo",
-                    "rev": "4-753875d51501a6b1883a9d62b4d33f91"
-                },
-                {
-                    "id": "foo",
-                    "rev": "1-4a7e4ae49c4366eaed8edeaea8f784ad"
-                },
-                {
-                    "id": "bar"
-                },
-                {
-                    "id": "baz"
-                }
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Mon, 19 Mar 2018 15:27:34 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-          "results": [
-            {
-              "id": "foo",
-              "docs": [
-                {
-                  "ok": {
-                    "_id": "foo",
-                    "_rev": "4-753875d51501a6b1883a9d62b4d33f91",
-                    "value": "this is foo",
-                    "_revisions": {
-                      "start": 4,
-                      "ids": [
-                        "753875d51501a6b1883a9d62b4d33f91",
-                        "efc54218773c6acd910e2e97fea2a608",
-                        "2ee767305024673cfb3f5af037cd2729",
-                        "4a7e4ae49c4366eaed8edeaea8f784ad"
-                      ]
-                    }
-                  }
-                }
-              ]
-            },
-            {
-              "id": "foo",
-              "docs": [
-                {
-                  "ok": {
-                    "_id": "foo",
-                    "_rev": "1-4a7e4ae49c4366eaed8edeaea8f784ad",
-                    "value": "this is the first revision of foo",
-                    "_revisions": {
-                      "start": 1,
-                      "ids": [
-                        "4a7e4ae49c4366eaed8edeaea8f784ad"
-                      ]
-                    }
-                  }
-                }
-              ]
-            },
-            {
-              "id": "bar",
-              "docs": [
-                {
-                  "ok": {
-                    "_id": "bar",
-                    "_rev": "2-9b71d36dfdd9b4815388eb91cc8fb61d",
-                    "baz": true,
-                    "_revisions": {
-                      "start": 2,
-                      "ids": [
-                        "9b71d36dfdd9b4815388eb91cc8fb61d",
-                        "309651b95df56d52658650fb64257b97"
-                      ]
-                    }
-                  }
-                }
-              ]
-            },
-            {
-              "id": "baz",
-              "docs": [
-                {
-                  "error": {
-                    "id": "baz",
-                    "rev": "undefined",
-                    "error": "not_found",
-                    "reason": "missing"
-                  }
-                }
-              ]
-            }
-          ]
-        }
-
-    Example response with a conflicted document:
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_bulk_get HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "docs": [
-                {
-                    "id": "a"
-                }
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Mon, 19 Mar 2018 15:27:34 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-          "results": [
-            {
-              "id": "a",
-              "docs": [
-                {
-                  "ok": {
-                    "_id": "a",
-                    "_rev": "1-23202479633c2b380f79507a776743d5",
-                    "a": 1
-                  }
-                },
-                {
-                  "ok": {
-                    "_id": "a",
-                    "_rev": "1-967a00dff5e02add41819138abb3284d"
-                  }
-                }
-              ]
-            }
-          ]
-        }
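-
-    The optional ``atts_since`` field asks CouchDB to include, for that
-    document, only the attachments changed since the given revisions. A
-    minimal sketch (the revision value is illustrative):
-
-    .. code-block:: http
-
-        POST /db/_bulk_get HTTP/1.1
-        Accept: multipart/related
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "docs": [
-                {
-                    "id": "foo",
-                    "atts_since": ["1-4a7e4ae49c4366eaed8edeaea8f784ad"]
-                }
-            ]
-        }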
-
-.. _api/db/bulk_docs:
-
-====================
-``/{db}/_bulk_docs``
-====================
-
-.. http:post:: /{db}/_bulk_docs
-    :synopsis: Inserts or updates multiple documents into the database in
-               a single request
-
-    The bulk document API allows you to create and update multiple documents
-    at the same time within a single request. The basic operation is similar
-    to creating or updating a single document, except that you batch the
-    documents into a single request body.
-
-    When creating new documents, the document ID (``_id``) is optional.
-
-    For updating existing documents, you must provide the document ID, revision
-    information (``_rev``), and new document values.
-
-    When deleting documents in bulk, the document ID, revision information,
-    and deletion status (``_deleted``) are all required for each document.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-
-    :<json array docs: List of document objects
-    :<json boolean new_edits: If ``false``, prevents the database from
-      assigning new revision IDs to the documents. Default is ``true``.
-      *Optional*
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>jsonarr string id: Document ID
-    :>jsonarr string rev: New document revision token. Available
-      if the document was saved without errors. *Optional*
-    :>jsonarr string error: Error type. *Optional*
-    :>jsonarr string reason: Error reason. *Optional*
-    :code 201: Document(s) have been created or updated
-    :code 400: The request provided invalid JSON data
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_bulk_docs HTTP/1.1
-        Accept: application/json
-        Content-Length: 109
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "docs": [
-                {
-                    "_id": "FishStew"
-                },
-                {
-                    "_id": "LambStew",
-                    "_rev": "2-0786321986194c92dd3b57dfbfc741ce",
-                    "_deleted": true
-                }
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 144
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 00:15:05 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        [
-            {
-                "ok": true,
-                "id": "FishStew",
-                "rev":" 1-967a00dff5e02add41819138abb3284d"
-            },
-            {
-                "ok": true,
-                "id": "LambStew",
-                "rev": "3-f9c62b2169d0999103e9f41949090807"
-            }
-        ]
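-
-    When replicating documents that already carry revision IDs, set
-    ``new_edits`` to ``false`` so that the supplied revisions are preserved
-    rather than regenerated. A minimal sketch (document and revision values
-    are illustrative):
-
-    .. code-block:: http
-
-        POST /db/_bulk_docs HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "new_edits": false,
-            "docs": [
-                {
-                    "_id": "FishStew",
-                    "_rev": "2-9801936a42f06a16f16c30027980d96f",
-                    "servings": 4
-                }
-            ]
-        }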
-
-Inserting Documents in Bulk
-===========================
-
-Each time a document is stored or updated in CouchDB, the internal B-tree
-is updated. Bulk insertion provides efficiency gains in both storage space
-and time, by consolidating many of the updates to intermediate B-tree nodes.
-
-It is not intended as a way to perform ``ACID``-like transactions in CouchDB;
-the only transaction boundary within CouchDB is a single update to a single
-database. The constraints are detailed in :ref:`api/db/bulk_docs/semantics`.
-
-To insert documents in bulk into a database you need to supply a JSON
-structure with the array of documents that you want to add to the database.
-You can either include a document ID, or allow the document ID to be
-automatically generated.
-
-For example, the following update inserts three new documents, two with the
-supplied document IDs, and one which will have a document ID generated:
-
-.. code-block:: http
-
-    POST /source/_bulk_docs HTTP/1.1
-    Accept: application/json
-    Content-Length: 323
-    Content-Type: application/json
-    Host: localhost:5984
-
-    {
-        "docs": [
-            {
-                "_id": "FishStew",
-                "servings": 4,
-                "subtitle": "Delicious with freshly baked bread",
-                "title": "FishStew"
-            },
-            {
-                "_id": "LambStew",
-                "servings": 6,
-                "subtitle": "Serve with a whole meal scone topping",
-                "title": "LambStew"
-            },
-            {
-                "servings": 8,
-                "subtitle": "Hand-made dumplings make a great accompaniment",
-                "title": "BeefStew"
-            }
-        ]
-    }
-
-The return type from a bulk insertion will be :statuscode:`201`,
-with the content of the returned structure indicating success or
-failure on a per-document basis.
-
-The return structure from the example above contains a list of the
-documents created, together with their IDs and generated revision IDs:
-
-.. code-block:: http
-
-    HTTP/1.1 201 Created
-    Cache-Control: must-revalidate
-    Content-Length: 215
-    Content-Type: application/json
-    Date: Sat, 26 Oct 2013 00:10:39 GMT
-    Server: CouchDB (Erlang OTP)
-
-    [
-        {
-            "id": "FishStew",
-            "ok": true,
-            "rev": "1-6a466d5dfda05e613ba97bd737829d67"
-        },
-        {
-            "id": "LambStew",
-            "ok": true,
-            "rev": "1-648f1b989d52b8e43f05aa877092cc7c"
-        },
-        {
-            "id": "00a271787f89c0ef2e10e88a0c0003f0",
-            "ok": true,
-            "rev": "1-e4602845fc4c99674f50b1d5a804fdfa"
-        }
-    ]
-
-For details of the semantic content and structure of the returned JSON see
-:ref:`api/db/bulk_docs/semantics`. Conflicts and validation errors when
-updating documents in bulk must be handled separately; see
-:ref:`api/db/bulk_docs/validation`.
-
-Updating Documents in Bulk
-==========================
-
-The bulk document update procedure is similar to the insertion
-procedure, except that you must specify the document ID and current
-revision for every document in the bulk update JSON string.
-
-For example, you could send the following request:
-
-.. code-block:: http
-
-    POST /recipes/_bulk_docs HTTP/1.1
-    Accept: application/json
-    Content-Length: 464
-    Content-Type: application/json
-    Host: localhost:5984
-
-    {
-        "docs": [
-            {
-                "_id": "FishStew",
-                "_rev": "1-6a466d5dfda05e613ba97bd737829d67",
-                "servings": 4,
-                "subtitle": "Delicious with freshly baked bread",
-                "title": "FishStew"
-            },
-            {
-                "_id": "LambStew",
-                "_rev": "1-648f1b989d52b8e43f05aa877092cc7c",
-                "servings": 6,
-                "subtitle": "Serve with a whole meal scone topping",
-                "title": "LambStew"
-            },
-            {
-                "_id": "BeefStew",
-                "_rev": "1-e4602845fc4c99674f50b1d5a804fdfa",
-                "servings": 8,
-                "subtitle": "Hand-made dumplings make a great accompaniment",
-                "title": "BeefStew"
-            }
-        ]
-    }
-
-The return structure is the JSON of the updated documents, with the new
-revision and ID information:
-
-.. code-block:: http
-
-    HTTP/1.1 201 Created
-    Cache-Control: must-revalidate
-    Content-Length: 215
-    Content-Type: application/json
-    Date: Sat, 26 Oct 2013 00:10:39 GMT
-    Server: CouchDB (Erlang OTP)
-
-    [
-        {
-            "id": "FishStew",
-            "ok": true,
-            "rev": "2-2bff94179917f1dec7cd7f0209066fb8"
-        },
-        {
-            "id": "LambStew",
-            "ok": true,
-            "rev": "2-6a7aae7ac481aa98a2042718d09843c4"
-        },
-        {
-            "id": "BeefStew",
-            "ok": true,
-            "rev": "2-9801936a42f06a16f16c30027980d96f"
-        }
-    ]
-
-You can optionally delete documents during a bulk update by adding the
-``_deleted`` field with a value of ``true`` to each document ID/revision
-combination within the submitted JSON structure.
-
-The return type from a bulk update is also :statuscode:`201`, with the
-content of the returned structure indicating success or failure
-on a per-document basis.
-
-The content and structure of the returned JSON will depend on the transaction
-semantics being used for the bulk update; see :ref:`api/db/bulk_docs/semantics`
-for more information. Conflicts and validation errors when updating documents
-in bulk must be handled separately; see :ref:`api/db/bulk_docs/validation`.
-
-.. _api/db/bulk_docs/semantics:
-
-Bulk Documents Transaction Semantics
-====================================
-
-Bulk document operations are **non-atomic**. This means that CouchDB does not
-guarantee that any individual document included in the bulk update (or insert)
-will be saved when you send the request. The response will contain the list of
-documents successfully inserted or updated during the process. In the event of
-a crash, some of the documents may have been successfully saved, while others
-may have been lost.
-
-The response structure indicates whether the document was updated: a new
-``_rev`` value shows that a new document revision was created. If the update
-failed, you will get an ``error`` of type ``conflict``.
-For example:
-
-.. code-block:: javascript
-
-    [
-        {
-            "id" : "FishStew",
-            "error" : "conflict",
-            "reason" : "Document update conflict."
-        },
-        {
-            "id" : "LambStew",
-            "error" : "conflict",
-            "reason" : "Document update conflict."
-        },
-        {
-            "id" : "BeefStew",
-            "error" : "conflict",
-            "reason" : "Document update conflict."
-        }
-    ]
-
-In this case no new revision has been created; you will need to re-submit the
-update with the correct revision tag to update the document.
-
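-For example, the winning revision of a document can be fetched first - as a
-sketch, the ``ETag`` header of a document ``HEAD`` request holds its current
-revision - and the update then retried with that value:
-
-.. code-block:: http
-
-    HEAD /recipes/FishStew HTTP/1.1
-    Host: localhost:5984
-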
-Replication of documents is independent of the type of insert or update.
-The documents and revisions created during a bulk insert or update are
-replicated in the same way as any other document.
-
-.. _api/db/bulk_docs/validation:
-
-Bulk Document Validation and Conflict Errors
-============================================
-
-The JSON returned by the ``_bulk_docs`` operation consists of an array
-of JSON structures, one for each document in the original submission.
-The returned JSON structure should be examined to ensure that all of the
-documents submitted in the original request were successfully added to
-the database.
-
-When a document (or document revision) is not correctly committed to the
-database because of an error, you should check the ``error`` field to
-determine the error type and course of action. Errors will be one of the
-following types:
-
--  **conflict**
-
-   The document as submitted is in conflict. The new revision will not have been
-   created and you will need to re-submit the document to the database.
-
-   Conflict resolution of documents added using the bulk docs interface
-   is identical to the resolution procedures used when resolving
-   conflict errors during replication.
-
--  **forbidden**
-
-   Entries with this error type indicate that the validation routine
-   applied to the document during submission has returned an error.
-
-   For example, if your :ref:`validation routine <vdufun>` includes
-   the following:
-
-   .. code-block:: javascript
-
-       throw({forbidden: 'invalid recipe ingredient'});
-
-   The error response returned will be:
-
-   .. code-block:: http
-
-       HTTP/1.1 201 Created
-       Cache-Control: must-revalidate
-       Content-Length: 80
-       Content-Type: application/json
-       Date: Sat, 26 Oct 2013 00:05:17 GMT
-       Server: CouchDB (Erlang OTP)
-
-       [
-           {
-               "id": "LambStew",
-               "error": "forbidden",
-               "reason": "invalid recipe ingredient"
-           }
-       ]
diff --git a/src/api/database/changes.rst b/src/api/database/changes.rst
deleted file mode 100644
index d1e3655..0000000
--- a/src/api/database/changes.rst
+++ /dev/null
@@ -1,750 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db/changes:
-
-================
-``/db/_changes``
-================
-
-.. http:get:: /{db}/_changes
-    :synopsis: Returns changes for the given database
-
-    Returns a sorted list of changes made to documents in the database, in
-    time order of application. Only the most recent change for a given
-    document is guaranteed to be provided; for example, if a document has had
-    fields added and then deleted, an API client checking for changes will not
-    necessarily receive the intermediate state of the document.
-
-    This can be used to listen for updates and modifications to the database
-    for post-processing or synchronization. For practical purposes,
-    a continuously connected ``_changes`` feed is a reasonable approach for
-    generating a real-time log for most applications.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/event-stream`
-                     - :mimetype:`text/plain`
-    :<header Last-Event-ID: ID of the last event received by the server on a
-        previous connection. Overrides the `since` query parameter.
-    :query array doc_ids: List of document IDs to filter the changes feed as
-        valid JSON array. Used with :ref:`_doc_ids <changes/filter/doc_ids>`
-        filter. Since `length of URL is limited`_, it is better to use
-        :post:`/{db}/_changes` instead.
-    :query boolean conflicts: Includes `conflicts` information in response.
-        Ignored if `include_docs` isn't ``true``. Default is ``false``.
-    :query boolean descending: Return the change results in descending sequence
-        order (most recent change first). Default is ``false``.
-    :query string feed: - **normal** Specifies :ref:`Normal Polling Mode
-                          <changes/normal>`. All past changes are returned
-                          immediately. *Default.*
-                        - **longpoll** Specifies :ref:`Long Polling Mode
-                          <changes/longpoll>`. Waits until at least one change
-                          has occurred, sends the change, then closes the
-                          connection. Most commonly used in conjunction with
-                          ``since=now``, to wait for the next change.
-                        - **continuous** Sets :ref:`Continuous Mode
-                          <changes/continuous>`. Sends a line of JSON per
-                          event. Keeps the socket open until ``timeout``.
-                        - **eventsource** Sets :ref:`Event Source Mode
-                          <changes/eventsource>`. Works the same as Continuous
-                          Mode, but sends the events in `EventSource
-                          <http://dev.w3.org/html5/eventsource/>`_ format.
-    :query string filter: Reference to a :ref:`filter function <filterfun>`
-        from a design document that will filter whole stream emitting only
-        filtered events. See the section `Change Notifications in the book
-        CouchDB The Definitive Guide`_ for more information.
-    :query number heartbeat: Period in *milliseconds* after which an empty
-        line is sent in the results. Only applicable for :ref:`longpoll
-        <changes/longpoll>`, :ref:`continuous <changes/continuous>`, and
-        :ref:`eventsource <changes/eventsource>` feeds. Overrides any timeout
-        to keep the feed alive indefinitely. Default is ``60000``. May be
-        ``true`` to use default value.
-    :query boolean include_docs: Include the associated document with each
-        result. If there are conflicts, only the winning revision is returned.
-        Default is ``false``.
-    :query boolean attachments: Include the Base64-encoded content of
-        :ref:`attachments <api/doc/attachments>` in the documents that
-        are included if `include_docs` is ``true``. Ignored if `include_docs`
-        isn't ``true``. Default is ``false``.
-    :query boolean att_encoding_info: Include encoding information in attachment
-        stubs if `include_docs` is ``true`` and the particular attachment is
-        compressed. Ignored if `include_docs` isn't ``true``.
-        Default is ``false``.
-    :query number last-event-id: Alias of `Last-Event-ID` header.
-    :query number limit: Limit number of result rows to the specified value
-        (note that using ``0`` here has the same effect as ``1``).
-    :query since: Start the results from the change immediately after the given
-        update sequence. Can be valid update sequence or ``now`` value.
-        Default is ``0``.
-    :query string style: Specifies how many revisions are returned in
-        the changes array. The default, ``main_only``, will only return
-        the current "winning" revision; ``all_docs`` will return all leaf
-        revisions (including conflicts and deleted former conflicts).
-    :query number timeout: Maximum period in *milliseconds* to wait for a change
-        before the response is sent, even if there are no results.
-        Only applicable for :ref:`longpoll <changes/longpoll>` or
-        :ref:`continuous <changes/continuous>` feeds.
-        Default value is specified by :config:option:`httpd/changes_timeout`
-        configuration option. Note that ``60000`` value is also the default
-        maximum timeout to prevent undetected dead connections.
-    :query string view: Allows use of view functions as filters. Documents
-        are counted as "passed" for the view filter if the map function emits
-        at least one record for them.
-        See :ref:`changes/filter/view` for more info.
-    :query number seq_interval: When fetching changes in a batch, setting the
-        *seq_interval* parameter tells CouchDB to only calculate the update seq
-        with every Nth result returned. By setting
-        **seq_interval=<batch size>**, where ``<batch size>`` is the number of
-        results requested per batch,
-        load can be reduced on the source CouchDB database; computing the seq
-        value across many shards (esp. in highly-sharded databases) is expensive
-        in a heavily loaded CouchDB cluster.
-    :>header Cache-Control: ``no-cache`` if changes feed is
-        :ref:`eventsource <changes/eventsource>`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/event-stream`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Response hash if changes feed is `normal`
-    :>header Transfer-Encoding: ``chunked``
-    :>json json last_seq: Last change update sequence
-    :>json number pending: Count of remaining items in the feed
-    :>json array results: Changes made to a database
-    :code 200: Request completed successfully
-    :code 400: Bad request
-
-    The ``results`` field of database changes:
-
-    :json array changes: List of the document's leaves with a single field
-        ``rev``.
-    :json string id: Document ID.
-    :json json seq: Update sequence.
-    :json bool deleted: ``true`` if the document is deleted.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_changes?style=all_docs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 00:54:58 GMT
-        ETag: "6ASLEKEMSRABT0O5XY9UPO9Z"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "last_seq": "5-g1AAAAIreJyVkEsKwjAURZ-toI5cgq5A0sQ0OrI70XyppcaRY92J7kR3ojupaSPUUgotgRd4yTlwbw4A0zRUMLdnpaMkwmyF3Ily9xBwEIuiKLI05KOTW0wkV4rruP29UyGWbordzwKVxWBNOGMKZhertDlarbr5pOT3DV4gudUC9-MPJX9tpEAYx4TQASns2E24ucuJ7rXJSL1BbEgf3vTwpmedCZkYa7Pulck7Xt7x_usFU2aIHOD4eEfVTVA5KMGUkqhNZV-8_o5i",
-            "pending": 0,
-            "results": [
-                {
-                    "changes": [
-                        {
-                            "rev": "2-7051cbe5c8faecd085a3fa619e6e6337"
-                        }
-                    ],
-                    "id": "6478c2ae800dfc387396d14e1fc39626",
-                    "seq": "3-g1AAAAG3eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MSGXAqSVIAkkn2IFUZzIkMuUAee5pRqnGiuXkKA2dpXkpqWmZeagpu_Q4g_fGEbEkAqaqH2sIItsXAyMjM2NgUUwdOU_JYgCRDA5ACGjQfn30QlQsgKvcjfGaQZmaUmmZClM8gZhyAmHGfsG0PICrBPmQC22ZqbGRqamyIqSsLAAArcXo"
-                },
-                {
-                    "changes": [
-                        {
-                            "rev": "3-7379b9e515b161226c6559d90c4dc49f"
-                        }
-                    ],
-                    "deleted": true,
-                    "id": "5bbc9ca465f1b0fcd62362168a7c8831",
-                    "seq": "4-g1AAAAHXeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBMZc4EC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HqQ_kQG3qgSQqnoUtxoYGZkZG5uS4NY8FiDJ0ACkgAbNx2cfROUCiMr9CJ8ZpJkZpaaZEOUziBkHIGbcJ2zbA4hKsA-ZwLaZGhuZmhobYurKAgCz33kh"
-                },
-                {
-                    "changes": [
-                        {
-                            "rev": "6-460637e73a6288cb24d532bf91f32969"
-                        },
-                        {
-                            "rev": "5-eeaa298781f60b7bcae0c91bdedd1b87"
-                        }
-                    ],
-                    "id": "729eb57437745e506b333068fff665ae",
-                    "seq": "5-g1AAAAIReJyVkE0OgjAQRkcwUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloRid3MMkEUoJHbXbOxVy6arc_SxQWQzRVHCuYHaxSpuj1aqbj0t-3-AlSrZakn78oeSvjRSIkIhSNiCFHbsKN3c50b02mURvEB-yD296eNOzzoRMRLRZ98rkHS_veGcC_nR-fGe1gaCaxihhjOI2lX0BhniHaA"
-                }
-            ]
-        }
-
-.. _length of URL is limited: http://stackoverflow.com/a/417184/965635
-
-.. versionchanged:: 0.11.0 added ``include_docs`` parameter
-.. versionchanged:: 1.2.0 added ``view`` parameter and special value `_view`
-   for ``filter`` one
-.. versionchanged:: 1.3.0 ``since`` parameter could take `now` value to start
-   listen changes since current seq number.
-.. versionchanged:: 1.3.0 ``eventsource`` feed type added.
-.. versionchanged:: 1.4.0 Support ``Last-Event-ID`` header.
-.. versionchanged:: 1.6.0 added ``attachments`` and ``att_encoding_info``
-   parameters
-.. versionchanged:: 2.0.0 update sequences can be any valid json object,
-   added ``seq_interval``
-
-.. note::
-    If the specified replicas of the shards in any given since value are
-    unavailable, alternative replicas are selected, and the last known
-    checkpoint between them is used. If this happens, you might see changes
-    again that you have previously seen. Therefore, an application making use
-    of the `_changes` feed should be ‘idempotent’, that is, able to receive the
-    same data multiple times, safely.
-
-.. note::
-    Cloudant Sync and PouchDB already optimize the replication process by
-    setting ``seq_interval`` parameter to the number of results expected per
-    batch. This parameter increases throughput by reducing latency between
-    sequential requests in bulk document transfers. This has resulted in up to
-    a 20% replication performance improvement in highly-sharded databases.
-
-.. warning::
-    Using the ``attachments`` parameter to include attachments in the changes
-    feed is not recommended for large attachment sizes. Also note that the
-    Base64-encoding that is used leads to a 33% overhead (i.e. one third) in
-    transfer size for attachments.
-
-.. warning::
-    The results returned by `_changes` are partially ordered. In other words,
-    the order is not guaranteed to be preserved for multiple calls.
-
-.. http:post:: /{db}/_changes
-    :synopsis: Returns changes for the given database for certain document IDs
-
-    Requests the database changes feed in the same way as
-    :get:`/{db}/_changes` does, but is widely used with the
-    ``?filter=_doc_ids`` query parameter, since it allows one to pass a
-    larger list of document IDs to filter than would fit in a URL.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /recipes/_changes?filter=_doc_ids HTTP/1.1
-        Accept: application/json
-        Content-Length: 40
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "doc_ids": [
-                "SpaghettiWithMeatballs"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 28 Sep 2013 07:23:09 GMT
-        ETag: "ARIHFWL3I7PIS0SPVTFU6TLR2"
-        Server: CouchDB (Erlang OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "last_seq": "5-g1AAAAIreJyVkEsKwjAURZ-toI5cgq5A0sQ0OrI70XyppcaRY92J7kR3ojupaSPUUgotgRd4yTlwbw4A0zRUMLdnpaMkwmyF3Ily9xBwEIuiKLI05KOTW0wkV4rruP29UyGWbordzwKVxWBNOGMKZhertDlarbr5pOT3DV4gudUC9-MPJX9tpEAYx4TQASns2E24ucuJ7rXJSL1BbEgf3vTwpmedCZkYa7Pulck7Xt7x_usFU2aIHOD4eEfVTVA5KMGUkqhNZV8_o5i",
-            "pending": 0,
-            "results": [
-                {
-                    "changes": [
-                        {
-                            "rev": "13-bcb9d6388b60fd1e960d9ec4e8e3f29e"
-                        }
-                    ],
-                    "id": "SpaghettiWithMeatballs",
-                    "seq":  "5-g1AAAAIReJyVkE0OgjAQRkcwUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloRid3MMkEUoJHbXbOxVy6arc_SxQWQzRVHCuYHaxSpuj1aqbj0t-3-AlSrZakn78oeSvjRSIkIhSNiCFHbsKN3c50b02mURvEB-yD296eNOzzoRMRLRZ98rkHS_veGcC_nR-fGe1gaCaxihhjOI2lX0BhniHaA"
-                }
-            ]
-        }
-
-.. _changes:
-
-Changes Feeds
-=============
-
-.. _changes/normal:
-
-Polling
--------
-
-By default all changes are immediately returned within the JSON body::
-
-    GET /somedatabase/_changes HTTP/1.1
-
-.. code-block:: javascript
-
-    {"results":[
-    {"seq":"1-g1AAAAF9eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P__7MSGXAqSVIAkkn2IFUZzIkMuUAee5pRqnGiuXkKA2dpXkpqWmZeagpu_Q4g_fGEbEkAqaqH2sIItsXAyMjM2NgUUwdOU_JYgCRDA5ACGjQfn30QlQsgKvcTVnkAovI-YZUPICpBvs0CAN1eY_c","id":"fresh","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]},
-    {"seq":"3-g1AAAAG3eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MSGXAqSVIAkkn2IFUZzIkMuUAee5pRqnGiuXkKA2dpXkpqWmZeagpu_Q4g_fGEbEkAqaqH2sIItsXAyMjM2NgUUwdOU_JYgCRDA5ACGjQfn30QlQsgKvcjfGaQZmaUmmZClM8gZhyAmHGfsG0PICrBPmQC22ZqbGRqamyIqSsLAAArcXo","id":"updated","changes":[{"rev":"2-7051cbe5c8faecd085a3fa619e6e6337CFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloRid3MMkEUoJHbXbOxVy6arc_SxQWQzRVHCuYHaxSpuj1aqbj0t-3-AlSrZakn78oeSvjRSIkIhSNiCFHbsKN3c50b02mURvEB-yD296eNOzzoRMRLRZ98rkHS_veG [...]
-    ],
-    "last_seq":"5-g1AAAAIreJyVkEsKwjAURZ-toI5cgq5A0sQ0OrI70XyppcaRY92J7kR3ojupaSPUUgotgRd4yTlwbw4A0zRUMLdnpaMkwmyF3Ily9xBwEIuiKLI05KOTW0wkV4rruP29UyGWbordzwKVxWBNOGMKZhertDlarbr5pOT3DV4gudUC9-MPJX9tpEAYx4TQASns2E24ucuJ7rXJSL1BbEgf3vTwpmedCZkYa7Pulck7Xt7x_usFU2aIHOD4eEfVTVA5KMGUkqhNZV-8_o5i",
-    "pending": 0}
-
-``results`` is the list of changes in sequential order. New and changed
-documents only differ in the value of the rev; deleted documents include the
-``"deleted": true`` attribute. (In ``style=all_docs`` mode, ``deleted`` applies
-only to the current/winning revision. The other revisions listed might be
-deleted even if there is no ``deleted`` property; you have to ``GET`` them
-individually to make sure.)
-
-``last_seq`` is the update sequence of the last update returned (equivalent
-to the last item in the results).
-
-Sending a ``since`` param in the query string skips all changes up to and
-including the given update sequence:
-
-.. code-block:: http
-
-    GET /somedatabase/_changes?since=4-g1AAAAHXeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBMZc4EC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HqQ_kQG3qgSQqnoUtxoYGZkZG5uS4NY8FiDJ0ACkgAbNx2cfROUCiMr9CJ8ZpJkZpaaZEOUziBkHIGbcJ2zbA4hKsA-ZwLaZGhuZmhobYurKAgCz33kh HTTP/1.1
-
-The return structure for ``normal`` and ``longpoll`` modes is a JSON
-array of changes objects, and the last update sequence.
-
-In the return format for ``continuous`` mode, the server sends a ``CRLF``
-(carriage-return, linefeed) delimited line for each change. Each line
-contains the `JSON object` described above.
-
-You can also request the full contents of each document change (instead
-of just the change notification) by using the ``include_docs`` parameter.
-
-.. code-block:: javascript
-
-    {
-        "last_seq": "5-g1AAAAIreJyVkEsKwjAURZ-toI5cgq5A0sQ0OrI70XyppcaRY92J7kR3ojupaSPUUgotgRd4yTlwbw4A0zRUMLdnpaMkwmyF3Ily9xBwEIuiKLI05KOTW0wkV4rruP29UyGWbordzwKVxWBNOGMKZhertDlarbr5pOT3DV4gudUC9-MPJX9tpEAYx4TQASns2E24ucuJ7rXJSL1BbEgf3vTwpmedCZkYa7Pulck7Xt7x_usFU2aIHOD4eEfVTVA5KMGUkqhNZV-8_o5i",
-        "pending": 0,
-        "results": [
-            {
-                "changes": [
-                    {
-                        "rev": "2-eec205a9d413992850a6e32678485900"
-                    }
-                ],
-                "deleted": true,
-                "id": "deleted",
-                "seq":  "5-g1AAAAIReJyVkE0OgjAQRkcwUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloRid3MMkEUoJHbXbOxVy6arc_SxQWQzRVHCuYHaxSpuj1aqbj0t-3-AlSrZakn78oeSvjRSIkIhSNiCFHbsKN3c50b02mURvEByD296eNOzzoRMRLRZ98rkHS_veGcC_nR-fGe1gaCaxihhjOI2lX0BhniHaA",
-            }
-        ]
-    }
-
-.. _changes/longpoll:
-
-Long Polling
-------------
-
-The `longpoll` feed, probably most applicable for a browser, is a more
-efficient form of polling that waits for a change to occur before the response
-is sent. `longpoll` avoids the need to frequently poll CouchDB to discover
-that nothing has changed!
-
-The request to the server will remain open until a change is made on the
-database and is subsequently transferred, and then the connection will close.
-This is low load for both server and client.
-
-The response is basically the same JSON as is sent for the `normal` feed.
-
-Because the wait for a change can be significant you can set a
-timeout before the connection is automatically closed (the
-``timeout`` argument). You can also set a heartbeat interval (using
-the ``heartbeat`` query argument), which sends a newline to keep the
-connection active.
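-
-For example, to wait up to 60 seconds for the next change after the current
-state of the database (a sketch combining the parameters documented above):
-
-.. code-block:: http
-
-    GET /somedatabase/_changes?feed=longpoll&since=now&timeout=60000 HTTP/1.1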
-
-.. _changes/continuous:
-
-Continuous
-----------
-
-Continually polling the CouchDB server is not ideal - setting up new HTTP
-connections just to tell the client that nothing happened puts unnecessary
-strain on CouchDB.
-
-A continuous feed stays open and connected to the database until explicitly
-closed, and changes are sent to the client as they happen, i.e. in near
-real-time.
-
-As with the `longpoll` feed type you can set both the timeout and heartbeat
-intervals to ensure that the connection is kept open for new changes
-and updates.
-
-The continuous feed's response is a little different than the other feed types
-to simplify the job of the client - each line of the response is either empty
-or a JSON object representing a single change, as found in the normal feed's
-results.
-
-If `limit` has been specified, the feed will end with a `{ last_seq }` object.
-
-.. code-block:: http
-
-    GET /somedatabase/_changes?feed=continuous HTTP/1.1
-
-.. code-block:: javascript
-
-    {"seq":"1-g1AAAAF9eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MSGXAqSVIAkkn2IFUZzIkMuUAee5pRqnGiuXkKA2dpXkpqWmZeagpu_Q4g_fGEbEkAqaqH2sIItsXAyMjM2NgUUwdOU_JYgCRDA5ACGjQfn30QlQsgKvcTVnkAovI-YZUPICpBvs0CAN1eY_c","id":"fresh","changes":[{"rev":"5-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D666H6GcH6DYyMzIyNTUnwRR4LkGRoAFJAg-YjwiMtOdXCwJyU8ICYtABi0n6EnwzSzIxS00yI8hPEjAMQM-5nJTI [...]
-    {"seq":"3-g1AAAAHReJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D660H6ExlwqspjAZIMDUAKqHA-yCZGiEuTUy0MzEnxL8SkBRCT9iPcbJBmZpSaZkKUmyFmHICYcZ-wux9AVIJ8mAUABgp6XQ","id":"deleted","changes":[{"rev":"2-eec205a9d413992850a6e32678485900"}],"deleted":true}
-    ... tum tee tum ...
-    {"seq":"6-g1AAAAIreJyVkEsKwjAURWMrqCOXoCuQ9MU0OrI70XyppcaRY92J7kR3ojupaVNopRQsgRd4yTlwb44QmqahQnN7VjpKImAr7E6Uu4eAI7EoiiJLQx6c3GIiuVJcx93vvQqxdFPsaguqLAY04YwpNLtYpc3RatXPJyW__-EFllst4D_-UPLXmh9VPAaICaEDUtixm-jmLie6N30YqTeYDenDmx7e9GwyYRODNuu_MnnHyzverV6AMkPkAMfHO1rdUAKUkqhLZV-_0o5j","id":"updated","changes":[{"rev":"3-825cb35de44c433bfb2df415563a19de"}]}
-
-Obviously, `... tum tee tum ...` does not appear in the actual response, but
-represents a long pause before the change with seq 6 occurred.
-
-.. _Change Notifications in the book CouchDB The Definitive Guide: http://guide.couchdb.org/draft/notifications.html
-
-.. _changes/eventsource:
-
-Event Source
-------------
-
-The `eventsource` feed provides push notifications that can be consumed in
-the form of DOM events in the browser. Refer to the `W3C eventsource
-specification`_ for further details. CouchDB also honours the ``Last-Event-ID``
-parameter.
-
-.. code-block:: http
-
-    GET /somedatabase/_changes?feed=eventsource HTTP/1.1
-
-.. code-block:: javascript
-
-    // define the event handling function
-    if (window.EventSource) {
-
-        var source = new EventSource("/somedatabase/_changes?feed=eventsource");
-        source.onerror = function(e) {
-            alert('EventSource failed.');
-        };
-
-        var results = [];
-        var sourceListener = function(e) {
-            var data = JSON.parse(e.data);
-            results.push(data);
-        };
-
-        // start listening for events
-        source.addEventListener('message', sourceListener, false);
-
-        // stop listening for events
-        source.removeEventListener('message', sourceListener, false);
-
-    }
-
-If you set a heartbeat interval (using the ``heartbeat`` query argument),
-CouchDB will send a ``heartbeat`` event that you can subscribe to with:
-
-.. code-block:: javascript
-
-    source.addEventListener('heartbeat', function () {}, false);
-
-This can be monitored by the client application to restart the EventSource
-connection if needed (i.e. if the TCP connection gets stuck in a half-open
-state).
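-
-A minimal sketch of such monitoring (the thresholds are illustrative):
-
-.. code-block:: javascript
-
-    var lastBeat = Date.now();
-    source.addEventListener('heartbeat', function () {
-        lastBeat = Date.now();
-    }, false);
-
-    // periodically check that heartbeats are still arriving; if not,
-    // drop the connection and open a fresh one
-    setInterval(function () {
-        if (Date.now() - lastBeat > 90000) {
-            source.close();
-            source = new EventSource(
-                "/somedatabase/_changes?feed=eventsource&heartbeat=30000");
-        }
-    }, 10000);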
-
-.. note::
-    EventSource connections are subject to cross-origin resource sharing
-    restrictions. You might need to configure :ref:`CORS support
-    <cors>` to get the EventSource to work in your application.
-
-.. _W3C eventsource specification: http://www.w3.org/TR/eventsource/
-
-.. _changes/filter:
-
-Filtering
-=========
-
-You can filter the contents of the changes feed in a number of ways. The
-most basic way is to specify one or more document IDs in the query. This
-causes the returned structure to only contain changes for the
-specified IDs. Note that the value of this query argument should be a
-JSON-formatted array.
-
-You can also filter the ``_changes`` feed by defining a filter function
-within a design document. The specification for the filter is the same
-as for replication filters. You specify the name of the filter function
-to the ``filter`` parameter, specifying the design document name and
-:ref:`filter name <filterfun>`. For example:
-
-.. code-block:: http
-
-    GET /db/_changes?filter=design_doc/filtername HTTP/1.1
-
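-A minimal sketch of such a filter function (assuming documents carry a
-``type`` field) might look like:
-
-.. code-block:: javascript
-
-    function (doc, req) {
-        // pass only documents of type "recipe" into the feed
-        return doc.type === "recipe";
-    }
-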
-Additionally, a couple of built-in filters are available and described
-below.
-
-.. _changes/filter/doc_ids:
-
-_doc_ids
---------
-
-This filter accepts only changes for documents whose ID is specified in the
-``doc_ids`` query parameter or in the payload's ``doc_ids`` array. See
-:post:`/{db}/_changes` for an example.
-
-.. _changes/filter/selector:
-
-_selector
----------
-
-.. versionadded:: 2.0
-
-This filter accepts only changes for documents which match a specified
-selector, defined using the same :ref:`selector
-syntax <find/selectors>` used for :ref:`_find <api/db/_find>`.
-
-This is significantly more efficient than using a JavaScript filter
-function and is the recommended option if filtering on document attributes only.
-
-Note that, unlike JavaScript filters, selectors do not have access to the
-request object.
-
-**Request**:
-
-.. code-block:: http
-
-    POST /recipes/_changes?filter=_selector HTTP/1.1
-    Content-Type: application/json
-    Host: localhost:5984
-
-    {
-        "selector": { "_id": { "$regex": "^_design/" } }
-    }
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Tue, 06 Sep 2016 20:03:23 GMT
-    Etag: "1H8RGBCK3ABY6ACDM7ZSC30QK"
-    Server: CouchDB (Erlang OTP/18)
-    Transfer-Encoding: chunked
-
-    {
-        "last_seq": "11-g1AAAAIreJyVkEEKwjAQRUOrqCuPoCeQZGIaXdmbaNIk1FLjyrXeRG-iN9Gb1LQRaimFlsAEJnkP_s8RQtM0VGhuz0qTmABfYXdI7h4CgeSiKIosDUVwcotJIpQSOmp_71TIpZty97OgymJAU8G5QrOLVdocrVbdfFzy-wYvcbLVEvrxh5K_NlJggIhSNiCFHbmJbu5yonttMoneYD6kD296eNOzzoRNBNqse2Xyjpd3vP96AcYNTQY4Pt5RdTOuHIwCY5S0qewLwY6OaA",
-        "pending": 0,
-        "results": [
-            {
-                "changes": [
-                    {
-                        "rev": "10-304cae84fd862832ea9814f02920d4b2"
-                    }
-                ],
-                "id": "_design/ingredients",
-                "seq": "8-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D666H6GcH6DYyMzIyNTUnwRR4LkGRoAFJAg-ZnJTIQULkAonI_ws0GaWZGqWkmRLkZYsYBiBn3Cdv2AKIS7ENWsG2mxkampsaGmLqyAOYpgEo"
-            },
-            {
-                "changes": [
-                    {
-                        "rev": "123-6f7c1b7c97a9e4f0d22bdf130e8fd817"
-                    }
-                ],
-                "deleted": true,
-                "id": "_design/cookbook",
-                "seq": "9-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D661F8YWBkZGZsbEqCL_JYgCRDA5ACGjQ_K5GBgMoFEJX7EW42SDMzSk0zIcrNEDMOQMy4T9i2BxCVYB-ygm0zNTYyNTU2xNSVBQDnK4BL"
-            },
-            {
-                "changes": [
-                    {
-                        "rev": "6-5b8a52c22580e922e792047cff3618f3"
-                    }
-                ],
-                "deleted": true,
-                "id": "_design/meta",
-                "seq": "11-g1AAAAIReJyVkE0OgjAQRiegUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloQhO7mGSCKWEjtrtnQq5dFXufhaoLIZoKjhXMLtYpc3RatXNxyW_b_ASJVstST_-UPLXRgpESEQpG5DCjlyFm7uc6F6bTKI3iA_Zhzc9vOlZZ0ImItqse2Xyjpd3vDMBfzo_vrPawLiaxihhjOI2lX0BirqHbg"
-            }
-        ]
-    }
-
-.. _changes/filter/selector/missing:
-
-Missing selector
-################
-
-If the selector object is missing from the request body,
-the error message is similar to the following example:
-
-.. code-block:: json
-
-   {
-      "error": "bad request",
-      "reason": "Selector must be specified in POST payload"
-   }
-
-.. _changes/filter/selector/invalidjson:
-
-Not a valid JSON object
-#######################
-
-If the selector object is not a well-formed JSON object,
-the error message is similar to the following example:
-
-.. code-block:: json
-
-   {
-      "error": "bad request",
-      "reason": "Selector error: expected a JSON object"
-   }
-
-.. _changes/filter/selector/invalidselector:
-
-Not a valid selector
-####################
-
-If the selector object does not contain a valid selection expression,
-the error message is similar to the following example:
-
-.. code-block:: json
-
-   {
-      "error": "bad request",
-      "reason": "Selector error: expected a JSON object"
-   }
-
-.. _changes/filter/design:
-
-_design
--------
-
-The ``_design`` filter accepts only changes for any design document within the
-requested database.
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/_changes?filter=_design HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Tue, 06 Sep 2016 12:55:12 GMT
-    ETag: "ARIHFWL3I7PIS0SPVTFU6TLR2"
-    Server: CouchDB (Erlang OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "last_seq": "11-g1AAAAIreJyVkEEKwjAQRUOrqCuPoCeQZGIaXdmbaNIk1FLjyrXeRG-iN9Gb1LQRaimFlsAEJnkP_s8RQtM0VGhuz0qTmABfYXdI7h4CgeSiKIosDUVwcotJIpQSOmp_71TIpZty97OgymJAU8G5QrOLVdocrVbdfFzy-wYvcbLVEvrxh5K_NlJggIhSNiCFHbmJbu5yonttMoneYD6kD296eNOzzoRNBNqse2Xyjpd3vP96AcYNTQY4Pt5RdTOuHIwCY5S0qewLwY6OaA",
-        "pending": 0,
-        "results": [
-            {
-                "changes": [
-                    {
-                        "rev": "10-304cae84fd862832ea9814f02920d4b2"
-                    }
-                ],
-                "id": "_design/ingredients",
-                "seq": "8-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D666H6GcH6DYyMzIyNTUnwRR4LkGRoAFJAg-ZnJTIQULkAonI_ws0GaWZGqWkmRLkZYsYBiBn3Cdv2AKIS7ENWsG2mxkampsaGmLqyAOYpgEo"
-            },
-            {
-                "changes": [
-                    {
-                        "rev": "123-6f7c1b7c97a9e4f0d22bdf130e8fd817"
-                    }
-                ],
-                "deleted": true,
-                "id": "_design/cookbook",
-                "seq": "9-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D661F8YWBkZGZsbEqCL_JYgCRDA5ACGjQ_K5GBgMoFEJX7EW42SDMzSk0zIcrNEDMOQMy4T9i2BxCVYB-ygm0zNTYyNTU2xNSVBQDnK4BL"
-            },
-            {
-                "changes": [
-                    {
-                        "rev": "6-5b8a52c22580e922e792047cff3618f3"
-                    }
-                ],
-                "deleted": true,
-                "id": "_design/meta",
-                "seq": "11-g1AAAAIReJyVkE0OgjAQRiegUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloQhO7mGSCKWEjtrtnQq5dFXufhaoLIZoKjhXMLtYpc3RatXNxyW_b_ASJVstST_-UPLXRgpESEQpG5DCjlyFm7uc6F6bTKI3iA_Zhzc9vOlZZ0ImItqse2Xyjpd3vDMBfzo_vrPawLiaxihhjOI2lX0BirqHbg"
-            }
-        ]
-    }
-
-.. _changes/filter/view:
-
-_view
------
-
-.. versionadded:: 1.2
-
-The special filter ``_view`` allows use of an existing
-:ref:`map function <mapfun>` as the :ref:`filter <filterfun>`. If the map
-function emits anything for the processed document, it counts as accepted and
-the change event is emitted to the feed. In practice, `filter` functions are
-often very similar to `map` functions, so this feature helps to reduce the
-amount of duplicated code.
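-
-For example, a map function like the following sketch (assuming recipe
-documents carry an ``ingredients`` array) passes exactly those documents to
-the changes feed:
-
-.. code-block:: javascript
-
-    function (doc) {
-        // any emit() marks the document as accepted by the _view filter
-        if (doc.ingredients) {
-            emit(doc._id, null);
-        }
-    }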
-
-.. warning::
-    While :ref:`map functions <mapfun>` don't normally process design
-    documents, using the ``_view`` filter forces them to do so. You need to
-    be sure that they are ready to handle documents with an *alien* structure
-    without failing.
-
-.. note::
-    Using the ``_view`` filter doesn't query the view index files, so you
-    cannot use common :ref:`view query parameters <api/ddoc/view>` to
-    additionally filter the changes feed by index key. Also, CouchDB doesn't
-    return the result instantly as it does for views - it really uses the
-    specified map function as the filter.
-
-    Moreover, you cannot make such filters dynamic, e.g. by processing the
-    request query parameters or handling the :ref:`userctx_object` - the map
-    function operates only on the document.
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/_changes?filter=_view&view=ingredients/by_recipe HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Tue, 06 Sep 2016 12:57:56 GMT
-    ETag: "ARIHFWL3I7PIS0SPVTFU6TLR2"
-    Server: CouchDB (Erlang OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "last_seq": "11-g1AAAAIreJyVkEEKwjAQRUOrqCuPoCeQZGIaXdmbaNIk1FLjyrXeRG-iN9Gb1LQRaimFlsAEJnkP_s8RQtM0VGhuz0qTmABfYXdI7h4CgeSiKIosDUVwcotJIpQSOmp_71TIpZty97OgymJAU8G5QrOLVdocrVbdfFzy-wYvcbLVEvrxh5K_NlJggIhSNiCFHbmJbu5yonttMoneYD6kD296eNOzzoRNBNqse2Xyjpd3vP96AcYNTQY4Pt5RdTOuHIwCY5S0qewLwY6OaA",
-        "results": [
-            {
-                "changes": [
-                    {
-                        "rev": "13-bcb9d6388b60fd1e960d9ec4e8e3f29e"
-                    }
-                ],
-                "id": "SpaghettiWithMeatballs",
-                "seq": "11-g1AAAAIReJyVkE0OgjAQRiegUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloQhO7mGSCKWEjtrtnQq5dFXufhaoLIZoKjhXMLtYpc3RatXNxyW_b_ASJVstST_-UPLXRgpESEQpG5DCjlyFm7uc6F6bTKI3iA_Zhzc9vOlZZ0ImItqse2Xyjpd3vDMBfzo_vrPawLiaxihhjOI2lX0BirqHbg"
-            }
-        ]
-    }
diff --git a/src/api/database/common.rst b/src/api/database/common.rst
deleted file mode 100644
index 4831ab7..0000000
--- a/src/api/database/common.rst
+++ /dev/null
@@ -1,468 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db:
-
-=======
-``/db``
-=======
-
-.. http:head:: /{db}
-    :synopsis: Checks the database existence
-
-    Returns the HTTP headers containing a minimal amount of information
-    about the specified database. Since the response body is empty, using the
-    HEAD method is a lightweight way to check whether the database already
-    exists.
-
-    :param db: Database name
-    :code 200: Database exists
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        HEAD /test HTTP/1.1
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 01:27:41 GMT
-        Server: CouchDB (Erlang/OTP)
-
-.. http:get:: /{db}
-    :synopsis: Returns the database information
-
-    Gets information about the specified database.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json number cluster.n: Replicas. The number of copies of every document.
-    :>json number cluster.q: Shards. The number of range partitions.
-    :>json number cluster.r: Read quorum. The number of consistent copies
-      of a document that need to be read before a successful reply.
-    :>json number cluster.w: Write quorum. The number of copies of a document
-      that need to be written before a successful reply.
-    :>json boolean compact_running: Set to ``true`` if the database compaction
-      routine is operating on this database.
-    :>json string db_name: The name of the database.
-    :>json number disk_format_version: The version of the physical format used
-      for the data when it is stored on disk.
-    :>json number doc_count: A count of the documents in the specified
-      database.
-    :>json number doc_del_count: Number of deleted documents
-    :>json string instance_start_time: Always ``"0"``. (Returned for legacy
-      reasons.)
-    :>json string purge_seq: An opaque string that describes the purge state
-      of the database. Do not rely on this string for counting the number
-      of purge operations.
-    :>json number sizes.active: The size of live data inside the database, in
-      bytes.
-    :>json number sizes.external: The uncompressed size of database contents
-      in bytes.
-    :>json number sizes.file: The size of the database file on disk in bytes.
-      View indexes are not included in the calculation.
-    :>json string update_seq: An opaque string that describes the state
-      of the database. Do not rely on this string for counting the number
-      of updates.
-    :>json boolean props.partitioned: (optional) If present and true, this
-      indicates that the database is partitioned.
-    :code 200: Request completed successfully
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /receipts HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 258
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 01:38:57 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "cluster": {
-                "n": 3,
-                "q": 8,
-                "r": 2,
-                "w": 2
-            },
-            "compact_running": false,
-            "db_name": "receipts",
-            "disk_format_version": 6,
-            "doc_count": 6146,
-            "doc_del_count": 64637,
-            "instance_start_time": "0",
-            "props": {},
-            "purge_seq": 0,
-            "sizes": {
-                "active": 65031503,
-                "external": 66982448,
-                "file": 137433211
-            },
-            "update_seq": "292786-g1AAAAF..."
-        }
-
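-    The same information can be read programmatically. As a hedged
-    illustration, the sketch below uses Python's third-party ``requests``
-    library (not part of CouchDB) against the ``receipts`` database from the
-    example above:
-
-    .. code-block:: python
-
-        import requests
-
-        resp = requests.get("http://localhost:5984/receipts")
-        if resp.status_code == 200:
-            info = resp.json()
-            # doc_count excludes deleted documents; see doc_del_count.
-            print(info["db_name"], info["doc_count"], info["sizes"]["file"])
-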
-.. http:put:: /{db}
-    :synopsis: Creates a new database
-
-    Creates a new database. The database name ``{db}`` must follow these
-    rules:
-
-    -  Name must begin with a lowercase letter (``a-z``)
-
-    -  May contain lowercase characters (``a-z``)
-
-    -  May contain digits (``0-9``)
-
-    -  May contain any of the characters ``_``, ``$``, ``(``, ``)``, ``+``,
-       ``-``, and ``/``.
-
-    If you're familiar with `Regular Expressions`_, the rules above can be
-    written as ``^[a-z][a-z0-9_$()+/-]*$``; see the validation sketch below.
-
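-    As a sketch, the same expression can be applied client-side before
-    issuing the request (Python is used here purely for illustration;
-    CouchDB performs the authoritative check):
-
-    .. code-block:: python
-
-        import re
-
-        # Mirrors the rule above: lowercase start, then the allowed set.
-        DB_NAME = re.compile(r"^[a-z][a-z0-9_$()+/-]*$")
-
-        print(bool(DB_NAME.match("receipts")))  # True
-        print(bool(DB_NAME.match("_db")))       # False: must start with a-z
-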
-    :param db: Database name
-    :query integer q: Shards, aka the number of range partitions. Default is
-      8, unless overridden in the :config:option:`cluster config <cluster/q>`.
-    :query integer n: Replicas. The number of copies of the database in the
-      cluster. The default is 3, unless overridden in the
-      :config:option:`cluster config <cluster/n>`.
-    :query boolean partitioned: Whether to create a partitioned database.
-      Default is false.
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header Location: Database URI location
-    :>json boolean ok: Operation status. Available in case of success
-    :>json string error: Error type. Available if response code is ``4xx``
-    :>json string reason: Error description. Available if response code is
-      ``4xx``
-    :code 201: Database created successfully (quorum is met)
-    :code 202: Accepted (created by at least one node, quorum is not met yet)
-    :code 400: Invalid database name
-    :code 401: CouchDB Server Administrator privileges required
-    :code 412: Database already exists
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /db HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 08:01:45 GMT
-        Location: http://localhost:5984/db
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
-
-    If we repeat the same request, CouchDB will respond with :code:`412`
-    since the database already exists:
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /db HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 412 Precondition Failed
-        Cache-Control: must-revalidate
-        Content-Length: 95
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 08:01:16 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "error": "file_exists",
-            "reason": "The database could not be created, the file already exists."
-        }
-
-    If an invalid database name is supplied, CouchDB responds with
-    :code:`400`:
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /_db HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 400 Bad Request
-        Cache-Control: must-revalidate
-        Content-Length: 194
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 08:02:10 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "error": "illegal_database_name",
-            "reason": "Name: '_db'. Only lowercase characters (a-z), digits (0-9), and any of the characters _, $, (, ), +, -, and / are allowed. Must begin with a letter."
-        }
-
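-    Putting the query parameters above together, here is a hedged sketch of
-    creating a partitioned database with a non-default shard count, using
-    Python's third-party ``requests`` library (the credentials and database
-    name are placeholders):
-
-    .. code-block:: python
-
-        import requests
-
-        # "admin"/"secret" and "orders" are placeholders.
-        resp = requests.put(
-            "http://localhost:5984/orders",
-            params={"q": 4, "partitioned": "true"},
-            auth=("admin", "secret"),
-        )
-        print(resp.status_code)  # 201 created, 202 accepted, 412 exists
-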
-.. http:delete:: /{db}
-    :synopsis: Deletes an existing database
-
-    Deletes the specified database, and all the documents and attachments
-    contained within it.
-
-    .. note::
-        To protect against accidentally deleting a database, CouchDB responds
-        with the HTTP status code 400 when the request URL includes a
-        ``?rev=`` parameter. This usually means the caller intended to delete
-        a document but forgot to add the document ID to the URL.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json boolean ok: Operation status
-    :code 200: Database removed successfully (quorum is met and database is deleted by at least one node)
-    :code 202: Accepted (deleted by at least one of the nodes, quorum is not met yet)
-    :code 400: Invalid database name, or the request URL included a ``?rev=`` parameter (the document ID was probably omitted)
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: Database doesn't exist or invalid database name
-
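-    As a rough sketch, the status codes above can be handled from Python with
-    the third-party ``requests`` library (credentials and database name are
-    placeholders):
-
-    .. code-block:: python
-
-        import requests
-
-        # Deletion requires server admin credentials; these are placeholders.
-        resp = requests.delete("http://localhost:5984/db",
-                               auth=("admin", "secret"))
-        if resp.status_code in (200, 202):
-            print("deleted")
-        elif resp.status_code == 404:
-            print("no such database")
-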
-    **Request**:
-
-    .. code-block:: http
-
-        DELETE /db HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
... 38199 lines suppressed ...