Posted to commits@couchdb.apache.org by va...@apache.org on 2019/04/03 15:40:58 UTC

[couchdb-documentation] branch master updated: Add _reshard HTTP API reference documentation (#404)

This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git


The following commit(s) were added to refs/heads/master by this push:
     new a35ea11  Add _reshard HTTP API reference documentation (#404)
a35ea11 is described below

commit a35ea115a71fd27a21245a6f14becf43adcfc8a7
Author: Nick Vatamaniuc <ni...@users.noreply.github.com>
AuthorDate: Wed Apr 3 11:40:53 2019 -0400

    Add _reshard HTTP API reference documentation (#404)
    
    Related to the main PR: https://github.com/apache/couchdb/pull/1972
---
 src/api/server/common.rst | 477 ++++++++++++++++++++++++++++++++++++++++++++++
 src/cluster/sharding.rst  | 212 +++++++++++++++++++--
 src/config/index.rst      |   1 +
 src/config/resharding.rst |  79 ++++++++
 4 files changed, 756 insertions(+), 13 deletions(-)

diff --git a/src/api/server/common.rst b/src/api/server/common.rst
index 752c522..cc2e0aa 100644
--- a/src/api/server/common.rst
+++ b/src/api/server/common.rst
@@ -1675,3 +1675,480 @@ You can verify the change by obtaining a list of UUIDs:
     :>header Content-Type: :mimetype:`image/x-icon`
     :code 200: Request completed successfully
     :code 404: The requested content could not be found
+
+.. _api/server/reshard:
+
+=============
+``/_reshard``
+=============
+
+.. versionadded:: 2.4
+
+.. http:get:: /_reshard
+    :synopsis: Retrieve summary information about resharding on the cluster
+
+    Returns a count of completed, failed, running, stopped, and total jobs
+    along with the state of resharding on the cluster.
+
+    :<header Accept: - :mimetype:`application/json`
+    :>header Content-Type: - :mimetype:`application/json`
+
+    :>json string state: ``stopped`` or ``running``
+    :>json string state_reason: ``null`` or string describing additional
+                                information or reason associated with the state
+    :>json number completed: Count of completed resharding jobs
+    :>json number failed: Count of failed resharding jobs
+    :>json number running: Count of running resharding jobs
+    :>json number stopped: Count of stopped resharding jobs
+    :>json number total: Total count of resharding jobs
+
+    :code 200: Request completed successfully
+    :code 401: CouchDB Server Administrator privileges required
+
+    **Request**:
+
+    .. code-block:: http
+
+        GET /_reshard HTTP/1.1
+        Accept: application/json
+        Host: localhost:5984
+
+    **Response**:
+
+    .. code-block:: http
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "completed": 21,
+            "failed": 0,
+            "running": 3,
+            "state": "running",
+            "state_reason": null,
+            "stopped": 0,
+            "total": 24
+        }
+
+.. http:get:: /_reshard/state
+    :synopsis: Retrieve the state of resharding on the cluster
+
+    Returns the resharding state and optional information about the state.
+
+    :<header Accept: - :mimetype:`application/json`
+    :>header Content-Type: - :mimetype:`application/json`
+
+    :>json string state: ``stopped`` or ``running``
+    :>json string state_reason: Additional information or reason associated
+                                with the state
+
+    :code 200: Request completed successfully
+    :code 401: CouchDB Server Administrator privileges required
+
+    **Request**:
+
+    .. code-block:: http
+
+        GET /_reshard/state HTTP/1.1
+        Accept: application/json
+        Host: localhost:5984
+
+    **Response**:
+
+    .. code-block:: http
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "reason": null,
+            "state": "running"
+        }
+
+.. http:put:: /_reshard/state
+    :synopsis: Change resharding state on the cluster
+
+    Change the resharding state on the cluster. The states are
+    ``stopped`` or ``running``. This starts and stops global resharding on all
+    the nodes of the cluster. If there are any running jobs, they
+    will be stopped when the state changes to ``stopped``. When the state
+    changes back to ``running``, those jobs will continue running.
+
+    :<header Accept: - :mimetype:`application/json`
+    :>header Content-Type: - :mimetype:`application/json`
+
+    :<json string state: ``stopped`` or ``running``
+    :<json string state_reason: Optional string describing additional
+                                information or reason associated with the state
+
+    :>json boolean ok: ``true``
+
+    :code 200: Request completed successfully
+    :code 400: Invalid request. Could be a bad or missing state name.
+    :code 401: CouchDB Server Administrator privileges required
+
+    **Request**:
+
+    .. code-block:: http
+
+        PUT /_reshard/state HTTP/1.1
+        Accept: application/json
+        Host: localhost:5984
+
+        {
+            "state": "stopped",
+            "reason": "Rebalancing in progress"
+        }
+
+    **Response**:
+
+    .. code-block:: http
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "ok": true
+        }
+
+.. http:get:: /_reshard/jobs
+    :synopsis: Retrieve information about all the resharding jobs on the cluster
+
+    .. note:: The shape of the response and the ``total_rows`` and ``offset``
+              fields in particular are meant to be consistent with the
+              ``_scheduler/jobs`` endpoint.
+
+    :<header Accept: - :mimetype:`application/json`
+    :>header Content-Type: - :mimetype:`application/json`
+
+    :>json list jobs: Array of json objects, one for each resharding job. For
+                      the fields of each job see the ``/_reshard/jobs/{jobid}``
+                      endpoint.
+    :>json number offset: Offset in the list of jobs. Currently hard-coded at
+                          ``0``.
+    :>json number total_rows: Total number of resharding jobs on the cluster.
+
+    :code 200: Request completed successfully
+    :code 401: CouchDB Server Administrator privileges required
+
+    **Request**:
+
+    .. code-block:: http
+
+        GET /_reshard/jobs HTTP/1.1
+        Accept: application/json
+
+    **Response**:
+
+    .. code-block:: http
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "jobs": [
+                {
+                    "history": [
+                        {
+                            "detail": null,
+                            "timestamp": "2019-03-28T15:28:02Z",
+                            "type": "new"
+                        },
+                        {
+                            "detail": "initial_copy",
+                            "timestamp": "2019-03-28T15:28:02Z",
+                            "type": "running"
+                        },
+                        ...
+                    ],
+                    "id": "001-171d1211418996ff47bd610b1d1257fc4ca2628868def4a05e63e8f8fe50694a",
+                    "job_state": "completed",
+                    "node": "node1@127.0.0.1",
+                    "source": "shards/00000000-1fffffff/d1.1553786862",
+                    "split_state": "completed",
+                    "start_time": "2019-03-28T15:28:02Z",
+                    "state_info": {},
+                    "target": [
+                        "shards/00000000-0fffffff/d1.1553786862",
+                        "shards/10000000-1fffffff/d1.1553786862"
+                    ],
+                    "type": "split",
+                    "update_time": "2019-03-28T15:28:08Z"
+                },
+                ...
+            ],
+            "offset": 0,
+            "total_rows": 24
+        }
+
+.. http:get:: /_reshard/jobs/{jobid}
+    :synopsis: Retrieve information about a particular resharding job
+
+    Get information about the resharding job identified by ``jobid``.
+
+    :<header Accept: - :mimetype:`application/json`
+    :>header Content-Type: - :mimetype:`application/json`
+
+    :>json string id: Job ID.
+    :>json string type: Currently only ``split`` is implemented.
+    :>json string job_state: The running state of the job. Could be one of
+                             ``new``, ``running``, ``stopped``, ``completed``
+                             or ``failed``.
+    :>json string split_state: State detail specific to shard splitting. It
+                               indicates how far shard splitting has
+                               progressed, and can be one of ``new``,
+                               ``initial_copy``, ``topoff1``,
+                               ``build_indices``, ``topoff2``,
+                               ``copy_local_docs``, ``update_shardmap``,
+                               ``wait_source_close``, ``topoff3``,
+                               ``source_delete`` or ``completed``.
+    :>json object state_info: Optional additional info associated with the
+                              current state.
+    :>json string source: For ``split`` jobs this will be the source shard.
+    :>json list target: For ``split`` jobs this will be a list of two or more
+                        target shards.
+    :>json list history: List of json objects recording a job's state
+                         transition history.
+
+    :code 200: Request completed successfully
+    :code 401: CouchDB Server Administrator privileges required
+
+    **Request**:
+
+    .. code-block:: http
+
+        GET /_reshard/jobs/001-171d1211418996ff47bd610b1d1257fc4ca2628868def4a05e63e8f8fe50694a HTTP/1.1
+        Accept: application/json
+
+    **Response**:
+
+    .. code-block:: http
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "id": "001-171d1211418996ff47bd610b1d1257fc4ca2628868def4a05e63e8f8fe50694a",
+            "job_state": "completed",
+            "node": "node1@127.0.0.1",
+            "source": "shards/00000000-1fffffff/d1.1553786862",
+            "split_state": "completed",
+            "start_time": "2019-03-28T15:28:02Z",
+            "state_info": {},
+            "target": [
+                "shards/00000000-0fffffff/d1.1553786862",
+                "shards/10000000-1fffffff/d1.1553786862"
+            ],
+            "type": "split",
+            "update_time": "2019-03-28T15:28:08Z",
+            "history": [
+                {
+                    "detail": null,
+                    "timestamp": "2019-03-28T15:28:02Z",
+                    "type": "new"
+                },
+                {
+                    "detail": "initial_copy",
+                    "timestamp": "2019-03-28T15:28:02Z",
+                    "type": "running"
+                },
+                ...
+            ]
+        }
+
+.. http:post:: /_reshard/jobs
+    :synopsis: Create one or more resharding jobs
+
+    Depending on what fields are specified in the request, one or more
+    resharding jobs will be created. The response is a json array of results.
+    Each result object represents a single resharding job for a particular node
+    and range. Some of the responses could be successful and some could fail.
+    Successful results will have the ``"ok": true`` key and value, and
+    failed jobs will have the ``"error": "{error_message}"`` key and value.
+
+    :<header Accept: - :mimetype:`application/json`
+    :>header Content-Type: - :mimetype:`application/json`
+
+    :<json string type: Type of job. Currently only ``"split"`` is accepted.
+
+    :<json string db: Database to split. This is mutually exclusive with the
+                      ``"shard"`` field.
+
+    :<json string node: Split shards on a particular node. This is an optional
+                        parameter. The value should be one of the nodes
+                        returned from the ``_membership`` endpoint.
+
+    :<json string range: Split copies of shards in the given range. The range
+                         format is ``hhhhhhhh-hhhhhhhh`` where ``h`` is a
+                         hexadecimal digit. This format is used since this is
+                         how the ranges are represented in the file system.
+                         This parameter is optional and is mutually
+                         exclusive with the ``"shard"`` field.
+
+    :<json string shard: Split a particular shard. The shard should be
+                         specified as ``"shards/{range}/{db}.{suffix}"``, where
+                         ``range`` has the ``hhhhhhhh-hhhhhhhh`` format, ``db``
+                         is the database name, and ``suffix`` is the shard
+                         (timestamp) creation suffix.
+
+    :>json boolean ok: ``true`` if the job was created successfully.
+
+    :>json string error: Error message if a job could not be created.
+
+    :>json string node: Cluster node where the job was created and is running.
+
+    :code 201: One or more jobs were successfully created
+    :code 400: Invalid request. Parameter validation might have failed.
+    :code 401: CouchDB Server Administrator privileges required
+    :code 404: Db, node, range or shard was not found
+
+    **Request**:
+
+    .. code-block:: http
+
+        POST /_reshard/jobs HTTP/1.1
+        Accept: application/json
+        Content-Type: application/json
+
+        {
+            "db": "db3",
+            "range": "80000000-ffffffff",
+            "type": "split"
+        }
+
+    **Response**:
+
+    .. code-block:: http
+
+        HTTP/1.1 201 Created
+        Content-Type: application/json
+
+        [
+            {
+                "id": "001-30d7848a6feeb826d5e3ea5bb7773d672af226fd34fd84a8fb1ca736285df557",
+                "node": "node1@127.0.0.1",
+                "ok": true,
+                "shard": "shards/80000000-ffffffff/db3.1554148353"
+            },
+            {
+                "id": "001-c2d734360b4cb3ff8b3feaccb2d787bf81ce2e773489eddd985ddd01d9de8e01",
+                "node": "node2@127.0.0.1",
+                "ok": true,
+                "shard": "shards/80000000-ffffffff/db3.1554148353"
+            }
+        ]
+
+.. http:delete:: /_reshard/jobs/{jobid}
+    :synopsis: Remove a resharding job
+
+    If the job is running, stop the job and then remove it.
+
+    :>json boolean ok: ``true`` if the job was removed successfully.
+
+    :code 200: The job was removed successfully
+    :code 401: CouchDB Server Administrator privileges required
+    :code 404: The job was not found
+
+    **Request**:
+
+    .. code-block:: http
+
+        DELETE /_reshard/jobs/001-171d1211418996ff47bd610b1d1257fc4ca2628868def4a05e63e8f8fe50694a HTTP/1.1
+
+    **Response**:
+
+    .. code-block:: http
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "ok": true
+        }
+
+.. http:get:: /_reshard/jobs/{jobid}/state
+    :synopsis: Retrieve the state of a single resharding job
+
+    Returns the running state of a resharding job identified by ``jobid``.
+
+    :<header Accept: - :mimetype:`application/json`
+    :>header Content-Type: - :mimetype:`application/json`
+
+    :>json string state: One of ``new``, ``running``, ``stopped``,
+                         ``completed`` or ``failed``.
+
+    :>json string state_reason: Additional information associated with the
+                                state
+
+    :code 200: Request completed successfully
+    :code 401: CouchDB Server Administrator privileges required
+    :code 404: The job was not found
+
+    **Request**:
+
+    .. code-block:: http
+
+        GET /_reshard/jobs/001-b3da04f969bbd682faaab5a6c373705cbcca23f732c386bb1a608cfbcfe9faff/state HTTP/1.1
+        Accept: application/json
+        Host: localhost:5984
+
+    **Response**:
+
+    .. code-block:: http
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "reason": null,
+            "state": "running"
+        }
+
+.. http:put:: /_reshard/jobs/{jobid}/state
+    :synopsis: Change the state of a resharding job
+
+    Change the state of a particular resharding job identified by ``jobid``.
+    The state can be changed from ``stopped`` to ``running`` or from
+    ``running`` to ``stopped``. If an individual job is ``stopped`` via this
+    API it will stay ``stopped`` even after the global resharding state is
+    toggled from ``stopped`` to ``running``. If the job is already
+    ``completed`` its state will stay ``completed``.
+
+    :<header Accept: - :mimetype:`application/json`
+    :>header Content-Type: - :mimetype:`application/json`
+
+    :<json string state: ``stopped`` or ``running``
+    :<json string state_reason: Optional string describing additional
+                                information or reason associated with the state
+
+    :>json boolean ok: ``true``
+
+    :code 200: Request completed successfully
+    :code 400: Invalid request. Could be a bad state name, for example.
+    :code 401: CouchDB Server Administrator privileges required
+    :code 404: The job was not found
+
+    **Request**:
+
+    .. code-block:: http
+
+        PUT /_reshard/jobs/001-b3da04f969bbd682faaab5a6c373705cbcca23f732c386bb1a608cfbcfe9faff/state HTTP/1.1
+        Accept: application/json
+        Host: localhost:5984
+
+        {
+            "state": "stopped",
+            "reason": "Rebalancing in progress"
+        }
+
+    **Response**:
+
+    .. code-block:: http
+
+        HTTP/1.1 200 OK
+        Content-Type: application/json
+
+        {
+            "ok": true
+        }
diff --git a/src/cluster/sharding.rst b/src/cluster/sharding.rst
index 90a8fcb..b5cdec3 100644
--- a/src/cluster/sharding.rst
+++ b/src/cluster/sharding.rst
@@ -247,6 +247,10 @@ documents is mapped.
 Moving a shard
 --------------
 
+When moving shards or performing other shard manipulations on the cluster, it
+is advisable to stop all resharding jobs on the cluster. See
+:ref:`cluster/sharding/stop_resharding` for more details.
+
 This section describes how to manually place and replace shards. These
 activities are critical steps when you determine your cluster is too big
 or too small, and want to resize it successfully, or you have noticed
@@ -666,25 +670,207 @@ cluster do not host any replicas for newly created databases, by giving
 them a zone attribute that does not appear in the ``[cluster]``
 placement string.
 
-Resharding a database to a new q value
---------------------------------------
+.. _cluster/sharding/splitting_shards:
+
+Splitting Shards
+----------------
+
+The :ref:`api/server/reshard` endpoint is an HTTP API for shard manipulation.
+Currently it only supports shard splitting. To perform shard merging, refer to
+the manual process outlined in the :ref:`cluster/sharding/merging_shards`
+section.
+
+The main way to interact with :ref:`api/server/reshard` is to create resharding
+jobs, monitor those jobs, wait until they complete, remove them, post new jobs,
+and so on. What follows are a few steps one might take to use this API to split
+shards.
+
+First, it's a good idea to call ``GET /_reshard`` to see a summary of
+resharding on the cluster.
+
+.. code-block:: bash
+
+   $ curl -s $COUCH_URL:5984/_reshard | jq .
+   {
+     "state": "running",
+     "state_reason": null,
+     "completed": 3,
+     "failed": 0,
+     "running": 0,
+     "stopped": 0,
+     "total": 3
+   }
+
+Two important things to pay attention to are the total number of jobs and the state.
+
+The ``state`` field indicates the state of resharding on the cluster. Normally
+it would be ``running``; however, another user could have disabled resharding
+temporarily. Then the state would be ``stopped`` and, hopefully, there would be
+a reason or a comment in the value of the ``state_reason`` field. See
+:ref:`cluster/sharding/stop_resharding` for more details.
+
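+The state (and any reason associated with it) can also be fetched on its own
+from the ``/_reshard/state`` endpoint, for example:
+
+.. code-block:: bash
+
+   $ curl -s $COUCH_URL:5984/_reshard/state | jq .
+   {
+     "reason": null,
+     "state": "running"
+   }
+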
+The ``total`` number of jobs is important to keep an eye on because there is a
+maximum number of resharding jobs per node, and creating new jobs after the
+limit has been reached will result in an error. Before starting new jobs it's a
+good idea to remove already completed jobs. See the :ref:`reshard configuration
+section <config/reshard>` for the default value of the ``max_jobs`` parameter
+and how to adjust it if needed.
+
+For example, if the jobs have completed, to remove all the jobs run:
+
+.. code-block:: bash
+
+    $ curl -s $COUCH_URL:5984/_reshard/jobs | jq -r '.jobs[].id' | \
+      while read -r jobid; do
+          curl -s -XDELETE $COUCH_URL:5984/_reshard/jobs/$jobid
+      done
+
+Then it's a good idea to see what the db shard map looks like.
+
+.. code-block:: bash
+
+    $ curl -s $COUCH_URL:5984/db1/_shards | jq '.'
+    {
+      "shards": {
+        "00000000-7fffffff": [
+          "node1@127.0.0.1",
+          "node2@127.0.0.1",
+          "node3@127.0.0.1"
+        ],
+        "80000000-ffffffff": [
+          "node1@127.0.0.1",
+          "node2@127.0.0.1",
+          "node3@127.0.0.1"
+        ]
+      }
+    }
+
+In this example we'll split all the copies of the ``00000000-7fffffff`` range.
+The API allows a combination of parameters, such as splitting all
+the ranges on all the nodes, all the ranges on just one node, or one particular
+range on one particular node. These are specified via the ``db``,
+``node`` and ``range`` job parameters.
+
+To split all the copies of ``00000000-7fffffff`` we issue a request like this:
+
+.. code-block:: bash
+
+    $ curl -s -H "Content-type: application/json" -XPOST $COUCH_URL:5984/_reshard/jobs \
+      -d '{"type": "split", "db":"db1", "range":"00000000-7fffffff"}' | jq '.'
+    [
+      {
+        "ok": true,
+        "id": "001-ef512cfb502a1c6079fe17e9dfd5d6a2befcc694a146de468b1ba5339ba1d134",
+        "node": "node1@127.0.0.1",
+        "shard": "shards/00000000-7fffffff/db1.1554242778"
+      },
+      {
+        "ok": true,
+        "id": "001-cec63704a7b33c6da8263211db9a5c74a1cb585d1b1a24eb946483e2075739ca",
+        "node": "node2@127.0.0.1",
+        "shard": "shards/00000000-7fffffff/db1.1554242778"
+      },
+      {
+        "ok": true,
+        "id": "001-fc72090c006d9b059d4acd99e3be9bb73e986d60ca3edede3cb74cc01ccd1456",
+        "node": "node3@127.0.0.1",
+        "shard": "shards/00000000-7fffffff/db1.1554242778"
+      }
+    ]
+
+The request returned three jobs, one job for each of the three copies.
+
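+To create a job for just one copy on one particular node, the optional ``node``
+parameter can be added, or a specific shard file can be targeted with ``shard``
+instead of ``db`` and ``range``. A sketch, reusing the node and shard names
+from the responses above:
+
+.. code-block:: bash
+
+    $ curl -s -H "Content-type: application/json" -XPOST $COUCH_URL:5984/_reshard/jobs \
+      -d '{"type": "split", "db": "db1", "range": "00000000-7fffffff", "node": "node1@127.0.0.1"}' | jq '.'
+
+    $ curl -s -H "Content-type: application/json" -XPOST $COUCH_URL:5984/_reshard/jobs \
+      -d '{"type": "split", "shard": "shards/00000000-7fffffff/db1.1554242778", "node": "node1@127.0.0.1"}' | jq '.'
+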
+To check progress of these jobs use ``GET /_reshard/jobs`` or ``GET
+/_reshard/jobs/{jobid}``.
+
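+For example, to list each job's ID and current state on one line (a small
+``jq`` convenience; the IDs and states shown are illustrative, taken from the
+example above):
+
+.. code-block:: bash
+
+    $ curl -s $COUCH_URL:5984/_reshard/jobs | jq -r '.jobs[] | "\(.id) \(.job_state)"'
+    001-ef512cfb502a1c6079fe17e9dfd5d6a2befcc694a146de468b1ba5339ba1d134 running
+    001-cec63704a7b33c6da8263211db9a5c74a1cb585d1b1a24eb946483e2075739ca running
+    001-fc72090c006d9b059d4acd99e3be9bb73e986d60ca3edede3cb74cc01ccd1456 running
+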
+Eventually, these jobs should complete and the shard map should look like this:
+
+.. code-block:: bash
+
+    $ curl -s $COUCH_URL:5984/db1/_shards | jq '.'
+    {
+      "shards": {
+        "00000000-3fffffff": [
+          "node1@127.0.0.1",
+          "node2@127.0.0.1",
+          "node3@127.0.0.1"
+        ],
+        "40000000-7fffffff": [
+          "node1@127.0.0.1",
+          "node2@127.0.0.1",
+          "node3@127.0.0.1"
+        ],
+        "80000000-ffffffff": [
+          "node1@127.0.0.1",
+          "node2@127.0.0.1",
+          "node3@127.0.0.1"
+        ]
+      }
+    }
+
+.. _cluster/sharding/stop_resharding:
+
+Stopping Resharding Jobs
+------------------------
+
+Resharding at the cluster level can be stopped and then restarted. This can
+be helpful to allow external tools which manipulate the shard map to avoid
+interfering with resharding jobs. To stop all resharding jobs on a cluster,
+issue a ``PUT`` to the ``/_reshard/state`` endpoint with the
+``"state": "stopped"`` key and value. You can also specify an optional note or
+reason for stopping.
+
+For example:
+
+.. code-block:: bash
+
+    $ curl -s -H "Content-type: application/json" \
+      -XPUT $COUCH_URL:5984/_reshard/state \
+      -d '{"state": "stopped", "reason":"Moving some shards"}'
+    {"ok": true}
+
+This state will then be reflected in the global summary:
+
+.. code-block:: bash
+
+   $ curl -s $COUCH_URL:5984/_reshard | jq .
+   {
+     "state": "stopped",
+     "state_reason": "Moving some shards",
+     "completed": 74,
+     "failed": 0,
+     "running": 0,
+     "stopped": 0,
+     "total": 74
+   }
+
+To restart, issue a ``PUT`` request like above with ``running`` as the state.
+That should resume all the shard splitting jobs from their last checkpoint.
+
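+For example, to resume resharding:
+
+.. code-block:: bash
+
+    $ curl -s -H "Content-type: application/json" \
+      -XPUT $COUCH_URL:5984/_reshard/state \
+      -d '{"state": "running"}'
+    {"ok": true}
+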
+See the API reference for more details: :ref:`api/server/reshard`.
+
+.. _cluster/sharding/merging_shards:
+
+Merging Shards
+--------------
 
-The ``q`` value for a database can only be set when the database is
-created, precluding live resharding. Instead, to reshard a database, it
-must be regenerated. Here are the steps:
+The ``q`` value for a database can be set when the database is created, or it
+can be increased later by splitting some of the shards (see
+:ref:`cluster/sharding/splitting_shards`). In order to decrease ``q`` and merge
+some shards together, the database must be regenerated. Here are the steps;
+a ``curl`` sketch of them follows the list.
 
-1. Create a temporary database with the desired shard settings, by
+1. If there are running shard splitting jobs on the cluster, stop them via the
+   HTTP API (see :ref:`cluster/sharding/stop_resharding`).
+2. Create a temporary database with the desired shard settings, by
    specifying the q value as a query parameter during the PUT
    operation.
-2. Stop clients accessing the database.
-3. Replicate the primary database to the temporary one. Multiple
+3. Stop clients accessing the database.
+4. Replicate the primary database to the temporary one. Multiple
    replications may be required if the primary database is under
    active use.
-4. Delete the primary database. **Make sure nobody is using it!**
-5. Recreate the primary database with the desired shard settings.
-6. Clients can now access the database again.
-7. Replicate the temporary back to the primary.
-8. Delete the temporary database.
+5. Delete the primary database. **Make sure nobody is using it!**
+6. Recreate the primary database with the desired shard settings.
+7. Clients can now access the database again.
+8. Replicate the temporary back to the primary.
+9. Delete the temporary database.
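+
+As a rough ``curl`` sketch of the above (the database name, target ``q`` value,
+host and credentials are illustrative; stop clients at the appropriate points
+as described in the list):
+
+.. code-block:: bash
+
+    # 2. create a temporary database with the desired (smaller) q value
+    $ curl -X PUT "http://127.0.0.1:5984/db1_temp?q=4"
+
+    # 4. replicate the primary database into the temporary one
+    $ curl -X POST "http://127.0.0.1:5984/_replicate" \
+      -H "Content-Type: application/json" \
+      -d '{"source": "http://127.0.0.1:5984/db1", "target": "http://127.0.0.1:5984/db1_temp"}'
+
+    # 5-6. delete and recreate the primary with the new shard settings
+    $ curl -X DELETE "http://127.0.0.1:5984/db1"
+    $ curl -X PUT "http://127.0.0.1:5984/db1?q=4"
+
+    # 8. replicate the temporary database back to the primary
+    $ curl -X POST "http://127.0.0.1:5984/_replicate" \
+      -H "Content-Type: application/json" \
+      -d '{"source": "http://127.0.0.1:5984/db1_temp", "target": "http://127.0.0.1:5984/db1"}'
+
+    # 9. delete the temporary database
+    $ curl -X DELETE "http://127.0.0.1:5984/db1_temp"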
 
 Once all steps have completed, the database can be used again. The
 cluster will create and distribute its shards according to placement
diff --git a/src/config/index.rst b/src/config/index.rst
index 492f23d..eebdc17 100644
--- a/src/config/index.rst
+++ b/src/config/index.rst
@@ -31,3 +31,4 @@ Configuration
     query-servers
     services
     misc
+    resharding
diff --git a/src/config/resharding.rst b/src/config/resharding.rst
new file mode 100644
index 0000000..de91011
--- /dev/null
+++ b/src/config/resharding.rst
@@ -0,0 +1,79 @@
+.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
+.. use this file except in compliance with the License. You may obtain a copy of
+.. the License at
+..
+..   http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing, software
+.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+.. License for the specific language governing permissions and limitations under
+.. the License.
+
+.. highlight:: ini
+
+==========
+Resharding
+==========
+
+.. _config/reshard:
+
+Resharding Configuration
+========================
+
+.. config:section:: resharding :: Resharding Configuration
+
+    .. config:option:: max_jobs
+
+        Maximum number of resharding jobs per cluster node. This includes
+        completed, failed, and running jobs. If a job appears in the
+        ``_reshard/jobs`` HTTP API results it will be counted towards the
+        limit. When more than ``max_jobs`` jobs have been created, subsequent
+        requests will start to fail with the ``max_jobs_exceeded`` error::
+
+             [reshard]
+             max_jobs = 25
+
+    .. config:option:: max_retries
+
+        How many times to retry shard splitting steps if they fail. For
+        example, if indexing or topping off fails, it will be retried up to
+        this many times before the whole resharding job fails::
+
+             [reshard]
+             max_retries = 1
+
+    .. config:option:: retry_interval_sec
+
+        How long to wait between subsequent retries::
+
+             [reshard]
+             retry_interval_sec = 10
+
+    .. config:option:: delete_source
+
+        Indicates if the source shard should be deleted after resharding has
+        finished. By default, it is ``true``, as that recovers the space
+        utilized by the shard. When debugging or when extra safety is required,
+        this can be switched to ``false``::
+
+             [reshard]
+             delete_source = true
+
+    .. config:option:: update_shard_map_timeout_sec
+
+        How many seconds to wait for the shard map update operation to
+        complete. If there is a large number of shard db changes waiting to
+        finish replicating, it might be beneficial to increase this timeout::
+
+            [reshard]
+            update_shard_map_timeout_sec = 60
+
+    .. config:option:: source_close_timeout_sec
+
+        How many seconds to wait for the source shard to close. "Close" in this
+        context means that client requests which keep the database open have
+        all finished::
+
+            [reshard]
+            source_close_timeout_sec = 600