Posted to commits@couchdb.apache.org by wo...@apache.org on 2019/01/19 04:57:10 UTC
[couchdb-documentation] branch shard-level-docs updated: Improve
placement docs, closes #374 (#376)
This is an automated email from the ASF dual-hosted git repository.
wohali pushed a commit to branch shard-level-docs
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git
The following commit(s) were added to refs/heads/shard-level-docs by this push:
new 2a7418c Improve placement docs, closes #374 (#376)
new 306d961 Merge branch 'master' into shard-level-docs
2a7418c is described below
commit 2a7418cfe01e43bbfa5e9cb6fa3725096fb9a4be
Author: Joan Touzet <wo...@users.noreply.github.com>
AuthorDate: Fri Jan 18 23:56:50 2019 -0500
Improve placement docs, closes #374 (#376)
Closes #374.
---
src/cluster/databases.rst | 5 +++++
src/cluster/sharding.rst | 8 ++++++++
src/config/cluster.rst | 5 +++++
3 files changed, 18 insertions(+)
diff --git a/src/cluster/databases.rst b/src/cluster/databases.rst
index f008eb6..09876d0 100644
--- a/src/cluster/databases.rst
+++ b/src/cluster/databases.rst
@@ -51,6 +51,11 @@ In BigCouch, the predecessor to CouchDB 2.0's clustering functionality, there
was the concept of zones. CouchDB 2.0 carries this forward with cluster
placement rules.
+.. warning::
+
+ Use of the ``placement`` argument will **override** the standard
+ logic for shard replica cardinality (specified by ``[cluster] n``).
+
First, each node must be labeled with a zone attribute. This defines which
zone each node is in. You do this by editing the node's document in the
``/nodes`` database, which is accessed through the "back-door" (5986) port.
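The zone-labeling step described in the hunk above can be sketched as follows. This is a sketch, not part of the commit: the node name ``couchdb@node1.example.com``, the zone name ``metro-a``, and the credentials are all illustrative assumptions.

```shell
# Sketch only: prepare a node document that carries a zone attribute.
# Node name and zone name are assumptions; a real update must also include
# the document's current _rev, fetched from the /nodes database first.
cat > /tmp/node-zone.json <<'EOF'
{"_id": "couchdb@node1.example.com", "zone": "metro-a"}
EOF

# The document would then be PUT through the "back-door" (5986) port:
# curl -X PUT "http://admin:password@localhost:5986/nodes/couchdb%40node1.example.com" \
#      -H "Content-Type: application/json" -d @/tmp/node-zone.json

# Show the zone field that the update would set:
grep -o '"zone": "metro-a"' /tmp/node-zone.json
```

Every node that should participate in placement decisions needs such a zone attribute before the placement rules can match it.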
diff --git a/src/cluster/sharding.rst b/src/cluster/sharding.rst
index aeb8d02..cc7268e 100644
--- a/src/cluster/sharding.rst
+++ b/src/cluster/sharding.rst
@@ -468,6 +468,11 @@ Specifying database placement
You can configure CouchDB to put shard replicas on certain nodes at
database creation time using placement rules.
+.. warning::
+
+ Use of the ``placement`` option will **override** the ``n`` option,
+ whether ``n`` is set in the ``.ini`` file or passed in the request URL.
+
First, each node must be labeled with a zone attribute. This defines
which zone each node is in. You do this by editing the node’s document
in the ``/_nodes`` database, which is accessed through the node-local
@@ -509,6 +514,9 @@ when the database is created, using the same syntax as the ini file:
curl -X PUT $COUCH_URL:5984/<dbname>?zone=<zone>
+The ``placement`` argument may also be specified here. Note that this *will*
+override the logic that determines the number of replicas created!
+
Note that you can also use this system to ensure certain nodes in the
cluster do not host any replicas for newly created databases, by giving
them a zone attribute that does not appear in the ``[cluster]``
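The placement-at-creation behaviour described in this hunk can be sketched as a one-liner. This is a sketch only: the database name ``mydb`` and the zone names ``metro-a``/``metro-b`` are assumptions, not values from the commit.

```shell
# Sketch: create a database with an explicit placement rule, which (per the
# warning above) overrides [cluster] n. Database and zone names are assumptions.
DB_URL="$COUCH_URL:5984/mydb?placement=metro-a:2,metro-b:1"

# Against a live cluster with admin credentials this would be:
# curl -X PUT "$DB_URL"

echo "$DB_URL"
```

With this URL the cluster would keep two replicas in ``metro-a`` and one in ``metro-b``, for three replicas total regardless of the configured ``n``.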
diff --git a/src/config/cluster.rst b/src/config/cluster.rst
index 969f7e2..a7d605f 100644
--- a/src/config/cluster.rst
+++ b/src/config/cluster.rst
@@ -56,6 +56,11 @@ Cluster Options
.. config:option:: placement
+ .. warning::
+
+ Use of this option will **override** the ``n`` option for replica
+ cardinality. Use with care.
+
Sets the cluster-wide replica placement policy when creating new
databases. The value must be a comma-delimited list of strings of the
format ``zone_name:#``, where ``zone_name`` is a zone as specified in
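As a sketch of the ``zone_name:#`` format described above (the zone names are illustrative assumptions, not from the commit), a ``[cluster]`` stanza might look like:

```ini
; Illustrative config fragment: two replicas in metro-a, one in metro-b.
; Per the warning above, placement overrides the n setting below.
[cluster]
n = 3
placement = metro-a:2,metro-b:1
```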