Posted to commits@couchdb.apache.org by fl...@apache.org on 2018/03/10 12:44:00 UTC

[couchdb-documentation] branch master updated: Grammar and formatting fixes (#254)

This is an automated email from the ASF dual-hosted git repository.

flimzy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/couchdb-documentation.git


The following commit(s) were added to refs/heads/master by this push:
     new 8a19e93  Grammar and formatting fixes (#254)
8a19e93 is described below

commit 8a19e93e6e47450a74b845c7a2e5378a9cda18eb
Author: Jonathan Hall <fl...@flimzy.com>
AuthorDate: Sat Mar 10 13:43:58 2018 +0100

    Grammar and formatting fixes (#254)
---
 src/cluster/sharding.rst | 12 ++++++------
 src/cluster/theory.rst   | 20 ++++++++++----------
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/src/cluster/sharding.rst b/src/cluster/sharding.rst
index 5e256d5..dec89d4 100644
--- a/src/cluster/sharding.rst
+++ b/src/cluster/sharding.rst
@@ -27,8 +27,8 @@ scale out.
 
 For simplicity we will start fresh and small.
 
-Start node1 and add a database to it. To keep it simple we will have 2 shards
-and no replicas.
+Start ``node1`` and add a database to it. To keep it simple we will have 2
+shards and no replicas.
 
 .. code-block:: bash
 
@@ -46,8 +46,8 @@ If you look in the directory ``data/shards`` you will find the 2 shards.
     |        -- small.1425202577.couch
 
 Now, check the node-local ``_dbs_`` database. Here, the metadata for each
-database is stored. As the database is called small, there is a document called
-small there. Let us look in it. Yes, you can get it with curl too:
+database is stored. As the database is called ``small``, there is a document
+called ``small`` there. Let us look in it. Yes, you can get it with curl too:
 
 .. code-block:: javascript
 
@@ -183,7 +183,7 @@ After PUTting this document, it's like magic: the shards are now on node2 too!
 We now have ``n=2``!
 
 If the shards are large, then you can copy them over manually and only have
-CouchDB syncing the changes from the last minutes instead.
+CouchDB sync the changes from the last minutes instead.
 
 .. _cluster/sharding/move:
 
@@ -254,7 +254,7 @@ without the users noticing anything.
 Views
 =====
 
-The views needs to be moved together with the shards. If you do not, then
+The views need to be moved together with the shards. If you do not, then
 CouchDB will rebuild them and this will take time if you have a lot of
 documents.
 
diff --git a/src/cluster/theory.rst b/src/cluster/theory.rst
index 2c0945a..e92bd6b 100644
--- a/src/cluster/theory.rst
+++ b/src/cluster/theory.rst
@@ -33,29 +33,29 @@ When creating a database you can send your own values with request and
 thereby override the defaults in ``default.ini``.
 
 In clustered operation, a quorum must be reached before CouchDB returns a
-``200`` for a fetch, or 201 for a write operation. A quorum is defined as one
-plus half the number of "relevant copies". "Relevant copies" is defined
+``200`` for a fetch, or ``201`` for a write operation. A quorum is defined as
+one plus half the number of "relevant copies". "Relevant copies" is defined
 slightly differently for read and write operations.
 
 For read operations, the number of relevant copies is the number of
 currently-accessible shards holding the requested data, meaning that in the case
 of a failure or network partition, the number of relevant copies may be lower
 than the number of replicas in the cluster.  The number of read copies can be
-set with the rparameter.
+set with the ``r`` parameter.
 
-For write operations the number of relevant copies is always `n`, the number of
-replicas in the cluster.  For write operations, the number of copies can be set
-using the w parameter. If fewer than this number of nodes is available, a 202
-will be returned.
+For write operations the number of relevant copies is always ``n``, the number
+of replicas in the cluster.  For write operations, the number of copies can be
+set using the ``w`` parameter. If fewer than this number of nodes is available, a
+``202`` will be returned.
 
 We will focus on the shards and replicas for now.
 
 A shard is a part of a database. The more shards, the more you can scale out.
 If you have 4 shards, that means that you can have at most 4 nodes. With one
-shard you can have only one node, just the way CouchDB 1.x is.
+shard you can have only one node, just as with CouchDB 1.x.
 
-Replicas adds fail resistance, as some nodes can be offline without everything
-comes crashing down.
+Replicas add failure resistance, as some nodes can be offline without everything
+crashing down.
 
 * ``n=1`` All nodes must be up.
 * ``n=2`` Any 1 node can be down.
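
As a worked example of the quorum rule above: with three relevant copies the
quorum is one plus half of three, i.e. two, so a read returns ``200`` once two
copies agree and a write returns ``201`` once two copies are stored. A rough
sketch of passing the parameters on a single request, assuming ``r`` and ``w``
are given as query-string parameters as the text describes, using the same
``small`` database and default port as above (the document name ``mydoc`` is
only an illustration):

.. code-block:: bash

    # Read with an explicit read quorum of 2.
    curl "http://localhost:5984/small/mydoc?r=2"

    # Write with an explicit write quorum of 2; per the text above, a 202
    # is returned when fewer than this number of nodes is available.
    curl -X PUT "http://localhost:5984/small/mydoc?w=2" \
         -H "Content-Type: application/json" \
         -d '{"value": 1}'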
