Posted to commits@kudu.apache.org by to...@apache.org on 2017/08/01 21:10:49 UTC

[3/4] kudu git commit: [docs] fixed typos in raft-config-change.md

[docs] fixed typos in raft-config-change.md

Change-Id: Ie1f43915b5f2a393957bc0d6b9e120f7419c72b1
Reviewed-on: http://gerrit.cloudera.org:8080/7548
Tested-by: Kudu Jenkins
Reviewed-by: Todd Lipcon <to...@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/kudu/repo
Commit: http://git-wip-us.apache.org/repos/asf/kudu/commit/ec93064c
Tree: http://git-wip-us.apache.org/repos/asf/kudu/tree/ec93064c
Diff: http://git-wip-us.apache.org/repos/asf/kudu/diff/ec93064c

Branch: refs/heads/master
Commit: ec93064c921b56d0a6c79427eb313d5d1f5c24f1
Parents: af34314
Author: Alexey Serbin <as...@cloudera.com>
Authored: Mon Jul 31 21:12:53 2017 -0700
Committer: Todd Lipcon <to...@apache.org>
Committed: Tue Aug 1 21:08:20 2017 +0000

----------------------------------------------------------------------
 docs/design-docs/raft-config-change.md | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kudu/blob/ec93064c/docs/design-docs/raft-config-change.md
----------------------------------------------------------------------
diff --git a/docs/design-docs/raft-config-change.md b/docs/design-docs/raft-config-change.md
index d598832..728fe4b 100644
--- a/docs/design-docs/raft-config-change.md
+++ b/docs/design-docs/raft-config-change.md
@@ -20,12 +20,11 @@ tablet. The use cases for this functionality include:
 
 * Replacing a failed tablet server to maintain the desired replication
   factor of tablet data.
-* Growing the Kudu cluster over time. "Rebalancing" tablet locations to even
-* out the load across tablet
-  servers.
+* Growing the Kudu cluster over time. This might need "rebalancing" tablet
+  locations to even out the load across tablet servers.
 * Increasing the replication of one or more tablets of a table if they
-  become hot (eg in a time series workload, making today’s partitions have a
-  higher replication)
+  become hot (e.g. in a time series workload, making today’s partitions have a
+  higher replication).
 
 ## Scope
 This document covers the following topics:
@@ -236,7 +235,7 @@ and replay all data, it may be tens of minutes or even hours before regaining
 fault tolerance.
 
 As an example, consider the case of a four-node cluster, each node having 1TB
-of replica data. If a node fails, then its 1TB worth of data must be transfered
+of replica data. If a node fails, then its 1TB worth of data must be transferred
 among the remaining nodes, so we need to wait for 300+GB of data to transfer,
 which could take up to an hour. During that hour, we would have no
 latency-leveling on writes unless we did something like the above.
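----------------------------------------------------------------------

A quick back-of-the-envelope check of the recovery-time figure in the hunk
above: in the four-node example, a failed node's 1TB of replica data is spread
across the three surviving nodes, roughly 333GB each. The short Python sketch
below works through that arithmetic; the ~100MB/s sustained per-node copy rate
is an assumed figure for illustration, not something stated in the design doc.

# Back-of-the-envelope recovery-time estimate for the four-node example above.
# The per-node copy rate is an assumption for illustration only.
failed_node_data_gb = 1000              # 1TB of replica data on the failed node
remaining_nodes = 3                     # four-node cluster minus the failed node
per_node_gb = failed_node_data_gb / remaining_nodes   # ~333GB to each survivor
assumed_rate_mb_per_s = 100             # assumed sustained re-replication rate
minutes = per_node_gb * 1000 / assumed_rate_mb_per_s / 60
print(f"~{per_node_gb:.0f} GB per survivor, ~{minutes:.0f} minutes")
# -> ~333 GB per survivor, ~56 minutes

At that assumed rate the survivors finish in just under an hour, consistent
with the "could take up to an hour" estimate in the patched text.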