Posted to commits@spark.apache.org by rx...@apache.org on 2016/02/22 00:27:09 UTC

spark git commit: [MINOR][DOCS] Fix typos in `configuration.md` and `hardware-provisioning.md`

Repository: spark
Updated Branches:
  refs/heads/master 6c3832b26 -> 03e62aa3f


[MINOR][DOCS] Fix typos in `configuration.md` and `hardware-provisioning.md`

## What changes were proposed in this pull request?

This PR fixes some typos in the following documentation files:
 * `NOTICE`, `configuration.md`, and `hardware-provisioning.md`.

## How was this patch tested?

manual tests

Author: Dongjoon Hyun <do...@apache.org>

Closes #11289 from dongjoon-hyun/minor_fix_typos_notice_and_confdoc.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/03e62aa3
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/03e62aa3
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/03e62aa3

Branch: refs/heads/master
Commit: 03e62aa3f6e16a271262c786be3d1542af79d3e4
Parents: 6c3832b
Author: Dongjoon Hyun <do...@apache.org>
Authored: Sun Feb 21 15:27:07 2016 -0800
Committer: Reynold Xin <rx...@databricks.com>
Committed: Sun Feb 21 15:27:07 2016 -0800

----------------------------------------------------------------------
 docs/configuration.md         | 10 +++++-----
 docs/hardware-provisioning.md |  2 +-
 2 files changed, 6 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/03e62aa3/docs/configuration.md
----------------------------------------------------------------------
diff --git a/docs/configuration.md b/docs/configuration.md
index f2443e9..568eca9 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -249,7 +249,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>false</td>
   <td>
     (Experimental) Whether to give user-added jars precedence over Spark's own jars when loading
-    classes in the the driver. This feature can be used to mitigate conflicts between Spark's
+    classes in the driver. This feature can be used to mitigate conflicts between Spark's
     dependencies and user dependencies. It is currently an experimental feature.
 
     This is used in cluster mode only.
@@ -373,7 +373,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>
     Reuse Python worker or not. If yes, it will use a fixed number of Python workers,
     does not need to fork() a Python process for every tasks. It will be very useful
-    if there is large broadcast, then the broadcast will not be needed to transfered
+    if there is large broadcast, then the broadcast will not be needed to transferred
     from JVM to Python worker for every task.
   </td>
 </tr>
@@ -1266,7 +1266,7 @@ Apart from these, the following properties are also available, and may be useful
   <td>
     Comma separated list of users/administrators that have view and modify access to all Spark jobs.
     This can be used if you run on a shared cluster and have a set of administrators or devs who
-    help debug when things work. Putting a "*" in the list means any user can have the priviledge
+    help debug when things work. Putting a "*" in the list means any user can have the privilege
     of admin.
   </td>
 </tr>
@@ -1604,7 +1604,7 @@ Apart from these, the following properties are also available, and may be useful
 #### Deploy
 
 <table class="table">
-  <tr><th>Property Name</th><th>Default</th><th>Meaniing</th></tr>
+  <tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
   <tr>
     <td><code>spark.deploy.recoveryMode</code></td>
     <td>NONE</td>
@@ -1693,7 +1693,7 @@ Spark uses [log4j](http://logging.apache.org/log4j/) for logging. You can config
 # Overriding configuration directory
 
 To specify a different configuration directory other than the default "SPARK_HOME/conf",
-you can set SPARK_CONF_DIR. Spark will use the the configuration files (spark-defaults.conf, spark-env.sh, log4j.properties, etc)
+you can set SPARK_CONF_DIR. Spark will use the configuration files (spark-defaults.conf, spark-env.sh, log4j.properties, etc)
 from this directory.
 
 # Inheriting Hadoop Cluster Configuration
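
For reference, the corrected descriptions above belong to properties that can be set programmatically. Below is a minimal PySpark sketch, assuming the keys these table rows document (`spark.driver.userClassPathFirst`, `spark.python.worker.reuse`, `spark.admin.acls`); the app name and user list are illustrative placeholders, not part of this commit. The `SPARK_CONF_DIR` override in the last hunk is an environment variable read by the launch scripts (e.g. `export SPARK_CONF_DIR=/etc/spark/conf`) rather than an application setting, and `spark.deploy.recoveryMode` applies to the standalone master, so neither appears below.

```python
from pyspark import SparkConf, SparkContext

# Illustrative sketch of the properties documented in the hunks above.
# App name and ACL user names are placeholders, not part of the commit.
conf = (SparkConf()
        .setAppName("configuration-md-sketch")
        # Experimental: user-added jars take precedence over Spark's own
        # when loading classes in the driver (cluster mode only).
        .set("spark.driver.userClassPathFirst", "true")
        # Reuse a fixed pool of Python workers instead of forking one per
        # task, so a large broadcast is not re-transferred for every task.
        .set("spark.python.worker.reuse", "true")
        # Comma-separated admins with view/modify access; "*" would grant
        # the privilege to any user.
        .set("spark.admin.acls", "alice,bob"))

sc = SparkContext(conf=conf)
```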

http://git-wip-us.apache.org/repos/asf/spark/blob/03e62aa3/docs/hardware-provisioning.md
----------------------------------------------------------------------
diff --git a/docs/hardware-provisioning.md b/docs/hardware-provisioning.md
index 7902205..60ecb4f 100644
--- a/docs/hardware-provisioning.md
+++ b/docs/hardware-provisioning.md
@@ -63,7 +63,7 @@ from the application's monitoring UI (`http://<driver-node>:4040`).
 
 # CPU Cores
 
-Spark scales well to tens of CPU cores per machine because it performes minimal sharing between
+Spark scales well to tens of CPU cores per machine because it performs minimal sharing between
 threads. You should likely provision at least **8-16 cores** per machine. Depending on the CPU
 cost of your workload, you may also need more: once data is in memory, most applications are
 either CPU- or network-bound.
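
As a quick, hedged illustration of the guidance above (not part of the commit): a local run pinned to 8 worker threads, the low end of the 8-16 cores recommendation.

```python
from pyspark import SparkContext

# Illustrative only: pin a local run to 8 worker threads, matching the
# low end of the "at least 8-16 cores per machine" guidance above.
sc = SparkContext(master="local[8]", appName="cores-sketch")
```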

