Posted to commits@spark.apache.org by sr...@apache.org on 2017/08/01 18:05:59 UTC
spark git commit: [SPARK-21593][DOCS] Fix 2 rendering errors on configuration page
Repository: spark
Updated Branches:
refs/heads/master 74cda94c5 -> b1d59e60d
[SPARK-21593][DOCS] Fix 2 rendering errors on configuration page
## What changes were proposed in this pull request?
Fix 2 rendering errors on configuration doc page, due to SPARK-21243 and SPARK-15355.
## How was this patch tested?
Manually built and viewed docs with jekyll
Author: Sean Owen <so...@cloudera.com>
Closes #18793 from srowen/SPARK-21593.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b1d59e60
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b1d59e60
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/b1d59e60
Branch: refs/heads/master
Commit: b1d59e60dee2a41f8eff8ef29b3bcac69111e2f0
Parents: 74cda94
Author: Sean Owen <so...@cloudera.com>
Authored: Tue Aug 1 19:05:55 2017 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Tue Aug 1 19:05:55 2017 +0100
----------------------------------------------------------------------
docs/configuration.md | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/spark/blob/b1d59e60/docs/configuration.md
----------------------------------------------------------------------
diff --git a/docs/configuration.md b/docs/configuration.md
index 500f980..011d583 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -536,15 +536,17 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
- <td><code>spark.reducer.maxBlocksInFlightPerAddress</code></td>
- <td>Int.MaxValue</td>
- <td>
- This configuration limits the number of remote blocks being fetched per reduce task from a
- given host port. When a large number of blocks are being requested from a given address in a
- single fetch or simultaneously, this could crash the serving executor or Node Manager. This
- is especially useful to reduce the load on the Node Manager when external shuffle is enabled.
- You can mitigate this issue by setting it to a lower value.
- </td>
+ <td><code>spark.reducer.maxBlocksInFlightPerAddress</code></td>
+ <td>Int.MaxValue</td>
+ <td>
+ This configuration limits the number of remote blocks being fetched per reduce task from a
+ given host port. When a large number of blocks are being requested from a given address in a
+ single fetch or simultaneously, this could crash the serving executor or Node Manager. This
+ is especially useful to reduce the load on the Node Manager when external shuffle is enabled.
+ You can mitigate this issue by setting it to a lower value.
+ </td>
+</tr>
+<tr>
<td><code>spark.reducer.maxReqSizeShuffleToMem</code></td>
<td>Long.MaxValue</td>
<td>
@@ -1081,7 +1083,7 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
- <td><code>spark.storage.replication.proactive<code></td>
+ <td><code>spark.storage.replication.proactive</code></td>
<td>false</td>
<td>
Enables proactive block replication for RDD blocks. Cached RDD block replicas lost due to
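For context, the two properties whose table rows this patch repairs are ordinary Spark configuration entries. A minimal sketch of setting them via a `spark-defaults.conf` fragment follows; the values are illustrative placeholders, not tuning recommendations from the patch:

```shell
# Sketch: write the two configuration properties fixed on the docs page
# into a spark-defaults.conf fragment. The values below are illustrative
# only -- the actual defaults are Int.MaxValue and false respectively.
cat > /tmp/spark-defaults-fragment.conf <<'EOF'
spark.reducer.maxBlocksInFlightPerAddress  64
spark.storage.replication.proactive        true
EOF
cat /tmp/spark-defaults-fragment.conf
```

The same keys can equally be passed per-job with `spark-submit --conf key=value`.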
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org