Posted to commits@spark.apache.org by we...@apache.org on 2020/02/06 12:59:39 UTC
[spark] branch branch-3.0 updated: [SPARK-26700][CORE][FOLLOWUP] Add config `spark.network.maxRemoteBlockSizeFetchToMem`
This is an automated email from the ASF dual-hosted git repository.
wenchen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.0 by this push:
new 4546f12 [SPARK-26700][CORE][FOLLOWUP] Add config `spark.network.maxRemoteBlockSizeFetchToMem`
4546f12 is described below
commit 4546f128c17b73f8eaaef7524148d588c304a9d4
Author: Yuanjian Li <xy...@gmail.com>
AuthorDate: Thu Feb 6 20:53:44 2020 +0800
[SPARK-26700][CORE][FOLLOWUP] Add config `spark.network.maxRemoteBlockSizeFetchToMem`
### What changes were proposed in this pull request?
Add a new config `spark.network.maxRemoteBlockSizeFetchToMem` that falls back to the old config `spark.maxRemoteBlockSizeFetchToMem`.
### Why are the changes needed?
For naming consistency.
### Does this PR introduce any user-facing change?
No.
### How was this patch tested?
Existing tests.
Closes #27463 from xuanyuanking/SPARK-26700-follow.
Authored-by: Yuanjian Li <xy...@gmail.com>
Signed-off-by: Wenchen Fan <we...@databricks.com>
(cherry picked from commit d8613571bc1847775dd5c1945757279234cb388c)
Signed-off-by: Wenchen Fan <we...@databricks.com>
---
core/src/main/scala/org/apache/spark/SparkConf.scala | 3 ++-
core/src/main/scala/org/apache/spark/internal/config/package.scala | 2 +-
docs/configuration.md | 2 +-
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/core/src/main/scala/org/apache/spark/SparkConf.scala b/core/src/main/scala/org/apache/spark/SparkConf.scala
index 0e0291d..40915e3 100644
--- a/core/src/main/scala/org/apache/spark/SparkConf.scala
+++ b/core/src/main/scala/org/apache/spark/SparkConf.scala
@@ -684,7 +684,8 @@ private[spark] object SparkConf extends Logging {
"spark.yarn.jars" -> Seq(
AlternateConfig("spark.yarn.jar", "2.0")),
MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM.key -> Seq(
- AlternateConfig("spark.reducer.maxReqSizeShuffleToMem", "2.3")),
+ AlternateConfig("spark.reducer.maxReqSizeShuffleToMem", "2.3"),
+ AlternateConfig("spark.maxRemoteBlockSizeFetchToMem", "3.0")),
LISTENER_BUS_EVENT_QUEUE_CAPACITY.key -> Seq(
AlternateConfig("spark.scheduler.listenerbus.eventqueue.size", "2.3")),
DRIVER_MEMORY_OVERHEAD.key -> Seq(
diff --git a/core/src/main/scala/org/apache/spark/internal/config/package.scala b/core/src/main/scala/org/apache/spark/internal/config/package.scala
index f91f31b..02acb6b 100644
--- a/core/src/main/scala/org/apache/spark/internal/config/package.scala
+++ b/core/src/main/scala/org/apache/spark/internal/config/package.scala
@@ -895,7 +895,7 @@ package object config {
.createWithDefault(Int.MaxValue)
private[spark] val MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM =
- ConfigBuilder("spark.maxRemoteBlockSizeFetchToMem")
+ ConfigBuilder("spark.network.maxRemoteBlockSizeFetchToMem")
.doc("Remote block will be fetched to disk when size of the block is above this threshold " +
"in bytes. This is to avoid a giant request takes too much memory. Note this " +
"configuration will affect both shuffle fetch and block manager remote block fetch. " +
diff --git a/docs/configuration.md b/docs/configuration.md
index 2febfe9..5bd3f3e 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -1810,7 +1810,7 @@ Apart from these, the following properties are also available, and may be useful
</td>
</tr>
<tr>
- <td><code>spark.maxRemoteBlockSizeFetchToMem</code></td>
+ <td><code>spark.network.maxRemoteBlockSizeFetchToMem</code></td>
<td>200m</td>
<td>
Remote block will be fetched to disk when size of the block is above this threshold
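For illustration, here is how the renamed key behaves from a user's perspective. This is a minimal sketch, not part of the patch: because the old name is registered as an `AlternateConfig`, a job that still sets `spark.maxRemoteBlockSizeFetchToMem` resolves to the same setting, with a deprecation warning logged.

```scala
import org.apache.spark.SparkConf

// Preferred: the new, network-prefixed name introduced by this patch.
val conf = new SparkConf()
  .set("spark.network.maxRemoteBlockSizeFetchToMem", "200m")

// Still works: SparkConf translates the deprecated name to the new key
// via the AlternateConfig entry added in this commit, so both reads
// below return the same value.
val legacy = new SparkConf()
  .set("spark.maxRemoteBlockSizeFetchToMem", "200m")
```

Note the fallback is one-directional: new code reads `MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM` under its new key, and the old key is consulted only when the new one is unset.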
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org