Posted to commits@spark.apache.org by do...@apache.org on 2023/02/05 11:08:14 UTC

[spark] branch master updated: [SPARK-42344][K8S] Change the default size of the CONFIG_MAP_MAXSIZE

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 9ac46408ec9 [SPARK-42344][K8S] Change the default size of the CONFIG_MAP_MAXSIZE
9ac46408ec9 is described below

commit 9ac46408ec943d5121bbc14f2ce0d8b2ff453de5
Author: Yan Wei <ni...@gmail.com>
AuthorDate: Sun Feb 5 03:08:01 2023 -0800

    [SPARK-42344][K8S] Change the default size of the CONFIG_MAP_MAXSIZE
    
    The default value of CONFIG_MAP_MAXSIZE should not be greater than 1048576 bytes.
    
    ### What changes were proposed in this pull request?
    This PR changes the default value of CONFIG_MAP_MAXSIZE from 1572864 bytes (1.5 MiB) to 1048576 bytes (1.0 MiB).
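    
    For illustration, here is a minimal sketch of pinning this setting from user code. It assumes the CONFIG_MAP_MAXSIZE entry is exposed as the property "spark.kubernetes.configMap.maxSize" (the key name is not shown in this diff); with this change, values above 1048576 bytes are rejected.
    
        import org.apache.spark.SparkConf
    
        // Hedged sketch: the property name below is an assumption, not confirmed by this diff.
        val conf = new SparkConf()
          .setAppName("configmap-maxsize-example")
          // Pin the limit to 0.5 MiB; anything above 1048576 bytes now fails validation.
          .set("spark.kubernetes.configMap.maxSize", "524288")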
    
    ### Why are the changes needed?
    When a job is submitted by Spark to Kubernetes with a ConfigMap, spark-submit calls the Kubernetes POST API "api/v1/namespaces/default/configmaps". The size of the ConfigMap is validated by that API, and it must not be greater than 1048576 bytes.
    The earlier comment pointed to the explanation at https://etcd.io/docs/v3.4/dev-guide/limit/:
    "etcd is designed to handle small key value pairs typical for metadata. Larger requests will work, but may increase the latency of other requests. By default, the maximum size of any request is 1.5 MiB. This limit is configurable through --max-request-bytes flag for etcd server."
    That explanation is from the perspective of etcd, not Kubernetes.
    So I think the default ConfigMap size limit in Spark should not be greater than 1048576 bytes.
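    
    As a simplified, standalone illustration of the new guard (not Spark's internal ConfigBuilder API), the check added in this PR behaves roughly like the following sketch:
    
        // The Kubernetes ConfigMap API rejects objects larger than 1 MiB,
        // so the Spark-side default (and any user override) is capped there.
        val k8sConfigMapLimit = 1048576L // 1.0 MiB
    
        def validateConfigMapMaxSize(bytes: Long): Long = {
          require(bytes <= k8sConfigMapLimit, s"Must have at most $k8sConfigMapLimit bytes")
          bytes
        }
    
        validateConfigMapMaxSize(524288L)    // ok: 0.5 MiB
        // validateConfigMapMaxSize(1572864L) // fails: 1.5 MiB exceeds the limit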
    
    ### Does this PR introduce _any_ user-facing change?
    Yes.
    In practice, the ConfigMap size rarely exceeds 1572864 bytes, or even 1048576 bytes,
    so the change made here may not be noticeable to users.
    
    ### How was this patch tested?
    Local test.
    
    Closes #39884 from ninebigbig/master.
    
    Authored-by: Yan Wei <ni...@gmail.com>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
---
 .../core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala       | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
index e76351f6c02..042e9682730 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
@@ -162,7 +162,8 @@ private[spark] object Config extends Logging {
         " https://etcd.io/docs/v3.4.0/dev-guide/limit/ on k8s server end.")
       .version("3.1.0")
       .longConf
-      .createWithDefault(1572864) // 1.5 MiB
+      .checkValue(_ <= 1048576, "Must have at most 1048576 bytes")
+      .createWithDefault(1048576) // 1.0 MiB
 
   val EXECUTOR_ROLL_INTERVAL =
     ConfigBuilder("spark.kubernetes.executor.rollInterval")


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org