Posted to commits@spark.apache.org by ge...@apache.org on 2021/08/19 08:30:14 UTC

[spark] branch branch-3.2 updated: [SPARK-35083][FOLLOW-UP][CORE] Add migration guide for the remote scheduler pool files support

This is an automated email from the ASF dual-hosted git repository.

gengliang pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.2 by this push:
     new 9544c24  [SPARK-35083][FOLLOW-UP][CORE] Add migration guide for the remote scheduler pool files support
9544c24 is described below

commit 9544c24560bdaf125560ee9b36e3b79374385f2f
Author: yi.wu <yi...@databricks.com>
AuthorDate: Thu Aug 19 16:28:59 2021 +0800

    [SPARK-35083][FOLLOW-UP][CORE] Add migration guide for the remote scheduler pool files support
    
    ### What changes were proposed in this pull request?
    
    Add a note about the remote scheduler pool file support to the migration guide.
    
    ### Why are the changes needed?
    
    To highlight this useful feature.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Pass existing tests.
    
    Closes #33785 from Ngone51/SPARK-35083-follow-up.
    
    Lead-authored-by: yi.wu <yi...@databricks.com>
    Co-authored-by: wuyi <yi...@databricks.com>
    Signed-off-by: Gengliang Wang <ge...@apache.org>
    (cherry picked from commit e3902d1975ee6a6a6f672eb6b4f318bcdd237e3f)
    Signed-off-by: Gengliang Wang <ge...@apache.org>
---
 docs/core-migration-guide.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/core-migration-guide.md b/docs/core-migration-guide.md
index 1dee502..02ed430 100644
--- a/docs/core-migration-guide.md
+++ b/docs/core-migration-guide.md
@@ -24,6 +24,8 @@ license: |
 
 ## Upgrading from Core 3.1 to 3.2
 
+- Since Spark 3.2, the fair scheduler also supports reading its configuration file from a remote location: `spark.scheduler.allocation.file` can be either a local file path or an HDFS file path.
+
 - Since Spark 3.2, `spark.hadoopRDD.ignoreEmptySplits` is set to `true` by default, which means Spark will not create empty partitions for empty input splits. To restore the behavior before Spark 3.2, you can set `spark.hadoopRDD.ignoreEmptySplits` to `false`.
 
 - Since Spark 3.2, `spark.eventLog.compression.codec` is set to `zstd` by default, which means Spark will no longer fall back to `spark.io.compression.codec`.
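
For the new option documented above, here is a minimal sketch of how an application might use it. The app name, namenode host, HDFS path, and pool name are hypothetical illustrations, not taken from the commit:

    import org.apache.spark.{SparkConf, SparkContext}

    // Enable the fair scheduler and point it at a pool configuration file.
    // Since Spark 3.2 this path may be remote (e.g. on HDFS) instead of a
    // file on the driver's local filesystem.
    val conf = new SparkConf()
      .setAppName("fair-scheduler-remote-pools")
      .set("spark.scheduler.mode", "FAIR")
      .set("spark.scheduler.allocation.file",
           "hdfs://namenode:8020/conf/fairscheduler.xml")

    val sc = new SparkContext(conf)

    // Jobs submitted from this thread are then assigned to a pool defined
    // in that file ("production" is assumed to be declared in the XML).
    sc.setLocalProperty("spark.scheduler.pool", "production")

Previously the allocation file had to be available on the driver's local filesystem, so every deployment shipped its own copy; a remote path lets applications across a cluster share one pool definition.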

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org