Posted to commits@spark.apache.org by we...@apache.org on 2020/02/20 16:54:38 UTC

[spark] branch branch-3.0 updated: [SPARK-30892][SQL] Exclude `spark.sql.variable.substitute.depth` from `removedSQLConfigs`

This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new 45f2155  [SPARK-30892][SQL] Exclude `spark.sql.variable.substitute.depth` from `removedSQLConfigs`
45f2155 is described below

commit 45f2155654d1a219379737f7ee51311fa73a30d5
Author: Maxim Gekk <ma...@gmail.com>
AuthorDate: Fri Feb 21 00:44:09 2020 +0800

    [SPARK-30892][SQL] Exclude `spark.sql.variable.substitute.depth` from `removedSQLConfigs`
    
    ### What changes were proposed in this pull request?
    
    Exclude the SQL config `spark.sql.variable.substitute.depth` from `SQLConf.removedSQLConfigs`
    
    ### Why are the changes needed?
    In #27169, the config was added to `SQLConf.removedSQLConfigs`. As a consequence, when a user sets it to a non-default value (1, for example), they get an exception. That is acceptable for SQL configs that can impact behavior, but not for this particular config: raising such an exception only makes migration to Spark 3.0 more difficult.
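    
    To illustrate the mechanism, here is a minimal sketch (not the actual SQLConf code; `RemovedConfigCheck`, `checkSet`, and the exception type are assumptions for illustration, while the `RemovedConfig(key, version, defaultValue, comment)` shape follows the diff below) of how a removed-config registry rejects non-default values at set time:
    
        // Sketch: a registry of removed configs that rejects any attempt
        // to set a removed key to a value other than its old default.
        case class RemovedConfig(key: String, version: String, defaultValue: String, comment: String)
    
        object RemovedConfigCheck {
          // Hypothetical registry; the real list lives in SQLConf.removedSQLConfigs.
          val removedSQLConfigs: Map[String, RemovedConfig] = Seq(
            RemovedConfig("spark.sql.parquet.int64AsTimestampMillis", "3.0.0", "false",
              "The config was deprecated since Spark 2.3.")
          ).map(c => c.key -> c).toMap
    
          def checkSet(key: String, value: String): Unit = {
            removedSQLConfigs.get(key).foreach { cfg =>
              // Setting the old default is tolerated; anything else fails loudly.
              if (value != cfg.defaultValue) {
                throw new IllegalArgumentException(
                  s"The SQL config '${cfg.key}' was removed in version ${cfg.version}. ${cfg.comment}")
              }
            }
          }
        }
    
    Under this scheme, excluding a key from the registry (as this patch does for `spark.sql.variable.substitute.depth`) means any value for it is accepted again.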
    
    ### Does this PR introduce any user-facing change?
    Yes. Before the changes, users got an exception when they set `spark.sql.variable.substitute.depth` to a value different from `40`.
    
    ### How was this patch tested?
    Ran `spark.conf.set("spark.sql.variable.substitute.depth", 1)` in `spark-shell` and verified it no longer throws an exception.
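    
    For reference, the session looks roughly like this (the error message is reconstructed from the `RemovedConfig` entry removed in the diff below, not a verbatim transcript):
    
        scala> spark.conf.set("spark.sql.variable.substitute.depth", 1)
        // Before this patch (approximate message):
        //   org.apache.spark.sql.AnalysisException: The SQL config
        //   'spark.sql.variable.substitute.depth' was removed in the version 3.0.0.
        //   It was deprecated since Spark 2.1, and not used in Spark 2.4.
        // After this patch: the call returns normally and the value is stored.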
    
    Closes #27646 from MaxGekk/remove-substitute-depth-conf.
    
    Authored-by: Maxim Gekk <ma...@gmail.com>
    Signed-off-by: Wenchen Fan <we...@databricks.com>
    (cherry picked from commit bb40ab09f4bcb0fd94b0649442cc0daa1162207d)
    Signed-off-by: Wenchen Fan <we...@databricks.com>
---
 sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala | 2 --
 1 file changed, 2 deletions(-)

diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index 2d72344..bcec22f 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -2264,8 +2264,6 @@ object SQLConf {
         "It was removed to prevent loosing of users data for non-default value."),
       RemovedConfig("spark.sql.legacy.compareDateTimestampInTimestamp", "3.0.0", "true",
         "It was removed to prevent errors like SPARK-23549 for non-default value."),
-      RemovedConfig("spark.sql.variable.substitute.depth", "3.0.0", "40",
-        "It was deprecated since Spark 2.1, and not used in Spark 2.4."),
       RemovedConfig("spark.sql.parquet.int64AsTimestampMillis", "3.0.0", "false",
         "The config was deprecated since Spark 2.3." +
         s"Use '${PARQUET_OUTPUT_TIMESTAMP_TYPE.key}' instead of it."),


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org