Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:33:47 UTC

[jira] [Resolved] (SPARK-16263) SparkSession caches configuration in an unintuitive global way

     [ https://issues.apache.org/jira/browse/SPARK-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-16263.
----------------------------------
    Resolution: Incomplete

> SparkSession caches configuration in an unintuitive global way
> ---------------------------------------------------------------
>
>                 Key: SPARK-16263
>                 URL: https://issues.apache.org/jira/browse/SPARK-16263
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>            Reporter: Vladimir Feinberg
>            Priority: Minor
>              Labels: bulk-closed
>
> The following use case demonstrates the issue. Note that as a workaround to SPARK-16262 I use {{reset_spark()}} to stop the current {{SparkSession}}.
> {code} 
> >>> from pyspark.sql import SparkSession
> >>> def reset_spark(): global spark; spark.stop(); SparkSession._instantiatedContext = None
> ... 
> >>> spark = SparkSession.builder.getOrCreate()
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel).
> 16/06/28 11:41:36 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 16/06/28 11:41:36 WARN Utils: Your hostname, vlad-databricks resolves to a loopback address: 127.0.1.1; using 192.168.3.166 instead (on interface enp0s31f6)
> 16/06/28 11:41:36 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
> >>> spark.conf.get("spark.sql.retainGroupColumns")
> u'true'
> >>> reset_spark()
> >>> spark = SparkSession.builder.config("spark.sql.retainGroupColumns", "false").getOrCreate()
> >>> spark.conf.get("spark.sql.retainGroupColumns")
> u'false'
> >>> reset_spark()
> >>> spark = SparkSession.builder.getOrCreate()
> >>> spark.conf.get("spark.sql.retainGroupColumns")
> u'false'
> >>> 
> {code}
> The last line should output {{u'true'}} instead: there is no expectation that global config state persists across sessions. Each new session should start from the default configuration unless its own builder explicitly deviates from it.
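> A minimal workaround sketch, given the caching behavior above (the names {{expected_defaults}} and {{fresh_session}} are hypothetical, for illustration only): pin every config the application relies on explicitly in the builder, so a value cached from a previous session can never leak into a new one.
> {code}
> from pyspark.sql import SparkSession
>
> # Hypothetical set of configs this application depends on; setting them
> # explicitly on every builder guards against values cached from a
> # previous session.
> expected_defaults = {"spark.sql.retainGroupColumns": "true"}
>
> def fresh_session(overrides=None):
>     """Build a session with all relied-upon configs set explicitly."""
>     merged = dict(expected_defaults)
>     merged.update(overrides or {})
>     builder = SparkSession.builder
>     for key, value in merged.items():
>         builder = builder.config(key, value)
>     return builder.getOrCreate()
>
> spark = fresh_session()
> spark.conf.get("spark.sql.retainGroupColumns")  # u'true', regardless of prior sessions
> {code}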


