Posted to issues@spark.apache.org by "Shixiong Zhu (JIRA)" <ji...@apache.org> on 2019/01/11 19:48:00 UTC
[jira] [Resolved] (SPARK-26586) Streaming queries should have isolated SparkSessions and confs
[ https://issues.apache.org/jira/browse/SPARK-26586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Shixiong Zhu resolved SPARK-26586.
----------------------------------
Resolution: Fixed
Assignee: Mukul Murthy
Fix Version/s: 3.0.0, 2.4.1
> Streaming queries should have isolated SparkSessions and confs
> --------------------------------------------------------------
>
> Key: SPARK-26586
> URL: https://issues.apache.org/jira/browse/SPARK-26586
> Project: Spark
> Issue Type: Bug
> Components: SQL, Structured Streaming
> Affects Versions: 2.3.0, 2.4.0
> Reporter: Mukul Murthy
> Assignee: Mukul Murthy
> Priority: Major
> Fix For: 2.4.1, 3.0.0
>
>
> When a stream is started, the stream's config is supposed to be frozen, and all batches should run with the config values captured at start time. However, due to a race condition in stream creation, updating a conf value in the active SparkSession immediately after starting a stream can cause the stream to pick up the updated value.
>
> The problem is that when StreamingQueryManager creates a MicroBatchExecution (or ContinuousExecution), it passes in the shared SparkSession, and the session isn't cloned until StreamExecution.start() runs asynchronously. The fix is that DataStreamWriter.start() should not return until the SparkSession has been cloned.
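The race described above can be sketched with plain stand-in classes (a minimal sketch: Session, StreamQuery, and IsolationDemo below are hypothetical stand-ins, not real Spark APIs; the real counterparts are SparkSession, StreamExecution, and DataStreamWriter.start()):

```scala
import scala.collection.mutable

// Hypothetical stand-in for SparkSession: a mutable conf plus a clone method.
class Session(initial: Map[String, String]) {
  val conf = mutable.Map(initial.toSeq: _*)
  // Snapshot the conf so later mutations to the original don't leak in.
  def cloneSession(): Session = new Session(conf.toMap)
}

// Hypothetical stand-in for a running streaming query.
class StreamQuery(session: Session) {
  def confValue(key: String): Option[String] = session.conf.get(key)
}

object IsolationDemo {
  // Buggy shape: the shared session is captured directly; cloning would
  // only happen later (asynchronously, as in StreamExecution.start()).
  def startUnisolated(shared: Session): StreamQuery =
    new StreamQuery(shared)

  // Fixed shape: clone synchronously before start() returns, pinning the
  // query to the conf values as of start time.
  def startIsolated(shared: Session): StreamQuery =
    new StreamQuery(shared.cloneSession())

  def main(args: Array[String]): Unit = {
    val shared = new Session(Map("spark.sql.shuffle.partitions" -> "200"))
    val bad  = startUnisolated(shared)
    val good = startIsolated(shared)
    // A conf update immediately after starting the streams...
    shared.conf("spark.sql.shuffle.partitions") = "10"
    // ...leaks into the unisolated query but not the isolated one.
    println(bad.confValue("spark.sql.shuffle.partitions"))  // prints Some(10)
    println(good.confValue("spark.sql.shuffle.partitions")) // prints Some(200)
  }
}
```

The design point is that the snapshot must happen on the caller's thread before start() returns; deferring it to the asynchronous stream-startup path leaves a window in which the shared session can still be mutated.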
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org