Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/01/09 22:01:01 UTC

[jira] [Commented] (SPARK-19138) Python: new HiveContext will use a stopped SparkContext

    [ https://issues.apache.org/jira/browse/SPARK-19138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15812984#comment-15812984 ] 

Apache Spark commented on SPARK-19138:
--------------------------------------

User 'rdblue' has created a pull request for this issue:
https://github.com/apache/spark/pull/16519

> Python: new HiveContext will use a stopped SparkContext
> -------------------------------------------------------
>
>                 Key: SPARK-19138
>                 URL: https://issues.apache.org/jira/browse/SPARK-19138
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>            Reporter: Ryan Blue
>
> We have users who run a notebook cell that creates a new SparkContext to override some of the default initial parameters:
> {code:lang=python}
> from pyspark import SparkConf, SparkContext
> from pyspark.sql import HiveContext
>
> if 'sc' in globals():
>     # Stop the running SparkContext, if there is one.
>     sc.stop()
> conf = SparkConf().setAppName("app")
> # conf.set('spark.sql.shuffle.partitions', '2000')
> sc = SparkContext(conf=conf)
> sqlContext = HiveContext(sc)
> {code}
> In Spark 2.0, this creates an invalid SQLContext that still uses the original, stopped SparkContext, because the [HiveContext constructor|https://github.com/apache/spark/blob/master/python/pyspark/sql/context.py#L514] calls SparkSession.getOrCreate, which returns the cached session holding the old SparkContext. A SparkSession whose SparkContext has been stopped should be invalidated and no longer returned by getOrCreate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org