Posted to issues@spark.apache.org by "Ryan Blue (JIRA)" <ji...@apache.org> on 2017/01/09 21:28:58 UTC

[jira] [Created] (SPARK-19138) Python: new HiveContext will use a stopped SparkContext

Ryan Blue created SPARK-19138:
---------------------------------

             Summary: Python: new HiveContext will use a stopped SparkContext
                 Key: SPARK-19138
                 URL: https://issues.apache.org/jira/browse/SPARK-19138
             Project: Spark
          Issue Type: Bug
          Components: PySpark
            Reporter: Ryan Blue


We have users who run a notebook cell that creates a new SparkContext to override some of the default initial parameters:

{code:lang=python}
from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

if 'sc' in globals():
    # Stop the running SparkContext, if there is one.
    sc.stop()

conf = SparkConf().setAppName("app")
# conf.set('spark.sql.shuffle.partitions', '2000')
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)
{code}

In Spark 2.0, this creates an invalid SQLContext that still uses the original SparkContext, because the [HiveContext constructor|https://github.com/apache/spark/blob/master/python/pyspark/sql/context.py#L514] calls SparkSession.builder.getOrCreate, which returns the cached session that still holds the old, now-stopped SparkContext. A SparkSession should be invalidated and no longer returned by getOrCreate once its SparkContext has been stopped.
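
As a sketch of the proposed invalidation (illustration only, not a patch): getOrCreate could treat the cached session as stale when its context has been stopped. This assumes two internal PySpark details: the default session is cached on the SparkSession class (named {{_instantiatedSession}} in recent 2.x code, {{_instantiatedContext}} in some earlier releases), and SparkContext.stop() clears the JVM handle so {{sc._jsc}} becomes None.

{code:lang=python}
# Illustration only, not the actual patch. Assumes the internal attributes
# SparkSession._instantiatedSession (called _instantiatedContext in some 2.x
# releases) and SparkContext._jsc, and that SparkContext.stop() sets _jsc
# to None.
from pyspark import SparkContext
from pyspark.sql import SparkSession

def _is_stale(session):
    """True if there is no cached session or its SparkContext was stopped."""
    return session is None or session._sc._jsc is None

def get_or_create(sc):
    session = SparkSession._instantiatedSession  # cached default session
    if _is_stale(session):
        # Invalidate the stale session and build a new one on the live
        # context, instead of blindly returning the cached session.
        session = SparkSession(sc)
    return session
{code}

With a check like this, the notebook pattern above would get a HiveContext bound to the new SparkContext rather than the stopped one.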


