Posted to issues@spark.apache.org by "Manish Gupta (Jira)" <ji...@apache.org> on 2021/07/20 10:57:00 UTC

[jira] [Commented] (SPARK-30002) Reuse SparkSession in pyspark via Gateway

    [ https://issues.apache.org/jira/browse/SPARK-30002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17384070#comment-17384070 ] 

Manish Gupta commented on SPARK-30002:
--------------------------------------

We recently ran into this use case: we wanted to run a Scala Spark job and a PySpark job in Spark local mode. Once the Scala Spark job had created a SparkContext, the PySpark job did not reuse it and failed while trying to create a new SparkContext of its own.

The fix suggested by [~skp33] worked for us.

[~hyukjin.kwon] can we consider this fix? We are using Spark version 3.0.1.
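
For anyone else hitting this, here is a minimal sketch of how the change described below can be applied at runtime without rebuilding Spark, by overriding SparkContext._initialize_context before the first SparkContext is created (a hypothetical monkey patch for illustration, not an officially supported workaround):

{code:python}
# Hypothetical runtime patch (illustration only, not part of Spark): override
# SparkContext._initialize_context so the Python driver reuses a SparkContext
# that already exists in the JVM it attached to via the Py4J gateway.
from pyspark import SparkConf, SparkContext


def _initialize_context(self, jconf):
    # Same idea as the change proposed in this ticket: ask the JVM for an
    # existing SparkContext (or create one) instead of always constructing
    # a brand-new JavaSparkContext.
    return self._jvm.JavaSparkContext(
        self._jvm.org.apache.spark.SparkContext.getOrCreate(jconf))


SparkContext._initialize_context = _initialize_context

# After the patch, this picks up the JVM-side SparkContext if one is running;
# the local master is only a fallback for running the sketch standalone.
sc = SparkContext(conf=SparkConf().setMaster("local[*]").setAppName("reuse-jvm-context"))
{code}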

> Reuse SparkSession in pyspark via Gateway
> -----------------------------------------
>
>                 Key: SPARK-30002
>                 URL: https://issues.apache.org/jira/browse/SPARK-30002
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.1.0
>            Reporter: Kaushal Prajapati
>            Priority: Minor
>
> In PySpark, we create the SparkContext from the user-provided Spark configuration (or the defaults), and it is launched internally through a Py4J gateway.
> Say I have launched the Py4J gateway from another application; then, to communicate with that same gateway, I have to set the configuration below:
>  
> {code:java}
> export PYSPARK_GATEWAY_PORT=12345
> export PYSPARK_GATEWAY_SECRET=***********************
> {code}
>  
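> A minimal sketch of that setup (the port and secret are placeholders for whatever the launching application reports; the variables can equally be exported in the shell, as above):
> {code:python}
> # Sketch: attach a Python process to a Py4J gateway started by another
> # application. The port/secret values below are placeholders.
> import os
> os.environ["PYSPARK_GATEWAY_PORT"] = "12345"
> os.environ["PYSPARK_GATEWAY_SECRET"] = "<secret-from-the-launcher>"
>
> from pyspark import SparkConf, SparkContext
>
> # PySpark now connects to the existing gateway instead of spawning its own
> # JVM, but it will still try to construct a brand-new JavaSparkContext in
> # that JVM and fail if a SparkContext is already running there.
> sc = SparkContext(conf=SparkConf().setMaster("local[*]").setAppName("attach-example"))
> {code}
>  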
> So when PySpark tries to create its own SparkContext after the communication has been set up, it does not check whether a SparkContext is already available in the same JVM.
> Current code snippet:
>  
> {code:java}
> def _initialize_context(self, jconf):
>     """
>     Initialize SparkContext in function to allow subclass specific initialization
>     """
>     return self._jvm.JavaSparkContext(jconf)
> {code}
>  
>  
> After changing it to the following, it works fine for me.
> {code:java}
> def _initialize_context(self, jconf):
>     """
>     Initialize SparkContext in function to allow subclass specific initialization
>     """
>     return self._jvm.JavaSparkContext(self._jvm.org.apache.spark.SparkContext.getOrCreate(jconf))
> {code}
>  
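> A note on why the JVM-side getOrCreate matters (a small illustration, not part of the patch): PySpark's own SparkContext.getOrCreate only checks for an active SparkContext inside the Python process, so it cannot see a context that another (e.g. Scala) application has already created in the shared JVM.
> {code:python}
> # pyspark.SparkContext.getOrCreate returns the active context of *this*
> # Python process; it does not consult the JVM for a pre-existing one.
> from pyspark import SparkConf, SparkContext
>
> conf = SparkConf().setMaster("local[*]").setAppName("getOrCreate-demo")
> sc1 = SparkContext.getOrCreate(conf)
> sc2 = SparkContext.getOrCreate(conf)
> assert sc1 is sc2  # same Python-side object; no second context is created
> {code}
>  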
> It looks like a good use case for improvement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org