Posted to issues@spark.apache.org by "David Figueroa (JIRA)" <ji...@apache.org> on 2018/04/25 20:11:00 UTC

[jira] [Updated] (SPARK-24092) spark.python.worker.reuse does not work?

     [ https://issues.apache.org/jira/browse/SPARK-24092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Figueroa updated SPARK-24092:
-----------------------------------
    Description: 
{{spark.python.worker.reuse}} is true by default, but even after explicitly setting it to true, the code below does not print the same Python worker process IDs for the two runs.
{code:python|title=procid.py|borderStyle=solid}
import os
from pyspark.sql import SparkSession

def return_pid(_):
    # Yield the PID of the Python worker that processed this partition.
    yield os.getpid()

spark = SparkSession.builder.getOrCreate()
pids = set(spark.sparkContext.range(32).mapPartitions(return_pid).collect())
print(pids)
pids = set(spark.sparkContext.range(32).mapPartitions(return_pid).collect())
print(pids){code}
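
The snippet above does not show how the property was set explicitly. Below is a minimal sketch of doing so when building the session; the {{local[*]}} master URL and the app name are assumptions added for a self-contained example, not part of the original report.
{code:python|title=reuse_config.py|borderStyle=solid}
from pyspark.sql import SparkSession

# Assumption: a local run; the master URL and app name are illustrative only.
spark = (SparkSession.builder
         .master("local[*]")
         .appName("worker-reuse-check")
         # Explicitly enable Python worker reuse (it is already the default).
         .config("spark.python.worker.reuse", "true")
         .getOrCreate())

# Confirm the value the driver actually sees.
print(spark.sparkContext.getConf().get("spark.python.worker.reuse"))
{code}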


> spark.python.worker.reuse does not work?
> ----------------------------------------
>
>                 Key: SPARK-24092
>                 URL: https://issues.apache.org/jira/browse/SPARK-24092
>             Project: Spark
>          Issue Type: Question
>          Components: PySpark
>    Affects Versions: 2.3.0
>            Reporter: David Figueroa
>            Priority: Minor
>
> {{spark.python.worker.reuse}} is true by default, but even after explicitly setting it to true, the code below does not print the same Python worker process IDs for the two runs.
> {code:python|title=procid.py|borderStyle=solid}
> import os
> from pyspark.sql import SparkSession
>
> def return_pid(_):
>     # Yield the PID of the Python worker that processed this partition.
>     yield os.getpid()
>
> spark = SparkSession.builder.getOrCreate()
> pids = set(spark.sparkContext.range(32).mapPartitions(return_pid).collect())
> print(pids)
> pids = set(spark.sparkContext.range(32).mapPartitions(return_pid).collect())
> print(pids){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org