Posted to reviews@spark.apache.org by "HyukjinKwon (via GitHub)" <gi...@apache.org> on 2023/05/18 10:51:49 UTC

[GitHub] [spark] HyukjinKwon opened a new pull request, #41215: [SPARK-43574][PYTHON] Support to set Python executable in workers during runtime

HyukjinKwon opened a new pull request, #41215:
URL: https://github.com/apache/spark/pull/41215

   ### What changes were proposed in this pull request?
   
   This PR proposes a new configuration, `spark.sql.execution.pyspark.python`, that sets the Python executable used by workers at runtime.
   
   Note that if the Python executable differs from the one previously used, new Python worker processes are created. Python workers are reused, but the cache is keyed by both the executable path and the environment variables:
   
   https://github.com/apache/spark/blob/d7a8b852eaa6cc04df1eea0018a9b9de29b1c4fe/core/src/main/scala/org/apache/spark/SparkEnv.scala#L123-L124
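
   As a rough illustration of this reuse (a Python sketch, not Spark's actual Scala implementation; `start_python_worker` is a hypothetical stand-in), workers are cached in a map keyed by the executable path and environment variables, so a different executable is a cache miss that spawns a fresh worker:

   ```python
   # Hypothetical sketch of the worker-reuse keying; not Spark's actual code.
   import subprocess

   worker_cache = {}

   def start_python_worker(python_exec, env_vars):
       # Stand-in for spawning a daemon worker process.
       return subprocess.Popen([python_exec, "-c", "pass"], env=dict(env_vars))

   def get_or_create_worker(python_exec, env_vars):
       # Workers are cached per (executable, environment); changing either
       # yields a new key and hence a new worker process.
       key = (python_exec, frozenset(env_vars.items()))
       if key not in worker_cache:
           worker_cache[key] = start_python_worker(python_exec, env_vars)
       return worker_cache[key]
   ```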
   
   This PR also lays the groundwork for Spark Connect to support a different set of dependencies.
   
   ### Why are the changes needed?
   
   This is especially useful when you want to run your Python UDFs with a different set of dependencies at runtime (see also https://www.databricks.com/blog/2020/12/22/how-to-manage-python-dependencies-in-pyspark.html).
   
   ### Does this PR introduce _any_ user-facing change?
   
   No. This PR adds a configuration, but it is internal for now.
   
   ### How was this patch tested?
   
   Manually tested as follows:
   
   ```python
   import sys
   from pyspark.sql.functions import udf

   # Show the Python executable currently used by the workers.
   spark.range(1).select(udf(lambda x: sys.executable)("id")).show(truncate=False)

   # Switch the worker Python executable at runtime, then check again.
   spark.conf.set("spark.sql.execution.pyspark.python", "/Users/hyukjin.kwon/miniconda3/envs/another-python/bin/python")
   spark.range(1).select(udf(lambda x: sys.executable)("id")).show(truncate=False)
   ```
   
   ```
   +------------------------------------------+
   |<lambda>(id)                              |
   +------------------------------------------+
   |/.../miniconda3/envs/python3.9/bin/python3|
   +------------------------------------------+

   +----------------------------------------------+
   |<lambda>(id)                                  |
   +----------------------------------------------+
   |/.../miniconda3/envs/another-python/bin/python|
   +----------------------------------------------+
   ```
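
   To switch back to the default executable afterwards, unsetting the configuration should work (a usage sketch, assuming standard `RuntimeConfig` behavior; not explicitly tested above):

   ```python
   # Assumption: unsetting the runtime conf reverts workers to the default executable.
   spark.conf.unset("spark.sql.execution.pyspark.python")
   spark.range(1).select(udf(lambda x: sys.executable)("id")).show(truncate=False)
   ```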



[GitHub] [spark] dongjoon-hyun closed pull request #41215: [SPARK-43574][PYTHON][SQL] Support to set Python executable for UDF and pandas function APIs in workers during runtime

Posted by "dongjoon-hyun (via GitHub)" <gi...@apache.org>.
dongjoon-hyun closed pull request #41215: [SPARK-43574][PYTHON][SQL] Support to set Python executable for UDF and pandas function APIs in workers during runtime
URL: https://github.com/apache/spark/pull/41215

