Posted to commits@spark.apache.org by do...@apache.org on 2022/07/25 21:34:23 UTC

[spark] branch master updated: [SPARK-39861][PYTHON][DOCS] Deprecate `Python 3.7` Support

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 2f812c03c89 [SPARK-39861][PYTHON][DOCS] Deprecate `Python 3.7` Support
2f812c03c89 is described below

commit 2f812c03c89d7057fd86aee0186795dcbcb7cffb
Author: Dongjoon Hyun <do...@apache.org>
AuthorDate: Mon Jul 25 14:33:39 2022 -0700

    [SPARK-39861][PYTHON][DOCS] Deprecate `Python 3.7` Support
    
    ### What changes were proposed in this pull request?
    
    This PR aims to deprecate Python 3.7 support in Apache Spark 3.4.0.
    
    ### Why are the changes needed?
    
    Apache Spark 3.4 will be released in February 2023 and will be supported for the next 18 months.
    - https://spark.apache.org/versioning-policy.html
    
    Python 3.7 will reach `End Of Support` on 2023-06-27, before Spark 3.4.1 is released. Although Apache Spark 3.4 will still work with Python 3.7 for a while, there will be no proper and official way to support Python 3.7 after that date.
    - https://www.python.org/downloads/
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes, but users will only see a deprecation warning in the logs and in the documentation; a short usage sketch follows the commit message below.
    
    ### How was this patch tested?
    
    Pass the CIs.
    
    Closes #37279 from dongjoon-hyun/SPARK-39861.
    
    Authored-by: Dongjoon Hyun <do...@apache.org>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
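
For illustration, a minimal sketch (not part of the patch) of what a user on a
Python 3.7 interpreter would see once this change lands, assuming a local
PySpark build that includes it:

    from pyspark import SparkContext

    # Creating a SparkContext on Python 3.7 triggers the new version check
    # added to SparkContext.__init__ (see the diff below).
    sc = SparkContext("local[1]", "deprecation-demo")
    # Expected warning on stderr:
    #   FutureWarning: Python 3.7 support is deprecated in Spark 3.4.
    sc.stop()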
---
 docs/index.md                 | 1 +
 docs/rdd-programming-guide.md | 1 +
 python/pyspark/context.py     | 8 ++++++++
 3 files changed, 10 insertions(+)

diff --git a/docs/index.md b/docs/index.md
index c6caf31d560..9c5b1e09658 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -40,6 +40,7 @@ source, visit [Building Spark](building-spark.html).
 Spark runs on both Windows and UNIX-like systems (e.g. Linux, Mac OS), and it should run on any platform that runs a supported version of Java. This should include JVMs on x86_64 and ARM64. It's easy to run locally on one machine --- all you need is to have `java` installed on your system `PATH`, or the `JAVA_HOME` environment variable pointing to a Java installation.
 
 Spark runs on Java 8/11/17, Scala 2.12/2.13, Python 3.7+ and R 3.5+.
+Python 3.7 support is deprecated as of Spark 3.4.0.
 Java 8 prior to version 8u201 support is deprecated as of Spark 3.2.0.
 For the Scala API, Spark {{site.SPARK_VERSION}}
 uses Scala {{site.SCALA_BINARY_VERSION}}. You will need to use a compatible Scala version
diff --git a/docs/rdd-programming-guide.md b/docs/rdd-programming-guide.md
index 4234eb6365f..275c5ccf433 100644
--- a/docs/rdd-programming-guide.md
+++ b/docs/rdd-programming-guide.md
@@ -106,6 +106,7 @@ so C libraries like NumPy can be used. It also works with PyPy 7.3.6+.
 
 Python 2, 3.4 and 3.5 supports were removed in Spark 3.1.0.
 Python 3.6 support was removed in Spark 3.3.0.
+Python 3.7 support is deprecated as of Spark 3.4.0.
 
 Spark applications in Python can either be run with the `bin/spark-submit` script which includes Spark at runtime, or by including it in your setup.py as:
 
diff --git a/python/pyspark/context.py b/python/pyspark/context.py
index 11d75f4f99a..fb6cb54d021 100644
--- a/python/pyspark/context.py
+++ b/python/pyspark/context.py
@@ -306,6 +306,14 @@ class SparkContext:
         self.pythonExec = os.environ.get("PYSPARK_PYTHON", "python3")
         self.pythonVer = "%d.%d" % sys.version_info[:2]
 
+        if sys.version_info[:2] < (3, 8):
+            with warnings.catch_warnings():
+                warnings.simplefilter("once")
+                warnings.warn(
+                    "Python 3.7 support is deprecated in Spark 3.4.",
+                    FutureWarning
+                )
+
         # Broadcast's __reduce__ method stores Broadcast instances here.
         # This allows other code to determine which Broadcast instances have
         # been pickled, so it can determine which Java broadcast objects to
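
A note on the idiom above: wrapping `warnings.warn` in
`warnings.catch_warnings()` with `simplefilter("once")` lets the deprecation
notice surface even if the caller has filtered out `FutureWarning` globally,
and restores the caller's filter state when the block exits. A standalone
sketch of the same pattern (the helper name is illustrative, not from the
patch):

    import sys
    import warnings

    def warn_if_deprecated_python() -> None:
        # Hypothetical helper mirroring the check added to
        # SparkContext.__init__ in this commit.
        if sys.version_info[:2] < (3, 8):
            # Temporarily replace the warning filters so the message is
            # shown; the previous filters are restored on exit from the
            # context manager.
            with warnings.catch_warnings():
                warnings.simplefilter("once")
                warnings.warn(
                    "Python 3.7 support is deprecated in Spark 3.4.",
                    FutureWarning,
                )

    warn_if_deprecated_python()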

