Posted to commits@spark.apache.org by gu...@apache.org on 2023/02/02 01:38:29 UTC
[spark] branch master updated: [SPARK-42275][CONNECT][PYTHON] Avoid using built-in list, dict in static typing
This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 8cdd2683101 [SPARK-42275][CONNECT][PYTHON] Avoid using built-in list, dict in static typing
8cdd2683101 is described below
commit 8cdd2683101eef29f003a5d5c7c4fad14bf00cf0
Author: Ruifeng Zheng <ru...@apache.org>
AuthorDate: Thu Feb 2 10:38:17 2023 +0900
[SPARK-42275][CONNECT][PYTHON] Avoid using built-in list, dict in static typing
### What changes were proposed in this pull request?
Avoid using the built-in `list` and `dict` types in static type annotations; use the `typing` aliases (`List`, `Dict`, `Sequence`) instead.
### Why are the changes needed?
Subscripting built-in collection types in annotations (e.g. `list[str]`) raises a runtime error on Python 3.8; generic built-ins were only introduced in Python 3.9 by PEP 585, see https://peps.python.org/pep-0585/
Python 3.8:
```
>>> from typing import List
>>> a : List[str] = ["a", "b"]
>>> a : list[str] = ["a", "b"]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'type' object is not subscriptable
```
Python 3.9:
```
>>> from typing import List
>>> a : List[str] = ["a", "b"]
>>> a : list[str] = ["a", "b"]
```
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Manually checked.
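For context, PEP 563 offers an alternative that this patch does not use: `from __future__ import annotations` postpones evaluation of annotations, so built-in generics can appear in annotations even on Python 3.8 (they are stored as strings and never subscripted at runtime). A hedged sketch:

```python
# With postponed evaluation, this module imports cleanly on Python 3.8,
# because the `list[str]` annotation below is never evaluated at runtime.
from __future__ import annotations

def head(items: list[str]) -> str:
    """Return the first element of a non-empty list."""
    return items[0]

print(head(["a", "b"]))
# a
```

The trade-off is that annotations become strings, which affects tools that resolve them at runtime (e.g. `typing.get_type_hints` would still fail to resolve `list[str]` on 3.8), so sticking with the `typing` aliases is the safer choice here.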
Closes #39844 from zhengruifeng/connect_fix_list.
Authored-by: Ruifeng Zheng <ru...@apache.org>
Signed-off-by: Hyukjin Kwon <gu...@apache.org>
---
python/pyspark/sql/connect/expressions.py | 2 +-
python/pyspark/sql/connect/plan.py | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/python/pyspark/sql/connect/expressions.py b/python/pyspark/sql/connect/expressions.py
index 04d70beeddd..27ffa51bf87 100644
--- a/python/pyspark/sql/connect/expressions.py
+++ b/python/pyspark/sql/connect/expressions.py
@@ -132,7 +132,7 @@ class CaseWhen(Expression):
class ColumnAlias(Expression):
- def __init__(self, parent: Expression, alias: list[str], metadata: Any):
+ def __init__(self, parent: Expression, alias: Sequence[str], metadata: Any):
self._alias = alias
self._metadata = metadata
diff --git a/python/pyspark/sql/connect/plan.py b/python/pyspark/sql/connect/plan.py
index 9a560c84ea4..fcc9f3ff4cb 100644
--- a/python/pyspark/sql/connect/plan.py
+++ b/python/pyspark/sql/connect/plan.py
@@ -1336,7 +1336,7 @@ class WriteOperation(LogicalPlan):
self.mode: Optional[str] = None
self.sort_cols: List[str] = []
self.partitioning_cols: List[str] = []
- self.options: dict[str, Optional[str]] = {}
+ self.options: Dict[str, Optional[str]] = {}
self.num_buckets: int = -1
self.bucket_cols: List[str] = []
---------------------------------------------------------------------