Posted to reviews@spark.apache.org by "xinrong-meng (via GitHub)" <gi...@apache.org> on 2023/04/20 00:38:58 UTC

[GitHub] [spark] xinrong-meng opened a new pull request, #40864: Nested DataType compatibility in Arrow-optimized Python UDF

xinrong-meng opened a new pull request, #40864:
URL: https://github.com/apache/spark/pull/40864
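
   For context, a minimal sketch of the kind of call the title refers to; the `useArrow` flag and the example schema are assumptions for this sketch, not taken from this PR:

   ```py
   from pyspark.sql import SparkSession
   from pyspark.sql.functions import udf

   spark = SparkSession.builder.getOrCreate()

   # A column with a nested data type: an array of structs.
   df = spark.createDataFrame(
       [([(1, "x"), (2, "y")],)],
       "items array<struct<a:int,b:string>>",
   )

   # An Arrow-optimized Python UDF reading the nested values.
   @udf(returnType="int", useArrow=True)
   def count_items(items):
       return len(items)

   df.select(count_items("items")).show()
   ```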

   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   
   ### How was this patch tested?
   
   




[GitHub] [spark] jaceklaskowski commented on a diff in pull request #40864: [WIP] Nested DataType compatibility in Arrow-optimized Python UDF and Pandas UDF

Posted by "jaceklaskowski (via GitHub)" <gi...@apache.org>.
jaceklaskowski commented on code in PR #40864:
URL: https://github.com/apache/spark/pull/40864#discussion_r1172980822


##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -167,26 +174,23 @@ def __init__(self, timezone, safecheck, assign_cols_by_name):
         self._safecheck = safecheck
         self._assign_cols_by_name = assign_cols_by_name
 
-    def arrow_to_pandas(self, arrow_column):
-        from pyspark.sql.pandas.types import (
-            _check_series_localize_timestamps,
-            _convert_map_items_to_dict,
-        )
-        import pyarrow
-
-        # If the given column is a date type column, creates a series of datetime.date directly
-        # instead of creating datetime64[ns] as intermediate data to avoid overflow caused by
-        # datetime64[ns] type handling.
-        s = arrow_column.to_pandas(date_as_object=True)
+    def arrow_to_pandas(self, arrow_column) -> pd.Series:
+        """Convert an Arrow column to a pandas Seires."""

Review Comment:
   nit: "Converts", plus a typo in "Series" (perhaps I'd even opt for `pandas.Series` in the doc)
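   
   For illustration, a minimal sketch with the nit applied (the class stub and the simplified body are assumptions for this sketch, not the PR's code):
   
   ```py
   import pandas as pd
   
   class ArrowStreamPandasSerializer:  # stub standing in for the real serializer
       def arrow_to_pandas(self, arrow_column) -> pd.Series:
           """Converts an Arrow column to a `pandas.Series`."""
           # Simplified body; the hunk above shows the full conversion logic.
           return arrow_column.to_pandas(date_as_object=True)
   ```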



##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -167,26 +174,23 @@ def __init__(self, timezone, safecheck, assign_cols_by_name):
         self._safecheck = safecheck
         self._assign_cols_by_name = assign_cols_by_name
 
-    def arrow_to_pandas(self, arrow_column):
-        from pyspark.sql.pandas.types import (
-            _check_series_localize_timestamps,
-            _convert_map_items_to_dict,
-        )
-        import pyarrow
-
-        # If the given column is a date type column, creates a series of datetime.date directly
-        # instead of creating datetime64[ns] as intermediate data to avoid overflow caused by
-        # datetime64[ns] type handling.
-        s = arrow_column.to_pandas(date_as_object=True)
+    def arrow_to_pandas(self, arrow_column) -> pd.Series:
+        """Convert an Arrow column to a pandas Seires."""
+        pser = arrow_column.to_pandas(date_as_object=True)
+        if type_require_conversion(arrow_column.type):
+            # We have to do per-value conversion for nested types.
+            return pser.map(lambda v: recursive_convert_inputs(v, arrow_column.type))

Review Comment:
   "Fluent Python" 2nd Ed p. 27 says: "I used to believe that `map` and `filter` were faster than the equivalent listcomps, but Alex Martelli pointed out that's not the case". It's also said that using `listcomps` is more pythonic.
   
   Can we rewrite the `map` to the following?
   
   ```
   return [recursive_convert_inputs(v, arrow_column.type) for v in pser]
   ```
   
   I'm curious if it's possible here (didn't check the types) and commenting to learn more Python 😉 
   





[GitHub] [spark] xinrong-meng commented on a diff in pull request #40864: [WIP] Nested DataType compatibility in Arrow-optimized Python UDF and Pandas UDF

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on code in PR #40864:
URL: https://github.com/apache/spark/pull/40864#discussion_r1173139423


##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -167,26 +174,23 @@ def __init__(self, timezone, safecheck, assign_cols_by_name):
         self._safecheck = safecheck
         self._assign_cols_by_name = assign_cols_by_name
 
-    def arrow_to_pandas(self, arrow_column):
-        from pyspark.sql.pandas.types import (
-            _check_series_localize_timestamps,
-            _convert_map_items_to_dict,
-        )
-        import pyarrow
-
-        # If the given column is a date type column, creates a series of datetime.date directly
-        # instead of creating datetime64[ns] as intermediate data to avoid overflow caused by
-        # datetime64[ns] type handling.
-        s = arrow_column.to_pandas(date_as_object=True)
+    def arrow_to_pandas(self, arrow_column) -> pd.Series:
+        """Convert an Arrow column to a pandas Seires."""

Review Comment:
   Thanks!





[GitHub] [spark] xinrong-meng commented on a diff in pull request #40864: [WIP] Nested DataType compatibility in Arrow-optimized Python UDF and Pandas UDF

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on code in PR #40864:
URL: https://github.com/apache/spark/pull/40864#discussion_r1173143012


##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -167,26 +174,23 @@ def __init__(self, timezone, safecheck, assign_cols_by_name):
         self._safecheck = safecheck
         self._assign_cols_by_name = assign_cols_by_name
 
-    def arrow_to_pandas(self, arrow_column):
-        from pyspark.sql.pandas.types import (
-            _check_series_localize_timestamps,
-            _convert_map_items_to_dict,
-        )
-        import pyarrow
-
-        # If the given column is a date type column, creates a series of datetime.date directly
-        # instead of creating datetime64[ns] as intermediate data to avoid overflow caused by
-        # datetime64[ns] type handling.
-        s = arrow_column.to_pandas(date_as_object=True)
+    def arrow_to_pandas(self, arrow_column) -> pd.Series:
+        """Convert an Arrow column to a pandas Seires."""
+        pser = arrow_column.to_pandas(date_as_object=True)
+        if type_require_conversion(arrow_column.type):
+            # We have to do per-value conversion for nested types.
+            return pser.map(lambda v: recursive_convert_inputs(v, arrow_column.type))

Review Comment:
   Thanks for sharing! We expect a pandas Series here, so `pser.map` is more appropriate given the return type: the list comprehension would produce a plain Python list instead.
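   
   For illustration, a minimal sketch of the difference in plain pandas (`convert` below is a hypothetical stand-in for `recursive_convert_inputs`):
   
   ```py
   import pandas as pd
   
   pser = pd.Series([[1, 2], [3]])
   
   def convert(v):
       # Stand-in for recursive_convert_inputs(v, arrow_column.type).
       return tuple(v)
   
   mapped = pser.map(convert)           # stays a pandas.Series, index preserved
   listed = [convert(v) for v in pser]  # becomes a plain Python list
   
   print(type(mapped))  # <class 'pandas.core.series.Series'>
   print(type(listed))  # <class 'list'>
   ```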





[GitHub] [spark] HyukjinKwon commented on pull request #40864: [WIP] Nested DataType compatibility in Arrow-optimized Python UDF and Pandas UDF

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on PR #40864:
URL: https://github.com/apache/spark/pull/40864#issuecomment-1517079636

   cc @haiyangsun-db FYI




[GitHub] [spark] xinrong-meng commented on pull request #40864: [WIP] Nested DataType compatibility in Arrow-optimized Python UDF and Pandas UDF

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng commented on PR #40864:
URL: https://github.com/apache/spark/pull/40864#issuecomment-1518288506

   On second thought, we'd better not touch Pandas UDFs, to preserve backward compatibility. Let me close this PR and work on a new prototype.




[GitHub] [spark] xinrong-meng closed pull request #40864: [WIP] Nested DataType compatibility in Arrow-optimized Python UDF and Pandas UDF

Posted by "xinrong-meng (via GitHub)" <gi...@apache.org>.
xinrong-meng closed pull request #40864: [WIP] Nested DataType compatibility in Arrow-optimized Python UDF and Pandas UDF
URL: https://github.com/apache/spark/pull/40864

