Posted to reviews@spark.apache.org by "BryanCutler (via GitHub)" <gi...@apache.org> on 2023/05/24 19:21:39 UTC

[GitHub] [spark] BryanCutler commented on a diff in pull request #41240: [SPARK-43545][SQL][PYTHON] Support nested timestamp type

BryanCutler commented on code in PR #41240:
URL: https://github.com/apache/spark/pull/41240#discussion_r1204658486


##########
python/pyspark/sql/pandas/conversion.py:
##########
@@ -375,22 +379,105 @@ def _convert_from_pandas(
         assert isinstance(self, SparkSession)
 
         if timezone is not None:
-            from pyspark.sql.pandas.types import _check_series_convert_timestamps_tz_local
+            from pyspark.sql.pandas.types import (
+                _check_series_convert_timestamps_tz_local,
+                _get_local_timezone,
+            )
             from pandas.core.dtypes.common import is_datetime64tz_dtype, is_timedelta64_dtype
 
             copied = False
             if isinstance(schema, StructType):
-                for field in schema:
-                    # TODO: handle nested timestamps, such as ArrayType(TimestampType())?
-                    if isinstance(field.dataType, TimestampType):
-                        s = _check_series_convert_timestamps_tz_local(pdf[field.name], timezone)
-                        if s is not pdf[field.name]:
-                            if not copied:
-                                # Copy once if the series is modified to prevent the original
-                                # Pandas DataFrame from being updated
-                                pdf = pdf.copy()
-                                copied = True
-                            pdf[field.name] = s
+
+                def _create_converter(data_type: DataType) -> Callable[[pd.Series], pd.Series]:

Review Comment:
   Just wondering if you had considered "un-nesting" the Arrow field first, then applying the conversions to flat timestamp fields only, and then putting the nested fields back together again? It would be a little more complicated to do, but it would have the benefit of working the same way for any level of nesting, and it's easier to work with nested fields in Arrow than in Pandas.
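
   For illustration, a minimal sketch of what that suggestion could look like on the Arrow side (this is NOT code from the PR; the function name `convert_timestamps` and the recursion structure are assumptions, and it only handles struct and list nesting, ignoring map types and list-level nulls):

   ```python
   # Hypothetical sketch of the reviewer's idea: recursively "un-nest" an
   # Arrow array, convert the flat timestamp children, and rebuild the
   # original nested structure. Not the PR's implementation.
   import pyarrow as pa


   def convert_timestamps(arr: pa.Array, tz: str) -> pa.Array:
       """Recursively attach a timezone to timestamp values in nested arrays."""
       typ = arr.type
       if pa.types.is_timestamp(typ):
           # Flat timestamp field: cast to a timezone-aware timestamp type.
           return arr.cast(pa.timestamp(typ.unit, tz))
       if pa.types.is_struct(typ):
           # "Un-nest": convert each child, then zip them back into a struct.
           children = [
               convert_timestamps(arr.field(i), tz) for i in range(typ.num_fields)
           ]
           names = [typ.field(i).name for i in range(typ.num_fields)]
           return pa.StructArray.from_arrays(children, names=names)
       if pa.types.is_list(typ):
           # Convert the flattened values, then restore the original offsets.
           # (Null list slots would need extra handling; omitted here.)
           values = convert_timestamps(arr.values, tz)
           return pa.ListArray.from_arrays(arr.offsets, values)
       return arr
   ```

   The appeal of the approach is that the traversal logic is written once against Arrow's type system, so arbitrarily deep nesting (structs of lists of structs, and so on) falls out of the recursion rather than needing per-level Pandas handling.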



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

