Posted to reviews@spark.apache.org by "ueshin (via GitHub)" <gi...@apache.org> on 2023/05/04 00:04:05 UTC

[GitHub] [spark] ueshin opened a new pull request, #41041: [SPARK-43363][SQL][PYTHON] Remove a workaround for pandas categorical type for pyarrow

ueshin opened a new pull request, #41041:
URL: https://github.com/apache/spark/pull/41041

   ### What changes were proposed in this pull request?
   
   Removes a workaround for pandas categorical type for pyarrow.
   
   ### Why are the changes needed?
   
   Now that the minimum version of pyarrow is `1.0.0`, a workaround for pandas' categorical type for pyarrow can be removed.
   
   > Note: This can be removed once minimum pyarrow version is >= 0.16.1
   
   ### Does this PR introduce _any_ user-facing change?
   
   No.
   
   ### How was this patch tested?
   
   Existing tests.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] [spark] ueshin commented on a diff in pull request #41041: [SPARK-43363][SQL][PYTHON] Remove a workaround for pandas categorical type for pyarrow

Posted by "ueshin (via GitHub)" <gi...@apache.org>.
ueshin commented on code in PR #41041:
URL: https://github.com/apache/spark/pull/41041#discussion_r1184526296


##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -226,9 +225,6 @@ def create_array(s, t):
                 s = _check_series_convert_timestamps_internal(s, self._timezone)
             elif t is not None and pa.types.is_map(t):
                 s = _convert_dict_to_map_items(s)
-            elif is_categorical_dtype(s.dtype):
-                # Note: This can be removed once minimum pyarrow version is >= 0.16.1
-                s = s.astype(s.dtypes.categories.dtype)

Review Comment:
   @BryanCutler It seems that if `t is None`, `pa.Array.from_pandas(s, mask=mask, type=t, safe=self._safecheck)` handles the categorical type as an integer (`tinyint`, from its codes) instead of the type of `s.dtypes.categories.dtype`. Is that the expected behavior?





[GitHub] [spark] BryanCutler commented on a diff in pull request #41041: [SPARK-43363][SQL][PYTHON] Remove a workaround for pandas categorical type for pyarrow

Posted by "BryanCutler (via GitHub)" <gi...@apache.org>.
BryanCutler commented on code in PR #41041:
URL: https://github.com/apache/spark/pull/41041#discussion_r1185559592


##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -226,9 +225,6 @@ def create_array(s, t):
                 s = _check_series_convert_timestamps_internal(s, self._timezone)
             elif t is not None and pa.types.is_map(t):
                 s = _convert_dict_to_map_items(s)
-            elif is_categorical_dtype(s.dtype):
-                # Note: This can be removed once minimum pyarrow version is >= 0.16.1
-                s = s.astype(s.dtypes.categories.dtype)

Review Comment:
   From what I remember, for versions >= 0.16.1 pyarrow would automatically cast a categorical type to the requested type of `s`, and the result was the correct type without this elif block. This is the comment where it was brought up: https://github.com/apache/spark/pull/26585#discussion_r401853266
   
   I remember testing it out locally without this, but that was quite a while ago so something might have changed.





[GitHub] [spark] ueshin commented on a diff in pull request #41041: [SPARK-43363][SQL][PYTHON] Remove a workaround for pandas categorical type for pyarrow

Posted by "ueshin (via GitHub)" <gi...@apache.org>.
ueshin commented on code in PR #41041:
URL: https://github.com/apache/spark/pull/41041#discussion_r1184531070


##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -226,9 +225,6 @@ def create_array(s, t):
                 s = _check_series_convert_timestamps_internal(s, self._timezone)
             elif t is not None and pa.types.is_map(t):
                 s = _convert_dict_to_map_items(s)
-            elif is_categorical_dtype(s.dtype):
-                # Note: This can be removed once minimum pyarrow version is >= 0.16.1
-                s = s.astype(s.dtypes.categories.dtype)

Review Comment:
   So we can't remove it if `t is None`; otherwise we would need to support the `dictionary` type.





[GitHub] [spark] BryanCutler commented on a diff in pull request #41041: [SPARK-43363][SQL][PYTHON] Remove a workaround for pandas categorical type for pyarrow

Posted by "BryanCutler (via GitHub)" <gi...@apache.org>.
BryanCutler commented on code in PR #41041:
URL: https://github.com/apache/spark/pull/41041#discussion_r1185559592


##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -226,9 +225,6 @@ def create_array(s, t):
                 s = _check_series_convert_timestamps_internal(s, self._timezone)
             elif t is not None and pa.types.is_map(t):
                 s = _convert_dict_to_map_items(s)
-            elif is_categorical_dtype(s.dtype):
-                # Note: This can be removed once minimum pyarrow version is >= 0.16.1
-                s = s.astype(s.dtypes.categories.dtype)

Review Comment:
   From what I remember, for versions >= 0.16.1 pyarrow would automatically cast a categorical type to the requested type of `s`, and the result was the correct type without this elif block. This is the comment where it was brought up: https://github.com/apache/spark/pull/26585#discussion_r401853266
   
   I remember testing it out locally without this, but that was quite a while ago, so something might have changed. That might also have only been the case when `t is not None`, so it makes sense to keep the cast for the `t is None` case.





[GitHub] [spark] HyukjinKwon commented on pull request #41041: [SPARK-43363][SQL][PYTHON] Make to call `astype` to the category type only when the arrow type is not provided

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on PR #41041:
URL: https://github.com/apache/spark/pull/41041#issuecomment-1535567226

   Merged to master.




[GitHub] [spark] ueshin commented on a diff in pull request #41041: [SPARK-43363][SQL][PYTHON] Remove a workaround for pandas categorical type for pyarrow

Posted by "ueshin (via GitHub)" <gi...@apache.org>.
ueshin commented on code in PR #41041:
URL: https://github.com/apache/spark/pull/41041#discussion_r1184530406


##########
python/pyspark/sql/pandas/serializers.py:
##########
@@ -226,9 +225,6 @@ def create_array(s, t):
                 s = _check_series_convert_timestamps_internal(s, self._timezone)
             elif t is not None and pa.types.is_map(t):
                 s = _convert_dict_to_map_items(s)
-            elif is_categorical_dtype(s.dtype):
-                # Note: This can be removed once minimum pyarrow version is >= 0.16.1
-                s = s.astype(s.dtypes.categories.dtype)

Review Comment:
   Ah, it will be `dictionary<values=string, indices=int8, ordered=0>` if we don't specify `type`.





[GitHub] [spark] HyukjinKwon closed pull request #41041: [SPARK-43363][SQL][PYTHON] Make to call `astype` to the category type only when the arrow type is not provided

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon closed pull request #41041: [SPARK-43363][SQL][PYTHON] Make to call `astype` to the category type only when the arrow type is not provided
URL: https://github.com/apache/spark/pull/41041

