Posted to reviews@spark.apache.org by "itholic (via GitHub)" <gi...@apache.org> on 2023/09/19 12:02:32 UTC

[GitHub] [spark] itholic opened a new pull request, #42994: [SPARK-43433][PS] Match `GroupBy.nth` behavior to the latest Pandas

itholic opened a new pull request, #42994:
URL: https://github.com/apache/spark/pull/42994

   ### What changes were proposed in this pull request?
   
   This PR proposes to match `GroupBy.nth` behavior to the latest Pandas.
   
   ### Why are the changes needed?
   
   To match the behavior of Pandas 2.0.0 and above.
   
   ### Does this PR introduce _any_ user-facing change?

   Yes. `GroupBy.nth` now preserves the original row index instead of using the group keys as the index, matching pandas 2.0 and above.

   **Test DataFrame**
   ```python
   >>> psdf = ps.DataFrame(
   ...     {
   ...         "A": [1, 2, 1, 2],
   ...         "B": [3.1, 4.1, 4.1, 3.1],
   ...         "C": ["a", "b", "b", "a"],
   ...         "D": [True, False, False, True],
   ...     }
   ... )
   >>> psdf
      A    B  C      D
   0  1  3.1  a   True
   1  2  4.1  b  False
   2  1  4.1  b  False
   3  2  3.1  a   True
   ```
   **Before fixing**
   ```python
   >>> psdf.groupby("A").nth(-1)
        B  C      D
   A
   1  4.1  b  False
   2  3.1  a   True
   >>> psdf.groupby("A")[["C"]].nth(-1)
      C
   A
   1  b
   2  a
   >>> psdf.groupby("A")["B"].nth(-1)
   A
   1    4.1
   2    3.1
   Name: B, dtype: float64
   ```
   **After fixing**
   ```python
   >>> psdf.groupby("A").nth(-1)
      A    B  C      D
   2  1  4.1  b  False
   3  2  3.1  a   True
   >>> psdf.groupby("A")[["C"]].nth(-1)
      C
   2  b
   3  a
   >>> psdf.groupby("A")["B"].nth(-1)
   2    4.1
   3    3.1
   Name: B, dtype: float64
   ```
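
   The new output reflects the pandas ≥ 2.0 semantics, where `nth` is no longer an aggregation: it returns the selected rows with their original index, and the group key remains a data column. A minimal pure-Python sketch of that behavior (the helper `nth_per_group` is hypothetical, for illustration only, and is not part of this PR's implementation):

   ```python
   def nth_per_group(rows, key, n):
       """Mimic pandas >= 2.0 GroupBy.nth: pick the n-th row of each
       group while preserving each row's original index (no aggregation,
       no index reset, group key kept as a data column)."""
       groups = {}  # group value -> list of (original_index, row)
       for idx, row in enumerate(rows):
           groups.setdefault(row[key], []).append((idx, row))
       picked = []
       for members in groups.values():
           try:
               picked.append(members[n])  # negative n (e.g. -1) works too
           except IndexError:
               pass  # group has fewer than n rows: contributes nothing
       picked.sort(key=lambda pair: pair[0])  # restore original row order
       return picked

   rows = [
       {"A": 1, "B": 3.1, "C": "a", "D": True},
       {"A": 2, "B": 4.1, "C": "b", "D": False},
       {"A": 1, "B": 4.1, "C": "b", "D": False},
       {"A": 2, "B": 3.1, "C": "a", "D": True},
   ]
   result = nth_per_group(rows, "A", -1)
   # rows with original indices 2 and 3 survive, "A" still a data column
   print(result)
   ```

   Note how the surviving indices (2 and 3) match the "After fixing" output above, whereas the old behavior re-indexed by the group key.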
   
   ### How was this patch tested?
   
   Enabling the existing tests & updating the doctests.
   
   
   ### Was this patch authored or co-authored using generative AI tooling?
   
   No.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] [spark] HyukjinKwon closed pull request #42994: [SPARK-43433][PS] Match `GroupBy.nth` behavior to the latest Pandas

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon closed pull request #42994: [SPARK-43433][PS] Match `GroupBy.nth` behavior to the latest Pandas
URL: https://github.com/apache/spark/pull/42994




[GitHub] [spark] itholic commented on pull request #42994: [SPARK-43433][PS] Match `GroupBy.nth` behavior to the latest Pandas

Posted by "itholic (via GitHub)" <gi...@apache.org>.
itholic commented on PR #42994:
URL: https://github.com/apache/spark/pull/42994#issuecomment-1726884937

   I don't see any failures in my local testing, but it fails on GitHub CI. Let me take a deeper look at why this happens.




[GitHub] [spark] HyukjinKwon commented on pull request #42994: [SPARK-43433][PS] Match `GroupBy.nth` behavior to the latest Pandas

Posted by "HyukjinKwon (via GitHub)" <gi...@apache.org>.
HyukjinKwon commented on PR #42994:
URL: https://github.com/apache/spark/pull/42994#issuecomment-1730762760

   Merged to master.




[GitHub] [spark] itholic commented on a diff in pull request #42994: [SPARK-43433][PS] Match `GroupBy.nth` behavior to the latest Pandas

Posted by "itholic (via GitHub)" <gi...@apache.org>.
itholic commented on code in PR #42994:
URL: https://github.com/apache/spark/pull/42994#discussion_r1331072003


##########
python/pyspark/pandas/groupby.py:
##########
@@ -1155,14 +1152,32 @@ def nth(self, n: int) -> FrameLike:
         else:
             sdf = sdf.select(*groupkey_names).distinct()
 
-        internal = internal.copy(
+        agg_columns = []
+        if not self._agg_columns_selected:

Review Comment:
   `groupkeys` becomes a data column instead of an index column when it is not included in `agg_columns_selected`. Since Pandas 2.0.0, `nth` must keep the original index unchanged, so we should include the `groupkeys` in `agg_columns` to turn them into data columns and construct a new `InternalFrame` manually.


