Posted to reviews@spark.apache.org by "cloud-fan (via GitHub)" <gi...@apache.org> on 2023/09/27 07:07:48 UTC

[GitHub] [spark] cloud-fan commented on a diff in pull request #42828: [SPARK-45088][PYTHON][CONNECT] Make `getitem` work with duplicated columns

cloud-fan commented on code in PR #42828:
URL: https://github.com/apache/spark/pull/42828#discussion_r1338141627


##########
python/pyspark/sql/tests/test_dataframe.py:
##########
@@ -63,6 +62,51 @@
 
 
 class DataFrameTestsMixin:
+    def test_getitem_invalid_indices(self):
+        df = self.spark.sql(
+            "SELECT * FROM VALUES "
+            "(1, 1.1, 'a'), "
+            "(2, 2.2, 'b'), "
+            "(4, 4.4, 'c') "
+            "AS TAB(a, b, c)"
+        )
+
+        # accepted type and values
+        for index in [False, True, 0, 1, 2, -1, -2, -3]:
+            df[index]

Review Comment:
   This is really a bad API. `df.col` can be ambiguous because people may use the column reference far away from the dataframe that produced it, e.g. `df1.join(df2).select...filter...select(df1.col)`. We recommend that users use a qualified unresolved column instead, like `col("t1.col")`. `df[index]` is even worse, as it only makes sense when used immediately within the current dataframe's transformation.
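   
   [Editor's note: a minimal sketch of the qualified-column pattern recommended above; the dataframes, aliases, and column names are made up for illustration and are not from the PR.]
   
   ```python
   from pyspark.sql import SparkSession
   from pyspark.sql.functions import col

   spark = SparkSession.builder.getOrCreate()

   df1 = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "v"])
   df2 = spark.createDataFrame([(1, "x"), (2, "y")], ["id", "w"])

   # Fragile: df1["id"] is a reference bound to df1's plan; when used far
   # away from df1, after joins and projections, it can become ambiguous.
   joined = df1.join(df2, df1["id"] == df2["id"])

   # Recommended: alias the inputs and use qualified unresolved columns,
   # which are resolved by name against the plan they appear in.
   result = (
       df1.alias("t1")
       .join(df2.alias("t2"), col("t1.id") == col("t2.id"))
       .select(col("t1.id"), col("t2.w"))
   )
   ```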
   
   Why do we add such an API? To support ordering by ordinal, we can simply order by integer literals. The SQL parser likewise parses `ORDER BY 1, 2` as ordering by the integer literals 1 and 2, and the analyzer will resolve them properly.
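   
   [Editor's note: a minimal sketch of order-by-ordinal on the SQL path, assuming the default `spark.sql.orderByOrdinal=true` and reusing the `spark` session from the sketch above; the data is illustrative.]
   
   ```python
   # ORDER BY 1 is parsed as the integer literal 1, which the analyzer
   # resolves as an ordinal, i.e. sort by the first column (a).
   df = spark.sql(
       "SELECT * FROM VALUES (2, 'b'), (1, 'a') AS TAB(a, b) ORDER BY 1"
   )
   df.show()  # rows come back ordered by column a: (1, 'a') then (2, 'b')
   ```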
   
   cc @HyukjinKwon @zhengruifeng 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

