Posted to reviews@spark.apache.org by "zhengruifeng (via GitHub)" <gi...@apache.org> on 2024/02/27 09:11:59 UTC

[PR] [SPARK-47184][PYTHON][CONNECT][TESTS] Make `test_repartitionByRange_dataframe` reusable [spark]

zhengruifeng opened a new pull request, #45281:
URL: https://github.com/apache/spark/pull/45281

   ### What changes were proposed in this pull request?
   Make `test_repartitionByRange_dataframe` reusable
   
   ### Why are the changes needed?
   To make this test reusable from Spark Connect: the assertions it currently relies on go through `df.rdd`, and the RDD API is not available to Spark Connect DataFrames.
   
   
   ### Does this PR introduce _any_ user-facing change?
   no, test-only
   
   
   ### How was this patch tested?
   Updated the existing unit test.
   
   
   ### Was this patch authored or co-authored using generative AI tooling?
   no
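
   For illustration, a minimal self-contained sketch of the DataFrame-only partition check this PR moves the test to (the exact change appears in the review diff further down); the SparkSession setup, data, and column names here are assumptions made for the example, not part of the PR:

   from pyspark.sql import SparkSession
   from pyspark.sql.functions import spark_partition_id

   spark = SparkSession.builder.master("local[2]").getOrCreate()

   df1 = spark.createDataFrame(
       [("Alice", 20), ("Bob", 30), ("Carol", 40)], ["name", "age"]
   )
   df3 = df1.repartitionByRange(2, "name", "age")

   # DataFrame-only check on the number of non-empty partitions; it works on
   # both classic PySpark and Spark Connect, unlike df3.rdd.getNumPartitions(),
   # which needs the RDD API. With this data both range partitions receive
   # rows, so the count is 2.
   assert df3.select(spark_partition_id()).distinct().count() == 2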


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


Re: [PR] [SPARK-47184][PYTHON][CONNECT][TESTS] Make `test_repartitionByRange_dataframe` reusable [spark]

Posted by "zhengruifeng (via GitHub)" <gi...@apache.org>.
zhengruifeng commented on code in PR #45281:
URL: https://github.com/apache/spark/pull/45281#discussion_r1503891947


##########
python/pyspark/sql/tests/test_dataframe.py:
##########
@@ -483,20 +483,21 @@ def test_repartitionByRange_dataframe(self):
 
         # test repartitionByRange(numPartitions, *cols)
         df3 = df1.repartitionByRange(2, "name", "age")
-        self.assertEqual(df3.rdd.getNumPartitions(), 2)
-        self.assertEqual(df3.rdd.first(), df2.rdd.first())
-        self.assertEqual(df3.rdd.take(3), df2.rdd.take(3))
+
+        self.assertEqual(df3.select(spark_partition_id()).distinct().count(), 2)

Review Comment:
   `df.select(spark_partition_id()).distinct().count()` is not always equivalent to `df.rdd.getNumPartitions()` (e.g. when some partitions are empty, or when AQE rules such as CoalesceShufflePartitions change the final partition count), but in this UT they are the same
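
   To make that caveat concrete, here is a minimal sketch (a local SparkSession and a toy single-row DataFrame are assumptions for the example, not taken from the test) where the two counts diverge because most of the range partitions end up empty:

   from pyspark.sql import SparkSession
   from pyspark.sql.functions import spark_partition_id

   spark = SparkSession.builder.master("local[4]").getOrCreate()

   # One row spread over 4 range partitions: only one partition can hold it.
   df = spark.range(1).repartitionByRange(4, "id")

   # Physical partition count of the underlying RDD; typically 4 here because
   # the partition number was given explicitly.
   print(df.rdd.getNumPartitions())

   # Number of partitions that actually contain rows; 1 in this toy case.
   print(df.select(spark_partition_id()).distinct().count())

   In the updated UT the data fills both requested partitions, so the two checks happen to agree, as noted above.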





Re: [PR] [SPARK-47184][PYTHON][CONNECT][TESTS] Make `test_repartitionByRange_dataframe` reusable [spark]

Posted by "zhengruifeng (via GitHub)" <gi...@apache.org>.
zhengruifeng closed pull request #45281: [SPARK-47184][PYTHON][CONNECT][TESTS] Make `test_repartitionByRange_dataframe` reusable
URL: https://github.com/apache/spark/pull/45281





Re: [PR] [SPARK-47184][PYTHON][CONNECT][TESTS] Make `test_repartitionByRange_dataframe` reusable [spark]

Posted by "zhengruifeng (via GitHub)" <gi...@apache.org>.
zhengruifeng commented on PR #45281:
URL: https://github.com/apache/spark/pull/45281#issuecomment-1966216321

   merged to master

