Posted to commits@spark.apache.org by we...@apache.org on 2022/07/05 05:54:36 UTC
[spark] branch master updated: [SPARK-39453][SQL][TESTS][FOLLOWUP] Let `RAND` in filter is more meaningful
This is an automated email from the ASF dual-hosted git repository.
wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new 3c9b296928a [SPARK-39453][SQL][TESTS][FOLLOWUP] Let `RAND` in filter is more meaningful
3c9b296928a is described below
commit 3c9b296928a9accac0c330ce4bcd8da1dd05850d
Author: Jiaan Geng <be...@163.com>
AuthorDate: Tue Jul 5 13:54:20 2022 +0800
[SPARK-39453][SQL][TESTS][FOLLOWUP] Let `RAND` in filter is more meaningful
### What changes were proposed in this pull request?
https://github.com/apache/spark/pull/36830 made DS V2 support push-down of misc non-aggregate functions (non-ANSI).
However, the `RAND` in the test case was not meaningful: `RAND(1) < 1` is always true, since `RAND` returns values in `[0, 1)`.
### Why are the changes needed?
Make the `RAND` in the filter more meaningful.
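For context, the old predicate `RAND(1) < 1` never filters anything, because a seeded random double always falls in `[0.0, 1.0)`. The sketch below illustrates this with plain `scala.util.Random` as a stand-in for Spark's `RAND(seed)` (a minimal illustration, not the Spark implementation itself):

```scala
// Sketch: why `RAND(1) < 1` is a meaningless filter predicate.
// A seeded random double is always in [0.0, 1.0), so comparing it
// against 1 is trivially true; comparing against a column value
// (e.g. BONUS, as the updated test does) can actually filter rows.
object RandPredicateSketch {
  def main(args: Array[String]): Unit = {
    val rng = new scala.util.Random(1L) // stand-in for Spark's RAND(1)
    val samples = Seq.fill(1000)(rng.nextDouble())
    // Every sample is strictly below 1.0, so `RAND(1) < 1` filters nothing.
    println(samples.forall(_ < 1.0))
  }
}
```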
### Does this PR introduce _any_ user-facing change?
No.
This only updates a test case.
### How was this patch tested?
By updating the existing test case.
Closes #37033 from beliefer/SPARK-39453_followup.
Authored-by: Jiaan Geng <be...@163.com>
Signed-off-by: Wenchen Fan <we...@databricks.com>
---
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCV2Suite.scala | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCV2Suite.scala b/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCV2Suite.scala
index 1cc5f87e5fc..108348fbcd3 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCV2Suite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCV2Suite.scala
@@ -855,11 +855,11 @@ class JDBCV2Suite extends QueryTest with SharedSparkSession with ExplainSuiteHel
val df11 = sql(
"""
|SELECT * FROM h2.test.employee
- |WHERE GREATEST(bonus, 1100) > 1200 AND LEAST(salary, 10000) > 9000 AND RAND(1) < 1
+ |WHERE GREATEST(bonus, 1100) > 1200 AND RAND(1) < bonus
|""".stripMargin)
checkFiltersRemoved(df11)
checkPushedInfo(df11, "PushedFilters: " +
- "[(GREATEST(BONUS, 1100.0)) > 1200.0, (LEAST(SALARY, 10000.00)) > 9000.00, RAND(1) < 1.0]")
+ "[BONUS IS NOT NULL, (GREATEST(BONUS, 1100.0)) > 1200.0, RAND(1) < BONUS]")
checkAnswer(df11, Row(2, "david", 10000, 1300, true))
val df12 = sql(