Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/10/24 07:19:11 UTC

[GitHub] [spark] wangyum commented on a diff in pull request #38047: [SPARK-40609][SQL] Casts types according to bucket info for Equality expressions

wangyum commented on code in PR #38047:
URL: https://github.com/apache/spark/pull/38047#discussion_r1002950567


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala:
##########
@@ -751,6 +753,49 @@ abstract class TypeCoercionBase {
     }
   }
 
+  /**
+   * Casts types according to bucket info for Equality expressions.
+   */
+  object EqualityTypeCasts extends TypeCoercionRule {
+
+    private def isIntegralTypes(exprs: Seq[Expression]): Boolean = {
+      exprs.map(_.dataType).forall {
+        case _: IntegralType => true
+        case DecimalType.Fixed(_, 0) => true
+        case _ => false
+      }
+    }
+
+    override val transform: PartialFunction[Expression, Expression] = {
+      case b @ Equality(l: Attribute, r: Attribute)
+          if b.childrenResolved && l.dataType != r.dataType && isIntegralTypes(b.children) =>
+        // The result type is:
+        // 1. The data type of the attribute with the larger bucket number.
+        // 2. The data type of the attribute with the larger default size, if neither side is bucketed.

Review Comment:
   Yes, adding a cast to either side is correct for equality expressions.
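
As an illustration of the selection logic quoted above, here is a minimal sketch, not the PR's actual implementation: the `bucketNumFor` hook (looking up an attribute's bucket number) and the handling of the case where only one side is bucketed are assumptions made for this example; the real rule would derive the bucket info from the plan.

```scala
import org.apache.spark.sql.catalyst.expressions.{Attribute, Cast, Expression}
import org.apache.spark.sql.types.DataType

// Illustrative sketch only. `bucketNumFor` is an assumed hook that returns the
// bucket number of the table an attribute comes from, if that attribute is a
// bucket column.
object EqualityCastSketch {

  def castSides(
      l: Attribute,
      r: Attribute,
      bucketNumFor: Attribute => Option[Int]): (Expression, Expression) = {
    // Pick the target type following the comment in the quoted diff:
    // 1. the data type of the attribute with the larger bucket number;
    // 2. otherwise, the attribute data type with the larger default size.
    val target: DataType = (bucketNumFor(l), bucketNumFor(r)) match {
      case (Some(ln), Some(rn)) => if (ln >= rn) l.dataType else r.dataType
      // Assumption: if only one side is bucketed, prefer that side's type so
      // the bucketed column stays un-cast.
      case (Some(_), None) => l.dataType
      case (None, Some(_)) => r.dataType
      case (None, None) =>
        if (l.dataType.defaultSize >= r.dataType.defaultSize) l.dataType else r.dataType
    }
    // Cast only the side whose type differs from the target.
    val newL: Expression = if (l.dataType == target) l else Cast(l, target)
    val newR: Expression = if (r.dataType == target) r else Cast(r, target)
    (newL, newR)
  }
}
```

Casting only the non-matching side leaves the bucketed column in its original type, so bucket pruning and bucketed joins can still apply, which fits the reviewer's point that the cast may go on either side of an equality.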



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org