Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/12/05 22:35:08 UTC

[GitHub] [spark] bersprockets opened a new pull request, #38923: `InterpretedMutableProjection` should use `setDecimal` to set null values for high-precision decimals in an unsafe row

bersprockets opened a new pull request, #38923:
URL: https://github.com/apache/spark/pull/38923

   ### What changes were proposed in this pull request?
   
   Change `InterpretedMutableProjection` to use `setDecimal` rather than `setNullAt` to set null values for high-precision decimals in unsafe rows.
   
   ### Why are the changes needed?
   
   The following returns the wrong answer:
   
   ```
   set spark.sql.codegen.wholeStage=false;
   set spark.sql.codegen.factoryMode=NO_CODEGEN;
   
   select max(col1), max(col2) from values
   (cast(null  as decimal(27,2)), cast(null   as decimal(27,2))),
   (cast(77.77 as decimal(27,2)), cast(245.00 as decimal(27,2)))
   as data(col1, col2);
   
   +---------+---------+
   |max(col1)|max(col2)|
   +---------+---------+
   |null     |239.88   |
   +---------+---------+
   ```
   This is because `InterpretedMutableProjection` inappropriately uses `InternalRow#setNullAt` on unsafe rows to set null for decimal types with precision > `Decimal.MAX_LONG_DIGITS`.
   
   When `setNullAt` is used, the pointer to the decimal's storage area in the variable-length region gets zeroed out. When `InterpretedMutableProjection` later calls `setDecimal` on that field, `UnsafeRow#setDecimal` picks up the zeroed pointer and stores the decimal data on top of the null-tracking bit set. Subsequent updates to the null-tracking bit set (e.g., calls to `setNotNullAt`) further corrupt the decimal data (turning 245.00 into 239.88, for example), and the stomping of the bit set can also make non-null fields appear null (turning 77.77 into null, for example).
   
   This bug can manifest for end-users after codegen fallback (say, if an expression's generated code fails to compile).
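   The pointer-stomping sequence above can be illustrated with a toy byte-buffer model. This is a deliberately simplified sketch, not Spark's actual `UnsafeRow` code or layout; the class name, constants, and method bodies here are hypothetical stand-ins for illustration only:
   ```java
   import java.nio.ByteBuffer;
   import java.nio.ByteOrder;
   import java.util.Arrays;

   // Toy model of an unsafe-row-like buffer: [null bitset: 8 bytes]
   // [fixed-width slots: 8 bytes per field][variable-length region: 16 bytes per field].
   public class DecimalNullSketch {
       static final int NULL_BITSET_BYTES = 8;
       static final int NUM_FIELDS = 2;
       static final ByteBuffer buf = ByteBuffer
               .allocate(NULL_BITSET_BYTES + NUM_FIELDS * 8 + NUM_FIELDS * 16)
               .order(ByteOrder.LITTLE_ENDIAN);

       static int slotPos(int i) { return NULL_BITSET_BYTES + i * 8; }

       // Each fixed slot holds an offset-and-size word: high 32 bits = byte offset of
       // the field's variable-length storage from the row start, low 32 bits = length.
       static void initField(int i) {
           long offset = NULL_BITSET_BYTES + NUM_FIELDS * 8 + i * 16;
           buf.putLong(slotPos(i), (offset << 32) | 16L);
       }

       // Models setNullAt on an unsafe row: sets the null bit and zeroes the whole
       // fixed slot -- destroying the offset-and-size pointer.
       static void setNullAt(int i) {
           buf.putLong(0, buf.getLong(0) | (1L << i));
           buf.putLong(slotPos(i), 0L);
       }

       // Models setDecimal for a high-precision decimal: writes the 16-byte payload
       // wherever the (possibly zeroed) pointer says, then clears the null bit.
       static void setDecimal(int i, byte[] payload) {
           int offset = (int) (buf.getLong(slotPos(i)) >>> 32); // 0 if the slot was zeroed!
           for (int j = 0; j < payload.length; j++) buf.put(offset + j, payload[j]);
           buf.putLong(0, buf.getLong(0) & ~(1L << i));
       }

       public static void main(String[] args) {
           for (int i = 0; i < NUM_FIELDS; i++) initField(i);
           setNullAt(0);                      // zeroes field 0's pointer
           byte[] payload = new byte[16];
           Arrays.fill(payload, (byte) 0x7F);
           setDecimal(0, payload);            // lands at offset 0: on top of the null bitset
           // The null-tracking bit set now contains decimal payload bytes.
           System.out.println(Long.toHexString(buf.getLong(0)));
       }
   }
   ```
   In this toy model the payload lands on the bit set instead of the variable-length region, which is why later null-bit updates and decimal reads see garbage. Calling `setDecimal` with a null value instead of `setNullAt`, as this PR does, leaves the offset-and-size word intact.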
   
   ### Does this PR introduce _any_ user-facing change?
   
   No.
   
   ### How was this patch tested?
   
   New unit tests.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] [spark] HyukjinKwon commented on pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for high-precision decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on PR #38923:
URL: https://github.com/apache/spark/pull/38923#issuecomment-1338732123

   cc @wangyum  FYI




[GitHub] [spark] bersprockets commented on a diff in pull request #38923: `InterpretedMutableProjection` should use `setDecimal` to set null values for high-precision decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
bersprockets commented on code in PR #38923:
URL: https://github.com/apache/spark/pull/38923#discussion_r1040179271


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala:
##########
@@ -75,13 +78,28 @@ class InterpretedMutableProjection(expressions: Seq[Expression]) extends Mutable
     if (!e.nullable) {
       (v: Any) => writer(mutableRow, v)
     } else {
-      (v: Any) => {
-        if (v == null) {
-          mutableRow.setNullAt(i)
-        } else {
-          writer(mutableRow, v)
-        }
+      val nullSafeWriter: (InternalRow, Any) => Unit = e.dataType match {
+        case DecimalType.Fixed(precision, _) if precision > Decimal.MAX_LONG_DIGITS =>

Review Comment:
   This won't handle the case where the decimal type is nested in a struct or array. But in that case, the target row won't be an unsafe row (since `InterpretedMutableProjection#target` would reject such a row, AFAICT), so the plain `writer` will be fine.







[GitHub] [spark] cloud-fan commented on a diff in pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
cloud-fan commented on code in PR #38923:
URL: https://github.com/apache/spark/pull/38923#discussion_r1045383454


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala:
##########
@@ -72,7 +73,7 @@ class InterpretedMutableProjection(expressions: Seq[Expression]) extends Mutable
 
   private[this] val fieldWriters: Array[Any => Unit] = validExprs.map { case (e, i) =>
     val writer = InternalRow.getWriter(i, e.dataType)
-    if (!e.nullable) {
+    if (!e.nullable || e.dataType.isInstanceOf[DecimalType]) {

Review Comment:
   This is a good catch! It would be better to add some code comments to explain it, or to refactor the code so that the codegen and interpreted code paths share some util functions for updating an internal row.





[GitHub] [spark] bersprockets commented on a diff in pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
bersprockets commented on code in PR #38923:
URL: https://github.com/apache/spark/pull/38923#discussion_r1045401210


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala:
##########
@@ -72,7 +73,7 @@ class InterpretedMutableProjection(expressions: Seq[Expression]) extends Mutable
 
   private[this] val fieldWriters: Array[Any => Unit] = validExprs.map { case (e, i) =>
     val writer = InternalRow.getWriter(i, e.dataType)
-    if (!e.nullable) {
+    if (!e.nullable || e.dataType.isInstanceOf[DecimalType]) {

Review Comment:
   I can follow up, since calendar interval has the same problem (in the case of calendar interval, the issue exists in both `InterpretedMutableProjection` and `InterpretedUnsafeProjection`).





[GitHub] [spark] HyukjinKwon commented on pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on PR #38923:
URL: https://github.com/apache/spark/pull/38923#issuecomment-1344266661

   Merged to master.
   
   
   cc @rednaxelafx and @cloud-fan for a post-hoc review when you find some time.




[GitHub] [spark] HyukjinKwon closed pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
HyukjinKwon closed pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for decimals in an unsafe row
URL: https://github.com/apache/spark/pull/38923




[GitHub] [spark] bersprockets commented on a diff in pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for high-precision decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
bersprockets commented on code in PR #38923:
URL: https://github.com/apache/spark/pull/38923#discussion_r1041454229


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/InterpretedMutableProjection.scala:
##########
@@ -67,6 +69,7 @@ class InterpretedMutableProjection(expressions: Seq[Expression]) extends Mutable
         validExprs.map(_._1.dataType).filterNot(UnsafeRow.isMutable)
           .map(_.catalogString).mkString(", "))
     mutableRow = row
+    unsafeMutableRow = if (mutableRow.isInstanceOf[UnsafeRow]) true else false;

Review Comment:
   The generated code for `MutableProjection` doesn't seem to care about the type of the target row. It just calls `setDecimal` for decimal null values regardless of whether the target is an unsafe row or not:
   ```java
   /* 092 */     // copy all the results into MutableRow
   /* 093 */
   /* 094 */     if (!isNull_6) {
   /* 095 */       mutableRow.setDecimal(0, mutableStateArray_0[0], 27);
   /* 096 */     } else {
   /* 097 */       mutableRow.setDecimal(0, null, 27);
   /* 098 */     }
   
   ```
   So this code should probably also not care. I will make the change and test.





[GitHub] [spark] bersprockets commented on pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
bersprockets commented on PR #38923:
URL: https://github.com/apache/spark/pull/38923#issuecomment-1345422304

   Thanks @HyukjinKwon @rednaxelafx 




[GitHub] [spark] bersprockets commented on pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for high-precision decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
bersprockets commented on PR #38923:
URL: https://github.com/apache/spark/pull/38923#issuecomment-1339762795

   By the way, there's a similar-looking problem with type `CalendarInterval`:
   ```
   set spark.sql.codegen.wholeStage=false;
   set spark.sql.codegen.factoryMode=NO_CODEGEN;
   
   select first(col1), last(col2) from values
   (make_interval(0, 0, 0, 7, 0, 0, 0), make_interval(17, 0, 0, 2, 0, 0, 0))
   as data(col1, col2);
   
   +---------------+---------------+
   |first(col1)    |last(col2)     |
   +---------------+---------------+
   |16 years 2 days|16 years 2 days|
   +---------------+---------------+
   ```
   In this case, however, the bug doesn't appear to be in `InterpretedMutableProjection`, but in the way the unsafe buffer is initialized, so I will address it separately.




[GitHub] [spark] rednaxelafx commented on pull request #38923: [SPARK-41395][SQL] `InterpretedMutableProjection` should use `setDecimal` to set null values for decimals in an unsafe row

Posted by GitBox <gi...@apache.org>.
rednaxelafx commented on PR #38923:
URL: https://github.com/apache/spark/pull/38923#issuecomment-1344558867

   Post-hoc review: LGTM, this is a good catch. 



