Posted to issues@spark.apache.org by "Bruce Robbins (Jira)" <ji...@apache.org> on 2022/12/15 21:15:00 UTC

[jira] [Created] (SPARK-41535) InterpretedUnsafeProjection and InterpretedMutableProjection can corrupt unsafe buffer when used with calendar interval data

Bruce Robbins created SPARK-41535:
-------------------------------------

             Summary: InterpretedUnsafeProjection and InterpretedMutableProjection can corrupt unsafe buffer when used with calendar interval data
                 Key: SPARK-41535
                 URL: https://issues.apache.org/jira/browse/SPARK-41535
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 3.4.0
            Reporter: Bruce Robbins


This returns the wrong answer:
{noformat}
set spark.sql.codegen.wholeStage=false;
set spark.sql.codegen.factoryMode=NO_CODEGEN;

select first(col1), last(col2) from values
(make_interval(0, 0, 0, 7, 0, 0, 0), make_interval(17, 0, 0, 2, 0, 0, 0))
as data(col1, col2);

+---------------+---------------+
|first(col1)    |last(col2)     |
+---------------+---------------+
|16 years 2 days|16 years 2 days|
+---------------+---------------+
{noformat}
In the above case, {{TungstenAggregationIterator}} uses {{InterpretedUnsafeProjection}} to create the aggregation buffer and then initializes all of its fields to null. {{InterpretedUnsafeProjection}} incorrectly calls {{UnsafeRowWriter#setNullAt}}, rather than {{UnsafeRowWriter#write}}, for the two calendar interval fields. As a result, the writer never allocates memory from the variable-length region for the two intervals, and their pointers in the fixed-length region are left as zero.

Later, when {{InterpretedMutableProjection}} attempts to update the first field, {{UnsafeRow#setInterval}} picks up the zero pointer and stores the interval data on top of the null-tracking bit set. The call to {{UnsafeRow#setInterval}} for the second field also stomps on the null-tracking bit set. Subsequent updates to the null-tracking bit set (e.g., calls to {{setNotNullAt}}) then further corrupt the interval data, turning {{interval 17 years 2 days}} into {{interval 16 years 2 days}}.
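For illustration, here is a rough Scala sketch (not part of the original repro) of why the zeroed-out pointer is dangerous. It drives {{UnsafeRowWriter}} and {{UnsafeRow}} directly; the internal method names and signatures below are from memory, so treat it as a sketch rather than a verified test:
{noformat}
import org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter
import org.apache.spark.unsafe.types.CalendarInterval

// Two calendar interval fields, as in the aggregation buffer above.
val writer = new UnsafeRowWriter(2)
writer.reset()

// What InterpretedUnsafeProjection does today for a null interval: it only
// flips the null bit, leaving the 8-byte offset-and-size slot zeroed and
// reserving nothing in the variable-length region.
writer.setNullAt(0)
writer.setNullAt(1)

// What the codegen path does instead (and presumably what a fix needs): write
// a placeholder so the fixed-length slot points at reserved space.
// writer.write(0, null.asInstanceOf[CalendarInterval])

val row = writer.getRow()

// Later, when the aggregation updates the buffer in place, setInterval reads
// the zero offset-and-size word, computes an address at the very start of the
// row, and stores the interval on top of the null-tracking bit set.
row.setInterval(0, new CalendarInterval(0, 2, 0))
{noformat}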

Even if you fix the above bug in {{InterpretedUnsafeProjection}} so that the aggregation buffer is created correctly, {{InterpretedMutableProjection}} has a bug similar to SPARK-41395, except this time with calendar interval data:
{noformat}
set spark.sql.codegen.wholeStage=false;
set spark.sql.codegen.factoryMode=NO_CODEGEN;

select first(col1), last(col2), max(col3) from values
(null, null, 1),
(make_interval(0, 0, 0, 7, 0, 0, 0), make_interval(17, 0, 0, 2, 0, 0, 0), 3)
as data(col1, col2, col3);

+---------------+---------------+---------+
|first(col1)    |last(col2)     |max(col3)|
+---------------+---------------+---------+
|16 years 2 days|16 years 2 days|3        |
+---------------+---------------+---------+
{noformat}
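For completeness, the same repro can be driven from the Scala API instead of spark-sql; this is just the SQL above wrapped in {{spark.sql}}, with the two session configs from the repro ({{spark.sql.codegen.factoryMode}} is an internal, testing-oriented flag, so being able to set it at runtime like this is an assumption):
{noformat}
// Same repro as above, from spark-shell.
spark.conf.set("spark.sql.codegen.wholeStage", "false")
spark.conf.set("spark.sql.codegen.factoryMode", "NO_CODEGEN")

spark.sql("""
  select first(col1), last(col2), max(col3) from values
    (null, null, 1),
    (make_interval(0, 0, 0, 7, 0, 0, 0), make_interval(17, 0, 0, 2, 0, 0, 0), 3)
  as data(col1, col2, col3)
""").show(false)
// Prints the corrupted "16 years 2 days" values shown above rather than the
// correct results.
{noformat}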
These two bugs can be exercised during codegen fallback. Take, for example, this case where I forced codegen to fail for the {{Greatest}} expression:
{noformat}
spark-sql> select first(col1), last(col2), max(col3) from values
(null, null, 1),
(make_interval(0, 0, 0, 7, 0, 0, 0), make_interval(17, 0, 0, 2, 0, 0, 0), 3)
as data(col1, col2, col3);

22/12/15 13:06:23 ERROR CodeGenerator: failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 70, Column 1: ';' expected instead of 'if'
...
22/12/15 13:06:24 WARN MutableProjection: Expr codegen error and falling back to interpreter mode
java.util.concurrent.ExecutionException: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 78, Column 1: failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 78, Column 1: ';' expected instead of 'boolean'
...
16 years 2 days	16 years 2 days	3
Time taken: 5.852 seconds, Fetched 1 row(s)
spark-sql> 
{noformat}
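Note that the forced failure is just a stand-in for any genuine codegen failure: to the best of my knowledge the projection factories default to {{spark.sql.codegen.factoryMode=FALLBACK}}, so a real compile error would silently route the query through the same interpreted projections without the user setting anything. A minimal sketch of that default, under the same assumption that the internal flag is settable at session level:
{noformat}
// FALLBACK (believed to be the default): try codegen first and, on a compile
// error like the one logged above, fall back to InterpretedMutableProjection /
// InterpretedUnsafeProjection, i.e. the code paths containing these bugs.
spark.conf.set("spark.sql.codegen.factoryMode", "FALLBACK")
{noformat}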



