Posted to dev@drill.apache.org by GitBox <gi...@apache.org> on 2021/06/22 07:41:18 UTC

[GitHub] [drill] Leon-WTF edited a comment on pull request #2259: DRILL-7931: Rowtype mismatch in DrillReduceAggregatesRule

Leon-WTF edited a comment on pull request #2259:
URL: https://github.com/apache/drill/pull/2259#issuecomment-864843790


   Thanks Paul for your detailed note. I dug deeper into the code, and below are my findings for your questions. Please correct me if anything is wrong.
   1. There are two sum functions: SqlSumAggFunction and SqlSumEmptyIsZeroAggFunction. From TypeInferenceUtils.DrillSumSqlReturnTypeInference, we can see that the return type of SqlSumAggFunction is non-nullable only when there is a group by and the input type is non-nullable; otherwise it is nullable (I don't know why). SqlSumAggFunction can be converted to SqlSumEmptyIsZeroAggFunction when the return type is non-nullable, or to "case count(c) when 0 then null else sum0(c) end" when the return type is nullable. The return type of SqlSumEmptyIsZeroAggFunction should always be non-nullable.
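   The conversion above can be sketched outside Drill. This is a minimal, hypothetical Java model of the rewrite's semantics (class and method names are invented; this is not Drill's generated code):

```java
import java.util.Arrays;
import java.util.Objects;

// Hypothetical sketch (not Drill code): the semantics of rewriting a nullable
// SUM(c) into CASE COUNT(c) WHEN 0 THEN NULL ELSE $SUM0(c) END.
public class SumRewriteSketch {
    // $SUM0 ("sum, empty is zero"): never null, returns 0 when no rows match.
    static long sum0(Integer[] col) {
        return Arrays.stream(col).filter(Objects::nonNull)
            .mapToLong(Integer::longValue).sum();
    }

    static long count(Integer[] col) {
        return Arrays.stream(col).filter(Objects::nonNull).count();
    }

    // Nullable SUM expressed through COUNT and $SUM0, as in the rewrite.
    static Long sum(Integer[] col) {
        return count(col) == 0 ? null : sum0(col);
    }

    public static void main(String[] args) {
        System.out.println(sum(new Integer[] {1, 2, null}));  // 3
        System.out.println(sum(new Integer[] {null, null}));  // null
        System.out.println(sum0(new Integer[] {null, null})); // 0
    }
}
```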
   2. In RexBuilder.addAggCall, we can see that it uses the equals and hashCode methods of AggregateCall to decide whether a call already exists in aggCallMapping, but the return type is not checked in those methods. That is why the code finds the nullable sum operator and discards the non-nullable one.
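   The effect can be illustrated with a toy key class (field names invented, not Calcite's actual AggregateCall): when equals/hashCode omit the return type, a nullable and a non-nullable call that are otherwise identical compare equal, so the map lookup reuses whichever entry was registered first:

```java
import java.util.Objects;

// Hypothetical sketch, not Calcite's AggregateCall: a lookup key whose
// equals/hashCode ignore nullability, mirroring the behavior described above.
public class AggCallKeySketch {
    static final class CallKey {
        final String function;   // e.g. "SUM"
        final int argIndex;      // ordinal of the input column
        final boolean nullable;  // deliberately NOT part of equals/hashCode

        CallKey(String function, int argIndex, boolean nullable) {
            this.function = function;
            this.argIndex = argIndex;
            this.nullable = nullable;
        }

        @Override public boolean equals(Object o) {
            if (!(o instanceof CallKey)) return false;
            CallKey k = (CallKey) o;
            return function.equals(k.function) && argIndex == k.argIndex;
        }

        @Override public int hashCode() {
            return Objects.hash(function, argIndex);
        }
    }

    public static void main(String[] args) {
        CallKey nullableSum = new CallKey("SUM", 1, true);  // stddev's sum
        CallKey bareSum = new CallKey("SUM", 1, false);     // plain SUM(col2)
        // The two keys collide, so the nullable entry is found and the
        // non-nullable call is never registered separately.
        System.out.println(nullableSum.equals(bareSum)); // true
    }
}
```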
   3. When we call RelOptRuleCall.transformTo, it performs a rowtype equality check between the new and old RelNode; that is where the exception is thrown. The return type of the "bare" sum operation is non-nullable because there is a group by and the input type is non-nullable. The return type of the sum function from stddev is nullable because it is changed to nullable at line 516 of DrillReduceAggregatesRule (I don't know why).
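   A toy model of that equality check (field types shown as plain strings, not Calcite's RelDataType machinery) shows why a nullability difference in a single field is enough to fail:

```java
import java.util.List;

// Hypothetical sketch of the rowtype check in RelOptRuleCall.transformTo:
// field types must match exactly, including nullability, so "BIGINT" vs
// "BIGINT NOT NULL" in the last field is a mismatch.
public class RowTypeCheckSketch {
    static boolean rowTypesEqual(List<String> expected, List<String> actual) {
        return expected.equals(actual);
    }

    public static void main(String[] args) {
        List<String> set = List.of("CHAR(3) NOT NULL", "BIGINT", "BIGINT",
            "BIGINT NOT NULL", "BIGINT NOT NULL");
        List<String> newRel = List.of("CHAR(3) NOT NULL", "BIGINT", "BIGINT",
            "BIGINT NOT NULL", "BIGINT");
        System.out.println(rowTypesEqual(set, newRel)); // false: $f4 differs
    }
}
```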
   
   I made some new modifications based on the above discoveries. I also think we should use DrillCalciteSqlAggFunctionWrapper consistently for Calcite's SqlAggFunction, to avoid class-type inequality when comparing AggregateCalls.
   
   We now have two choices to fix the bug. One solution, as in my previous modification, is to compare the return type of AggregateCall when checking whether it already exists; this results in two otherwise identical functions that differ only in return type and could actually be combined, but it has fewer side effects. The other solution, as in my current modification, is to make the sum0 operation's return type nullability always false (i.e. always non-nullable). Which one do you prefer?
   
   In addition, sum returns an empty result when there is a group by, and otherwise returns null when there are no rows.
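   That behavior is consistent with keeping the accumulator itself non-nullable and introducing the null only in the finalize step. A hypothetical sketch of that design (names invented, not Drill's generated code):

```java
// Hypothetical sketch (names invented): a sum accumulator that is never null
// internally; nullability appears only when the final value is produced.
public class SumAccumulatorSketch {
    private long sum;    // the accumulator itself is non-nullable
    private long count;  // how many non-NULL inputs were seen

    // NULL inputs are ignored, per standard SQL aggregate semantics.
    void add(Integer value) {
        if (value != null) {
            sum += value;
            count++;
        }
    }

    // NULL only when no non-NULL rows arrived; otherwise the plain sum.
    Long finalizeValue() {
        return count == 0 ? null : sum;
    }

    public static void main(String[] args) {
        SumAccumulatorSketch acc = new SumAccumulatorSketch();
        acc.add(3);
        acc.add(null);
        acc.add(4);
        System.out.println(acc.finalizeValue()); // 7

        SumAccumulatorSketch empty = new SumAccumulatorSketch();
        System.out.println(empty.finalizeValue()); // null
    }
}
```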
   
   > Thanks for finding and fixing this bug. It is a very difficult one. This area of code is complex.
   > 
   > Revision to my earlier comment: I had assumed that the problem was in the run time part of the code, since that is where this stuff usually fails. However, I double-checked the code that was changed. Turns out the fix is in the _planner_ not the runtime. So, most of the stuff I say below may not actually be relevant to this bug. (Going to leave it anyway as background.)
   > 
   > My knowledge of the Calcite planner is not as deep as it should be so I can't tell if the fix is correct. I'm inclined to approve the fix because it works. Also, a deeper investigation would require quite a bit of work.
   > 
   > What I find confusing is that the type conflict can _only_ occur at runtime since Drill is unique in that the planner does not know about data types (that is what "schema on read" is all about.) So, why would the planner do something that would cause the runtime (which does know about types) to throw an error? And, why are the aggregate accumulators nullable (see below)?
   > 
   > _Original note, somewhat edited:_
   > 
   > The description suggests that the code is attempting to combine the sum operation for `stddev()` with the sum operation for `sum()`. I believe that doing so is a bug unless there was some specific optimization attempt to share common expressions. Normally the planner would perform this kind of optimization. The trick, for Drill, is that the planner does not know the data type, so it must be the runtime that handles such details. I rather doubt that the original developers had gotten to the level of optimization in code generation where they attempted to share subexpressions.
   > 
   > However, since the fix is in the planner, I wonder if Calcite is attempting to do what it would normally do (combine subexpressions), but doing so is wrong for Drill because we don't yet know the types? The only thing we do know is that if two expressions both work on column `c`, then their types are the same, even if the planner does not know what the actual type will turn out to be.
   > 
   > Then there is the issue of the nullable `bigint` data type. According to the [SQL Server docs](https://docs.microsoft.com/en-us/sql/t-sql/functions/stdev-transact-sql?view=sql-server-ver15), `stddev` will ignore `NULL` values and will return `NULL` only if no non-`NULL` rows are present. Probably other engines work the same way. The [Drill docs](http://drill.apache.org/docs/aggregate-and-aggregate-statistical/) do not say how `NULL`s are handled, but the original developers mostly followed [Postgres](https://www.postgresql.org/docs/9.1/functions-aggregate.html), among others. Postgres also uses the "ignore NULL" behavior.
   > 
   > Given this, there should _never_ be a nullable anything in the accumulators, even if the input type _is_ nullable. Instead, to implement the "`NULL` if no values" behavior, the "finalize" step (the one that computes the final output value), should return a nullable value if the count is zero. Note that Postgres returns `NULL` for `sum()` when there are no rows instead of 0. This is odd, I'm not sure if that is what Drill does. (All these details should be in the Drill docs, but are not. If we figure out how this works in Drill, we should file a Doc. JIRA ticket to get the information added.)
   > 
   > Next, consider how the accumulators are created. Remember that Drill does not know the data type until the first row arrives. At that point, Drill has a data type and can set up the needed accumulators and generate the code for the operations. My understanding is that the accumulators and code will remain in place for the rest of the query -- unless the operation receives an `OK_NEW_SCHEMA` (schema change) from the upstream operator. At that point, Drill is supposed to handle the change. (A `sum(c)` first saw `c` as `INT`, but later saw it as `DOUBLE`, say.) I doubt if this actually works: we'd have to somehow compute the common type of the old and new types, convert the accumulators, etc. Maybe it works, but I'd not bet on it. (Most schema change events are not handled in most operators -- they are an intractable problem for which there is no good _a priori_ solution. It is an ongoing embarrassment that Drill promises to do things which it cannot do, even in principle.)
   > 
   > Let's look at the error message to see what that tells us. Query:
   > 
   > ```sql
   > select col1, stddev(col2), SUM(col2)
   > FROM (values ('UA', 3), ('USA', 2), ('UA', 3), ('USA', 5), ('USA', 1), ('UA', 9)) x(col1, col2)
   > GROUP BY col1;
   > ```
   > 
   > Since the data is from a `VALUES` clause, the schema does not change, so we can rule out that as a possible cause. Instead, the problem is probably a bug within Drill itself.
   > 
   > ```
   > rowtype of new rel:
   > RecordType(CHAR(3) NOT NULL col1, BIGINT $f1, BIGINT $f2, BIGINT NOT NULL $f3, BIGINT $f4) NOT NULL
   > rowtype of set:
   > RecordType(CHAR(3) NOT NULL col1, BIGINT $f1, BIGINT $f2, BIGINT NOT NULL $f3, BIGINT NOT NULL $f4) NOT NULL
   > ```
   > 
   > We see that the `$f1` is not a very descriptive name. It probably means "function 1". The pull request description suggests that:
   > 
   > * `$f1` is `STDDEV.sum(col2)`
   > * `$f2` is `STDDEV.sum(col2*col2)`
   > * `$f3` is `STDDEV.count(col2)`
   > * `$f4` is `SUM(col2)`
   > 
   > Where `STDDEV.sum` is something I just made up to mean "the `sum()` function that implements `STDDEV()`". We can immediately notice some issues:
   > 
   > * _None_ of the functions should be nullable, all should be `NOT NULL` because of the way that SQL handles `NULL`s: they are ignored, so we only accumulate non-null values. The `NULL`ness will be added to the output column.
   > * So, the "new rel" is less correct than the "rowtype of set", even though both are wrong.
   > 
   > So, where does this leave us? Again, if the planner fix works, then the result is less broken than the current code, which is good.
   > 
   > On the other hand, longer term, the above suggests that we should check:
   > 
   > * Why did code generation decide to create a nullable accumulator? According to the rules of SQL, discussed above, the accumulators should all be non-nullable.
   > * How does the code find the accumulator? Each should have a unique name, derived, somehow, from the input expression. The `$f1` symbols are internal names. They clearly do not identify the function in any detail. If we use them as a key, we have to ensure that we do the same mapping from expression to `$fn` each time. Otherwise, the names should be more unique such as (making something up)  `$STDDEV$SUM$C`. The question is: are the names getting confused, or is the problem elsewhere?
   > * If the planner does decide to combine the `sum()` for `stddev()` with the "bare" `sum()` operation, why is that a problem, since they both use the same column `c`, and thus the two have the same input data type?
   > 
   > Again, thanks for looking into this; this is a very complex part of the code, so we have to work carefully.

