Posted to dev@calcite.apache.org by "Liao Xintao (JIRA)" <ji...@apache.org> on 2017/11/08 07:41:00 UTC
[jira] [Created] (CALCITE-2042) Compatible types of columns combined via set operators may need explicit type-casting conversion
Liao Xintao created CALCITE-2042:
------------------------------------
Summary: Compatible types of columns combined via set operators may need explicit type-casting conversion
Key: CALCITE-2042
URL: https://issues.apache.org/jira/browse/CALCITE-2042
Project: Calcite
Issue Type: Bug
Reporter: Liao Xintao
Assignee: Julian Hyde
Priority: Minor
As is known, columns combined via set operators (such as UNION, EXCEPT and INTERSECT) need not have exactly the same data type, so a query like the following is valid:
{code:sql}
select sum(bonus) from (
  (select sal as bonus from emp where empno < 100)
  UNION ALL
  (select sal * 0.5 as bonus from emp where empno >= 100)
)
{code}
In this case, the data types of the two "bonus" columns are INTEGER and DECIMAL(12, 1) respectively, and the UNION operator derives DECIMAL(12, 1) as its return type. The plan looks fine until I run it on Flink: Flink's UnionOperator validates the data types of its two inputs and throws an exception because the two input types are not exactly the same.
Underlying systems based on Calcite, like Flink, could adapt themselves to this kind of plan by modifying their own code. In my opinion, however, it would be better for the plan itself to guarantee consistent result types across the inputs of a set operator, (maybe) by rewriting the plan or adding an explicit type-casting conversion, as is already done when converting <case expression>.
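For illustration, the rewritten plan I have in mind would correspond to a query where the narrower input is cast up to the derived type, so both inputs of the UNION carry identical types (a sketch, assuming DECIMAL(12, 1) is the type the validator derives):

{code:sql}
select sum(bonus) from (
  -- explicit cast so this branch matches the derived UNION type
  (select cast(sal as decimal(12, 1)) as bonus from emp where empno < 100)
  UNION ALL
  (select sal * 0.5 as bonus from emp where empno >= 100)
)
{code}

With such a cast inserted by the planner, a strict consumer like Flink's UnionOperator would see two inputs of exactly the same row type.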
Any feedback or advice is greatly appreciated.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)