Posted to dev@hive.apache.org by "slim bouguerra (JIRA)" <ji...@apache.org> on 2018/05/09 03:21:00 UTC

[jira] [Created] (HIVE-19474) Decimal type should be cast as part of the CTAS or INSERT clause.

slim bouguerra created HIVE-19474:
-------------------------------------

             Summary: Decimal type should be cast as part of the CTAS or INSERT clause.
                 Key: HIVE-19474
                 URL: https://issues.apache.org/jira/browse/HIVE-19474
             Project: Hive
          Issue Type: Bug
            Reporter: slim bouguerra
            Assignee: slim bouguerra


HIVE-18569 introduced a runtime config variable that allows indexing a Decimal column as Double. This leads to a messy state: Hive metadata still thinks the column is decimal while it is actually stored as double. Because the Hive metadata for the column is Decimal, the logical optimizer will not push down aggregates. I tried to fix this by adding some logic to the application, but it makes the code very clumsy, with a lot of branches. Instead, I propose to revert that patch and let the user introduce an explicit cast. This is better because the metadata reflects the actual storage type, aggregate push-down kicks in, and no config is needed.

cc [~ashutoshc] and [~nishantbangarwa]

You can see the difference with the following DDL:

{code}
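-- Base table with a DECIMAL(10,2) column; it feeds both CTAS statements below.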

create table test_base_table(`timecolumn` timestamp, `interval_marker` string, `num_l` DECIMAL(10,2));
insert into test_base_table values ('2015-03-08 00:00:00', 'i1-start', 4.5);
set hive.druid.approx.result=true;
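-- CTAS with an explicit cast of `num_l` to double: the new table's column is declared
-- as double, so the Hive metadata matches what Druid actually stores.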
CREATE TABLE druid_test_table
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.segment.granularity" = "DAY")
AS
select cast(`timecolumn` as timestamp with local time zone) as `__time`, `interval_marker`, cast(`num_l` as double)
FROM test_base_table;
describe druid_test_table;
explain select sum(num_l), min(num_l) FROM druid_test_table;
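-- Same CTAS without the cast: with the config introduced by HIVE-18569 the value is
-- indexed as a double in Druid while Hive metadata still reports DECIMAL(10,2) (see the
-- description above), so aggregate push-down is blocked.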
CREATE TABLE druid_test_table_2
STORED BY 'org.apache.hadoop.hive.druid.DruidStorageHandler'
TBLPROPERTIES ("druid.segment.granularity" = "DAY")
AS
select cast(`timecolumn` as timestamp with local time zone) as `__time`, `interval_marker`, `num_l`
FROM test_base_table;
describe druid_test_table_2;
explain select sum(num_l), min(num_l) FROM druid_test_table_2;

{code}
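
The summary above also mentions the INSERT clause. As a minimal sketch (not part of the reported repro), the same explicit cast on the INSERT path could look like the following, reusing the tables above and assuming INSERT INTO is supported for the Druid storage handler table and that druid_test_table was created by the cast-based CTAS, so its metric column is already double:

{code}
-- Hypothetical example: append more rows to the Druid-backed table created above.
-- The DECIMAL column is cast to double explicitly so the inserted values match the
-- double column declared by the CTAS, and no runtime config is needed.
insert into test_base_table values ('2015-03-09 00:00:00', 'i2-start', 7.25);
insert into druid_test_table
select cast(`timecolumn` as timestamp with local time zone) as `__time`, `interval_marker`, cast(`num_l` as double)
FROM test_base_table where `interval_marker` = 'i2-start';
{code}

With the cast in the select list, the INSERT path behaves the same as the CTAS path and the table metadata stays consistent with what Druid actually stores.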



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)