Posted to issues@carbondata.apache.org by "dhatchayani (JIRA)" <ji...@apache.org> on 2019/07/08 09:07:00 UTC

[jira] [Commented] (CARBONDATA-3451) Select aggregation query with filter fails on hive table with decimal type using CarbonHiveSerDe in Spark 2.1

    [ https://issues.apache.org/jira/browse/CARBONDATA-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880140#comment-16880140 ] 

dhatchayani commented on CARBONDATA-3451:
-----------------------------------------

Please check this again. It has already been fixed in [CARBONDATA-3441|https://issues.apache.org/jira/browse/CARBONDATA-3441].
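
For reference, a quick way to confirm on a build that contains the CARBONDATA-3441 fix is to re-run one of the reported queries from Hive Beeline, for example (query copied from the steps below; a successful result is the expected behavior per that fix, not a verified output):

select min(c3_Decimal),max(c3_Decimal),sum(c3_Decimal),avg(c3_Decimal), count(c3_Decimal), variance(c3_Decimal) from test_boundary1 where exp(c1_int)=0.0 or exp(c1_int)=1.0;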

> Select aggregation query with filter fails on hive table with decimal type using CarbonHiveSerDe in Spark 2.1
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-3451
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3451
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-query
>    Affects Versions: 1.6.0
>         Environment: Spark 2.1
>            Reporter: Chetan Bhat
>            Priority: Minor
>
> Test steps:
> From Spark 2.1 Beeline, the user creates a carbon table and loads data:
>  create table Test_Boundary (c1_int int,c2_Bigint Bigint,c3_Decimal Decimal(38,38),c4_double double,c5_string string,c6_Timestamp Timestamp,c7_Datatype_Desc string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES('inverted_index'='c1_int,c2_Bigint,c5_string,c6_Timestamp','sort_columns'='c1_int,c2_Bigint,c5_string,c6_Timestamp');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/Test_Data1.csv' INTO table Test_Boundary OPTIONS('DELIMITER'=',','QUOTECHAR'='','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='');
> From Hive Beeline, the user creates a Hive table over the already created carbon table using CarbonHiveSerDe:
> CREATE TABLE IF NOT EXISTS Test_Boundary1 (c1_int int,c2_Bigint Bigint,c3_Decimal Decimal(38,38),c4_double double,c5_string string,c6_Timestamp Timestamp,c7_Datatype_Desc string) ROW FORMAT SERDE 'org.apache.carbondata.hive.CarbonHiveSerDe' WITH SERDEPROPERTIES ('mapreduce.input.carboninputformat.databaseName'='default','mapreduce.input.carboninputformat.tableName'='Test_Boundary') STORED AS INPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonInputFormat' OUTPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonOutputFormat' LOCATION 'hdfs://hacluster//user/hive/warehouse/carbon.store/default/test_boundary';
> The user executes the below select aggregation queries on the Hive table:
> select min(c3_Decimal),max(c3_Decimal),sum(c3_Decimal),avg(c3_Decimal) , count(c3_Decimal), variance(c3_Decimal) from test_boundary1 where exp(c1_int)=0.0 or exp(c1_int)=1.0;
> select min(c3_Decimal),max(c3_Decimal),sum(c3_Decimal),avg(c3_Decimal) , count(c3_Decimal), variance(c3_Decimal) from test_boundary1 where log(c1_int,1)=0.0 or log(c1_int,1) IS NULL;
> select min(c3_Decimal),max(c3_Decimal),sum(c3_Decimal),avg(c3_Decimal) , count(c3_Decimal), variance(c3_Decimal) from test_boundary1 where pmod(c1_int,1)=0 or pmod(c1_int,1)IS NULL;
>  
> Actual Result: The select aggregation queries with filter fail on the Hive table with decimal type using CarbonHiveSerDe in Spark 2.1.
> Expected Result: The select aggregation queries with filter should succeed on the Hive table with decimal type using CarbonHiveSerDe in Spark 2.1.
>  


