Posted to issues@carbondata.apache.org by "manoj mathpal (JIRA)" <ji...@apache.org> on 2017/05/02 12:19:04 UTC
[jira] [Created] (CARBONDATA-1012) Decimal value is not supported when we select the Query with integer data type.
manoj mathpal created CARBONDATA-1012:
-----------------------------------------
Summary: Decimal value is not supported when we select the Query with integer data type.
Key: CARBONDATA-1012
URL: https://issues.apache.org/jira/browse/CARBONDATA-1012
Project: CarbonData
Issue Type: Bug
Components: sql
Affects Versions: 1.1.0
Environment: SPARK 2.1
Reporter: manoj mathpal
Attachments: Test_Data1.csv, Test_Data1_h1.csv
Steps to reproduce (Hive):
1. Create table:
create table Test_Boundary_h3 (c1_int int,c2_Bigint Bigint,c3_Decimal Decimal(38,30),c4_double double,c5_string string,c6_Timestamp Timestamp,c7_Datatype_Desc string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ;
2. Load data into table:
load data local inpath '/home/manoj/Downloads/TestData/Data/Test_Data1_h1.csv' OVERWRITE INTO TABLE Test_Boundary_h3 ;
3. Execute query:
select c1_int from Test_Boundary_h3 where c1_int in (2.147483647E9,2345.0,1234.0) ;
Result:
+-------------+--+
| c1_int |
+-------------+--+
| 1234 |
| 2345 |
| 2147483647 |
| 2147483647 |
+-------------+--+
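Hive accepts this filter because it implicitly widens the INT column to DOUBLE before comparing it against the decimal literals. The query above is effectively rewritten as the following (an illustrative sketch, assuming Hive's standard numeric type promotion; not part of the original report):

```sql
-- Equivalent form after Hive's implicit numeric promotion (illustrative):
select c1_int
from Test_Boundary_h3
where cast(c1_int as double) in (2.147483647E9, 2345.0, 1234.0);
```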
.............................................................................................................................
Steps to reproduce (CarbonData):
1. Create table:
create table Test_Boundary (c1_int int,c2_Bigint Bigint,c3_Decimal Decimal(38,30),c4_double double,c5_string string,c6_Timestamp Timestamp,c7_Datatype_Desc string) STORED BY 'org.apache.carbondata.format' ;
2. Load data into table:
LOAD DATA INPATH 'HDFS_URL/BabuStore/Data/Test_Data1.csv' INTO table Test_Boundary OPTIONS('DELIMITER'=',','QUOTECHAR'='','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='') ;
3. Execute query:
select c1_int from Test_Boundary where c1_int in (2.147483647E9,2345.0,1234.0) ;
Result:
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 96.0 failed 1 times, most recent failure: Lost task 0.0 in stage 96.0 (TID 302, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: java.util.concurrent.ExecutionException: org.apache.carbondata.core.scan.expression.exception.FilterUnsupportedException
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:105)
at org.apache.spark.scheduler.Task.run(Task.scala:112)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace: (state=,code=0)
Mismatched behavior between Hive and CarbonData: Hive evaluates the decimal IN-list against the INT column and returns rows, while CarbonData fails the query with FilterUnsupportedException.
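Until the decimal-to-int filter is supported, a possible workaround (an untested sketch, not from the original report) is to supply integral literals so the IN filter stays INT-typed and no decimal comparison is pushed down:

```sql
-- Workaround sketch: use integer literals matching the column's INT type
select c1_int
from Test_Boundary
where c1_int in (2147483647, 2345, 1234);
```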
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)