Posted to issues@carbondata.apache.org by "Neha Bhardwaj (JIRA)" <ji...@apache.org> on 2017/05/02 11:44:04 UTC

[jira] [Created] (CARBONDATA-1011) select * doesn't work after adding column of date type

Neha Bhardwaj created CARBONDATA-1011:
-----------------------------------------

             Summary: select * doesn't work after adding column of date type
                 Key: CARBONDATA-1011
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1011
             Project: CarbonData
          Issue Type: Bug
          Components: data-query
         Environment: spark 2.1
            Reporter: Neha Bhardwaj
            Priority: Minor
         Attachments: 3000_UniqData.csv

Select * from <tablename> fails after a new column of date type (with a default value) has been added to the table.


Steps to reproduce:

CREATE TABLE uniqdata2 (
  CUST_ID int, CUST_NAME String, ACTIVE_EMUI_VERSION string,
  DOB timestamp, DOJ timestamp,
  BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint,
  DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10),
  Double_COLUMN1 double, Double_COLUMN2 double,
  INTEGER_COLUMN1 int
) STORED BY 'org.apache.carbondata.format'
TBLPROPERTIES ("TABLE_BLOCKSIZE"="256 MB");

LOAD DATA INPATH 'hdfs://localhost:54310/Files/3000_UniqData.csv' INTO TABLE uniqdata2
OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_ACTION'='FORCE',
'FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');

Query before alter:
select * from uniqdata2;    -- works fine

ALTER TABLE uniqdata2 ADD COLUMNS(date1 date) TBLPROPERTIES('DEFAULT.VALUE.date1'='2017-01-01');

Query after alter:
select * from uniqdata2;

Expected Output - All the data in the table is displayed.

Actual Output -
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 201.0 failed 4 times, most recent failure: Lost task 0.3 in stage 201.0 (TID 402, 192.168.1.7, executor 0): java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
	at org.apache.carbondata.core.scan.collector.impl.RestructureBasedVectorResultCollector.fillDirectDictionaryData(RestructureBasedVectorResultCollector.java:167)
	at org.apache.carbondata.core.scan.collector.impl.RestructureBasedVectorResultCollector.fillDataForNonExistingDimensions(RestructureBasedVectorResultCollector.java:130)
	at org.apache.carbondata.core.scan.collector.impl.RestructureBasedVectorResultCollector.collectVectorBatch(RestructureBasedVectorResultCollector.java:112)
	at org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.processNextBatch(DataBlockIteratorImpl.java:65)
	at org.apache.carbondata.core.scan.result.iterator.VectorDetailQueryResultIterator.processNextBatch(VectorDetailQueryResultIterator.java:46)
	at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextBatch(VectorizedCarbonRecordReader.java:251)
	at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextKeyValue(VectorizedCarbonRecordReader.java:141)
	at org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1.hasNext(CarbonScanRDD.scala:221)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
	at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Driver stacktrace: (state=,code=0)
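
Analysis (an assumption from the trace, not verified against the source): the failure occurs in fillDirectDictionaryData() while filling the default value of the newly added column into the column vector for old blocks that do not contain it. Spark's vectorized reader stores DateType values as int (days since epoch) but TimestampType values as long, so if the restructure fill path unconditionally casts the converted default value to Long (as it would for a timestamp), an added date column whose default converts to an Integer would produce exactly this ClassCastException. Below is a minimal, self-contained Java sketch of that suspected pattern and a type-aware fix; every class and method name in it is an illustrative stand-in, not CarbonData's real API.

// DefaultValueFillSketch.java -- illustrative only; NOT CarbonData source.
// Demonstrates why an unconditional (Long) cast fails for a DATE default.
public class DefaultValueFillSketch {

  // Stand-ins (assumptions) for the engine's data types and column vector.
  enum DataType { DATE, TIMESTAMP }

  static class ColumnVector {
    final long[] data;
    ColumnVector(int size) { data = new long[size]; }
    void putInt(int row, int value)   { data[row] = value; } // DATE: days since epoch
    void putLong(int row, long value) { data[row] = value; } // TIMESTAMP: long value
  }

  // Suspected buggy pattern: always casting the boxed default to Long.
  static void fillBuggy(ColumnVector v, int rows, Object defaultValue) {
    for (int r = 0; r < rows; r++) {
      v.putLong(r, (Long) defaultValue); // CCE when defaultValue is an Integer
    }
  }

  // Type-aware fill: branch on the added column's data type before casting.
  static void fillFixed(ColumnVector v, int rows, Object defaultValue, DataType t) {
    for (int r = 0; r < rows; r++) {
      if (t == DataType.DATE) {
        v.putInt(r, (Integer) defaultValue);  // date default boxes as Integer
      } else {
        v.putLong(r, (Long) defaultValue);    // timestamp default boxes as Long
      }
    }
  }

  public static void main(String[] args) {
    // '2017-01-01' as days since 1970-01-01, boxed the way a DATE default would be.
    Object dateDefault = (int) java.time.LocalDate.of(2017, 1, 1).toEpochDay();
    ColumnVector v = new ColumnVector(4);
    try {
      fillBuggy(v, 4, dateDefault);
    } catch (ClassCastException e) {
      System.out.println("Reproduced: " + e); // Integer cannot be cast to Long
    }
    fillFixed(v, 4, dateDefault, DataType.DATE); // works
    System.out.println("Filled date default (days since epoch): " + v.data[0]);
  }
}

If this is indeed the cause, the fix would be to branch on the added column's data type in RestructureBasedVectorResultCollector before casting, or to convert date defaults to the vector's int representation up front.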
