Posted to issues@carbondata.apache.org by "Chetan Bhat (JIRA)" <ji...@apache.org> on 2018/08/23 07:05:00 UTC

[jira] [Assigned] (CARBONDATA-2865) Alter table compact throwing error for old carbon partition table

     [ https://issues.apache.org/jira/browse/CARBONDATA-2865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chetan Bhat reassigned CARBONDATA-2865:
---------------------------------------

    Assignee: Kunal Kapoor

> Alter table compact throwing error for old carbon partition table
> -----------------------------------------------------------------
>
>                 Key: CARBONDATA-2865
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2865
>             Project: CarbonData
>          Issue Type: Bug
>            Reporter: Rahul Singha
>            Assignee: Kunal Kapoor
>            Priority: Minor
>
> _*Steps:*_
> *In 1.3.1:*
> 1. CREATE TABLE uniqdata_part(CUST_ID int,CUST_NAME String,DOB timestamp,DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10),DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 int) PARTITIONED BY(ACTIVE_EMUI_VERSION string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('TABLE_BLOCKSIZE'= '256 MB');
> 2. LOAD DATA INPATH 'hdfs://hacluster/user/rahul/2000_UniqData.csv' into table uniqdata_part partition(active_emui_version='xyz') OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> *In 1.4.1:*
> 3. refresh table uniqdata_part;
> 4. LOAD DATA INPATH 'hdfs://hacluster/user/rahul/2000_UniqData.csv' into table uniqdata_part OPTIONS('DELIMITER'=',' , 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
> 5. alter table uniqdata_part compact 'major';
> _*Expected Result:*_ Compaction should succeed.
> _*Actual Result:*_ 
> Error: org.apache.spark.sql.AnalysisException: Compaction failed. Please check logs for more info. Exception in compaction Job aborted due to stage failure: Task 0 in stage 56.0 failed 4 times, most recent failure: Lost task 0.3 in stage 56.0 (TID 8129, BLR1000025192, executor 78): java.lang.StackOverflowError
>             at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:153)
>             at org.apache.spark.util.ByteBufferInputStream.read(ByteBufferInputStream.scala:51)
>             at java.io.ObjectInputStream$PeekInputStream.read(ObjectInputStream.java:2627)
>             at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2636)
>             at java.io.ObjectInputStream$BlockDataInputStream.readBlockHeader(ObjectInputStream.java:2798)
>             at java.io.ObjectInputStream$BlockDataInputStream.refill(ObjectInputStream.java:2862)
>             at java.io.ObjectInputStream$BlockDataInputStream.read(ObjectInputStream.java:2934)
>             at java.io.DataInputStream.readInt(DataInputStream.java:387)
>             at java.io.ObjectInputStream$BlockDataInputStream.readInt(ObjectInputStream.java:3139)
>             at java.io.ObjectInputStream.readInt(ObjectInputStream.java:1023)
>             at java.util.ArrayList.readObject(ArrayList.java:782)
>             at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
>             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>             at java.lang.reflect.Method.invoke(Method.java:498)
>             at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
>             at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2136)
>             at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
>             at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
>             at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
>             at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
>             at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
>             at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
>             at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
>             at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
>             at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
>             at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
>             at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
>             at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
>             at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
>             at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
>             at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
>             at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
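> The repeating readSerialData/defaultReadFields frames above are the usual signature of Java's default serialization recursing once per node of a deeply nested object graph; presumably (an assumption, not confirmed in this report) the over-deep graph here is the legacy partition/segment metadata serialized into the compaction task. A minimal standalone sketch that reproduces the same failure mode (hypothetical Node class, plain Java, not CarbonData code):
> {code:java}
> import java.io.*;
>
> // Hypothetical sketch, not CarbonData code: default Java serialization
> // walks an object graph recursively, one batch of stack frames per nested
> // object, so a long enough chain overflows the thread stack inside
> // readObject() -- the same readSerialData/defaultReadFields recursion
> // seen in the executor trace above.
> class Node implements Serializable {
>     Node next;
> }
>
> public class DeepGraphOverflow {
>     public static void main(String[] args) throws Exception {
>         Node head = new Node();
>         Node cur = head;
>         for (int i = 0; i < 100_000; i++) {  // far deeper than a default stack
>             cur.next = new Node();
>             cur = cur.next;
>         }
>
>         ByteArrayOutputStream bytes = new ByteArrayOutputStream();
>         // writeObject() recurses the same way, so serialize on a thread
>         // with an oversized stack; the overflow then shows up on the read
>         // side, matching the bug.
>         Thread writer = new Thread(null, () -> {
>             try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
>                 out.writeObject(head);
>             } catch (IOException e) {
>                 throw new UncheckedIOException(e);
>             }
>         }, "writer", 1L << 28);              // 256 MB stack for the writer
>         writer.start();
>         writer.join();
>
>         try (ObjectInputStream in = new ObjectInputStream(
>                 new ByteArrayInputStream(bytes.toByteArray()))) {
>             in.readObject();                 // throws java.lang.StackOverflowError
>         }
>     }
> }
> {code}
> If the depth of the serialized payload is the only problem, raising the executor thread stack size (e.g. -Xss4m via spark.executor.extraJavaOptions) may mask the symptom, but the real fix would be to flatten whatever recursive structure the refreshed pre-1.4 partition table's metadata deserializes into.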



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)