Posted to dev@carbondata.apache.org by 喜之郎 <25...@qq.com> on 2018/05/07 02:58:53 UTC

Re: loading data from parquet table always

component versions:
carbondata: 1.3.1
spark: 2.2.1




------------------ Original Message ------------------
From: "251922566"<25...@qq.com>;
Date: Monday, May 7, 2018, 10:41 AM
To: "dev"<de...@carbondata.apache.org>;

Subject: loading data from parquet table always



Hi dev,
  I have a parquet table and a carbon table. The source parquet table has 1 billion rows.
parquet table:
==============
CREATE TABLE mc_idx3 (
  COL_1  integer,
  COL_2  integer,
  COL_3  string,
  COL_4  integer,
  COL_5  string,
  COL_6  string,
  COL_7  string,
  COL_8  string,
  COL_9  integer,
  COL_10 long,
  COL_11 string,
  COL_12 string,
  COL_13 string,
  COL_14 string,
  COL_15 integer,
  COL_16 string,
  COL_17 timestamp)
STORED AS PARQUET;
==============
carbon table:
==============
CREATE TABLE mc_idxok_cd1 (
  COL_1  integer,
  COL_2  integer,
  COL_3  string,
  COL_4  integer,
  COL_5  string,
  COL_6  string,
  COL_7  string,
  COL_8  string,
  COL_9  integer,
  COL_10 long,
  COL_11 string,
  COL_12 string,
  COL_13 string,
  COL_14 string,
  COL_15 integer,
  COL_16 string,
  COL_17 timestamp)
STORED BY 'carbondata'
TBLPROPERTIES (
  'SORT_COLUMNS'='COL_17,COL_1');
==============
When I run INSERT INTO TABLE mc_idxok_cd1 SELECT * FROM mc_idx3, it always fails.
ERROR LOG:
org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException: There is an unexpected error: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException: Problem while copying file from local store to carbon store
	at org.apache.carbondata.processing.loading.steps.DataWriterProcessorStepImpl.execute(DataWriterProcessorStepImpl.java:123)
	at org.apache.carbondata.processing.loading.DataLoadExecutor.execute(DataLoadExecutor.java:51)
	at org.apache.carbondata.spark.rdd.NewDataFrameLoaderRDD$$anon$2.<init>(NewCarbonDataLoadRDD.scala:390)
	at org.apache.carbondata.spark.rdd.NewDataFrameLoaderRDD.internalCompute(NewCarbonDataLoadRDD.scala:353)
	at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:60)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:108)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException: org.apache.carbondata.core.datastore.exception.CarbonDataWriterException: Problem while copying file from local store to carbon store
	at org.apache.carbondata.processing.loading.steps.DataWriterProcessorStepImpl.processingComplete(DataWriterProcessorStepImpl.java:162)
	at org.apache.carbondata.processing.loading.steps.DataWriterProcessorStepImpl.finish(DataWriterProcessorStepImpl.java:148)
	at org.apache.carbondata.processing.loading.steps.DataWriterProcessorStepImpl.execute(DataWriterProcessorStepImpl.java:112)
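
For reference, this is roughly how I issue the statement from spark-shell (a minimal sketch; the app name and store path below are placeholders, not my real values):
==============
// minimal sketch, assuming spark-shell with the carbondata 1.3.1 jars on the classpath;
// "hdfs:///tmp/carbon.store" is a placeholder store path
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

val carbon = SparkSession.builder()
  .appName("parquet-to-carbon-insert")
  .getOrCreateCarbonSession("hdfs:///tmp/carbon.store")

// the statement that always fails with the error above
carbon.sql("INSERT INTO TABLE mc_idxok_cd1 SELECT * FROM mc_idx3")
==============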





-------------------
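By the way, since the message says the failure happens while copying from the local store to the carbon store, the load-related temp-dir properties might be worth checking; a minimal sketch of setting them (whether they actually matter for this failure is only a guess on my part):
==============
// sketch only: these properties influence where load-time temp files are
// written before being copied into the carbon store; the values are guesses
import org.apache.carbondata.core.util.CarbonProperties

CarbonProperties.getInstance()
  .addProperty("carbon.use.local.dir", "true")          // use YARN local dirs for load temp files
  .addProperty("carbon.use.multiple.temp.dir", "true")  // spread temp files across those dirs
==============
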
Can anybody give me some advice? Any advice is appreciated!