Posted to issues@carbondata.apache.org by "David Cai (Jira)" <ji...@apache.org> on 2020/05/07 02:09:00 UTC

[jira] [Resolved] (CARBONDATA-3021) Streaming throws Unsupported data type exception

     [ https://issues.apache.org/jira/browse/CARBONDATA-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Cai resolved CARBONDATA-3021.
-----------------------------------
    Resolution: Fixed

> Streaming throws Unsupported data type exception
> ------------------------------------------------
>
>                 Key: CARBONDATA-3021
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-3021
>             Project: CarbonData
>          Issue Type: Bug
>    Affects Versions: 1.5.0
>            Reporter: David Cai
>            Priority: Major
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> 	at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:343)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:206)
> Caused by: org.apache.carbondata.streaming.CarbonStreamException: Job failed to write data file
> 	at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1.apply$mcV$sp(CarbonAppendableStreamSink.scala:288)
> 	at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1.apply(CarbonAppendableStreamSink.scala:238)
> 	at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1.apply(CarbonAppendableStreamSink.scala:238)
> 	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
> 	at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$.writeDataFileJob(CarbonAppendableStreamSink.scala:238)
> 	at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink.addBatch(CarbonAppendableStreamSink.scala:133)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply$mcV$sp(StreamExecution.scala:666)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply(StreamExecution.scala:666)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$1.apply(StreamExecution.scala:666)
> 	at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:279)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch(StreamExecution.scala:665)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(StreamExecution.scala:306)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply(StreamExecution.scala:294)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply(StreamExecution.scala:294)
> 	at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:279)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1.apply$mcZ$sp(StreamExecution.scala:294)
> 	at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
> 	at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:290)
> 	... 1 more
> Caused by: java.lang.IllegalArgumentException: Unsupported data type: LONG
> 	at org.apache.carbondata.core.util.comparator.Comparator.getComparatorByDataTypeForMeasure(Comparator.java:73)
> 	at org.apache.carbondata.streaming.segment.StreamSegment.mergeBatchMinMax(StreamSegment.java:471)
> 	at org.apache.carbondata.streaming.segment.StreamSegment.updateStreamFileIndex(StreamSegment.java:610)
> 	at org.apache.carbondata.streaming.segment.StreamSegment.updateIndexFile(StreamSegment.java:627)
> 	at org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileJob$1.apply$mcV$sp(CarbonAppendableStreamSink.scala:277)
> 	... 20 more
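
The root cause at the bottom of the trace is a comparator factory with no branch for the LONG measure type: Comparator.getComparatorByDataTypeForMeasure throws IllegalArgumentException when StreamSegment.mergeBatchMinMax asks it to compare LONG min/max statistics while updating the stream file index. Below is a minimal, self-contained Java sketch of that failure mode and the usual remedy (adding the missing type branch). Every name in it (ComparatorFactorySketch, MeasureType, and so on) is a hypothetical stand-in rather than the CarbonData API, and the snippet only illustrates the pattern; it is not the committed fix for this ticket.

    // Hypothetical sketch of the "Unsupported data type: LONG" failure mode.
    // None of these names are CarbonData classes; they only mirror the shape
    // of a getComparatorByDataTypeForMeasure-style factory.
    import java.util.Comparator;

    public class ComparatorFactorySketch {

        enum MeasureType { BOOLEAN, SHORT, INT, LONG, DOUBLE, DECIMAL }

        // Before the fix: the factory covers INT and DOUBLE but not LONG, so a
        // streaming batch carrying a LONG measure falls through to the default
        // branch and raises the exception seen in the trace above.
        static Comparator<Object> getComparatorByDataTypeForMeasure(MeasureType type) {
            switch (type) {
                case INT:
                    return (a, b) -> Integer.compare((int) a, (int) b);
                case LONG: // the missing branch; adding it resolves the exception
                    return (a, b) -> Long.compare((long) a, (long) b);
                case DOUBLE:
                    return (a, b) -> Double.compare((double) a, (double) b);
                default:
                    throw new IllegalArgumentException("Unsupported data type: " + type);
            }
        }

        public static void main(String[] args) {
            // mergeBatchMinMax-style usage: merge a batch's min statistic for a
            // measure column into the running min kept in the stream file index.
            Comparator<Object> cmp = getComparatorByDataTypeForMeasure(MeasureType.LONG);
            Object batchMin = 42L, indexMin = 7L;
            Object mergedMin = cmp.compare(batchMin, indexMin) < 0 ? batchMin : indexMin;
            System.out.println("merged min = " + mergedMin); // prints: merged min = 7
        }
    }

The design point is simply that the factory must enumerate every measure data type the streaming writer can emit; since BIGINT columns typically arrive as LONG values internally, a factory that omits that case breaks the min/max merge for any streaming table with a BIGINT measure, which matches the failure reported here.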



--
This message was sent by Atlassian Jira
(v8.3.4#803005)