Posted to issues@carbondata.apache.org by "QiangCai (JIRA)" <ji...@apache.org> on 2018/02/09 02:03:00 UTC

[jira] [Created] (CARBONDATA-2151) Filter query on Timestamp/Date column of streaming table throwing exception

QiangCai created CARBONDATA-2151:
------------------------------------

             Summary: Filter query on Timestamp/Date column of streaming table throwing exception
                 Key: CARBONDATA-2151
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2151
             Project: CarbonData
          Issue Type: Bug
            Reporter: QiangCai

Running a filter query on a Timestamp/Date column of a streaming table fails with the following stack trace:
at org.apache.carbondata.hadoop.streaming.CarbonStreamRecordReader.scanBlockletAndFillVector(CarbonStreamRecordReader.java:435)
 at org.apache.carbondata.hadoop.streaming.CarbonStreamRecordReader.nextColumnarBatch(CarbonStreamRecordReader.java:324)
 at org.apache.carbondata.hadoop.streaming.CarbonStreamRecordReader.nextKeyValue(CarbonStreamRecordReader.java:305)
 at org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1.hasNext(CarbonScanRDD.scala:382)
 at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
 at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
 at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
 at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
 at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:231)
 at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
 at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
 at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:826)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
 at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
 at org.apache.spark.scheduler.Task.run(Task.scala:99)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.carbondata.core.scan.expression.exception.FilterUnsupportedException: 
 at org.apache.carbondata.core.scan.filter.executer.RowLevelFilterExecuterImpl.applyFilter(RowLevelFilterExecuterImpl.java:280)
 at org.apache.carbondata.core.scan.filter.executer.AndFilterExecuterImpl.applyFilter(AndFilterExecuterImpl.java:56)
 at org.apache.carbondata.hadoop.streaming.CarbonStreamRecordReader.scanBlockletAndFillVector(CarbonStreamRecordReader.java:430)
 ... 20 more
Caused by: org.apache.carbondata.core.scan.expression.exception.FilterIllegalMemberException: Cannot convertTIMESTAMP to Time type value
 at org.apache.carbondata.core.scan.expression.ExpressionResult.getTime(ExpressionResult.java:387)
 at org.apache.carbondata.core.scan.expression.conditional.GreaterThanEqualToExpression.evaluate(GreaterThanEqualToExpression.java:64)
 at org.apache.carbondata.core.scan.filter.executer.RowLevelFilterExecuterImpl.applyFilter(RowLevelFilterExecuterImpl.java:278)
 ... 22 more
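
A minimal reproduction sketch, assuming a streaming CarbonData table with a TIMESTAMP column; the table name, column names, and literal below are hypothetical and only illustrate the kind of filter (>=, matching the GreaterThanEqualToExpression in the trace above) that appears to hit this path:

    -- hypothetical streaming table with a TIMESTAMP column
    CREATE TABLE stream_ts_tbl (id INT, name STRING, ts TIMESTAMP)
    STORED BY 'carbondata'
    TBLPROPERTIES ('streaming' = 'true');

    -- after rows have been ingested into the streaming segment,
    -- a row-level filter on the timestamp column triggers the exception
    SELECT id, name, ts
    FROM stream_ts_tbl
    WHERE ts >= '2018-02-01 00:00:00';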


