Posted to issues@orc.apache.org by "Dongjoon Hyun (Jira)" <ji...@apache.org> on 2023/05/18 07:02:00 UTC
[jira] [Resolved] (ORC-1413) ORC row level filter throws Exception
[ https://issues.apache.org/jira/browse/ORC-1413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dongjoon Hyun resolved ORC-1413.
--------------------------------
Fix Version/s: 1.9.0
1.8.4
Resolution: Fixed
> ORC row level filter throws Exception
> -------------------------------------
>
> Key: ORC-1413
> URL: https://issues.apache.org/jira/browse/ORC-1413
> Project: ORC
> Issue Type: Bug
> Reporter: Zoltán Rátkai
> Assignee: Zoltán Rátkai
> Priority: Major
> Fix For: 1.9.0, 1.8.4
>
>
> The following throws an exception in Hive with ORC 1.8.3.
> CREATE TABLE IF NOT EXISTS mytableorc183 (text STRING) STORED AS orc TBLPROPERTIES ('transactional'='true', 'orc.sarg.to.filter'='true');
> INSERT INTO mytableorc183 VALUES ('test1');
> SELECT * FROM mytableorc183 WHERE text = 'test1';
> The SELECT throws this:
>
> ERROR : Failed with exception java.io.IOException:java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector
> java.io.IOException: java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector
> at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:624)
> at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:531)
> at org.apache.hadoop.hive.ql.exec.FetchTask.executeInner(FetchTask.java:194)
> at org.apache.hadoop.hive.ql.exec.FetchTask.execute(FetchTask.java:95)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:212)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:154)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149)
> at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:185)
> at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236)
> at org.apache.hive.service.cli.operation.SQLOperation.access$500(SQLOperation.java:90)
> at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:340)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
> at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:360)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassCastException: org.apache.hadoop.hive.ql.exec.vector.LongColumnVector cannot be cast to org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector
> at org.apache.orc.impl.filter.leaf.StringFilters$StringEquals.allow(StringFilters.java:64)
> at org.apache.orc.impl.filter.LeafFilter.allowWithNegation(LeafFilter.java:80)
> at org.apache.orc.impl.filter.LeafFilter.filter(LeafFilter.java:60)
> at org.apache.orc.impl.filter.BatchFilterFactory$BatchFilterImpl.accept(BatchFilterFactory.java:76)
> at org.apache.orc.impl.filter.BatchFilterFactory$BatchFilterImpl.accept(BatchFilterFactory.java:58)
> at org.apache.orc.impl.reader.tree.StructBatchReader.nextBatch(StructBatchReader.java:88)
> at org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1426)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.ensureBatch(RecordReaderImpl.java:88)
> at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.hasNext(RecordReaderImpl.java:104)
> at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPairAcid.next(OrcRawRecordMerger.java:292)
> at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPairAcid.<init>(OrcRawRecordMerger.java:260)
> at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:1119)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:2126)
> at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:2021)
> at org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:771)
> at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:335)
> at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:562)
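The root cause in the trace above is an unconditional cast in StringFilters$StringEquals.allow: the string filter assumes the column it is handed is a BytesColumnVector, but in this read path it receives a LongColumnVector. The following is a minimal, self-contained sketch of that failure mode, assuming simplified stand-in classes (these are NOT the real Hive/ORC vector classes, just hypothetical models of their hierarchy):

```java
// Hypothetical stand-ins for the Hive vectorized column classes.
abstract class ColumnVector {}
class LongColumnVector extends ColumnVector {}
class BytesColumnVector extends ColumnVector {}

// Models the failing pattern: a string-equality filter that casts its
// input column to BytesColumnVector without checking its runtime type.
class StringEqualsFilter {
    boolean allow(ColumnVector col) {
        // Throws ClassCastException when col is actually a LongColumnVector.
        BytesColumnVector bytes = (BytesColumnVector) col;
        return bytes != null;
    }
}

public class Main {
    public static void main(String[] args) {
        StringEqualsFilter filter = new StringEqualsFilter();
        try {
            // The filter is handed a long-typed vector instead of the
            // string column it expects, reproducing the crash shape.
            filter.allow(new LongColumnVector());
            System.out.println("no exception");
        } catch (ClassCastException e) {
            System.out.println("caught ClassCastException");
        }
    }
}
```

This illustrates why the bug only surfaces when the filter's column lookup resolves to a vector of the wrong type; the actual fix in ORC-1413 lives in the ORC row-level filter code, not in this sketch.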
--
This message was sent by Atlassian Jira
(v8.20.10#820010)