Posted to issues@drill.apache.org by "John Omernik (JIRA)" <ji...@apache.org> on 2016/06/08 15:36:21 UTC

[jira] [Commented] (DRILL-4464) Apache Drill cannot read parquet generated outside Drill: Reading past RLE/BitPacking stream

    [ https://issues.apache.org/jira/browse/DRILL-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15320761#comment-15320761 ] 

John Omernik commented on DRILL-4464:
-------------------------------------

I think I may be getting this error (although with slightly different output, perhaps due to different compression codecs). Some reasons I think this may be the same issue:

- Parquet created outside of Drill (an older MapReduce job running parquet-mr 1.5 - CDH)
- The error I am getting is "Array Index out of bounds", which sounds similar to "Reading past RLE/BitPacking Stream" (both indicate reading past a boundary)
- Both issues came up on a "ts" field, perhaps created in a similar way. The metadata (except for compression) is also similar: INT64 is the type in both, and the use of the field (ts) is the same
- Admittedly, the error I get on my installation is different from the error specified in this JIRA; my error with verbose errors off, however, was Array Index out of Bounds... interesting!

# Metadata of the bad column in my file
row_created_ts:                REQUIRED INT64 R:0 D:0
row_created_ts:                 INT64 SNAPPY DO:0 FPO:200476955 SZ:1373654/2703428/1.97 VC:532262 ENC:BIT_PACKED,PLAIN,PLAIN_DICTIONARY

# Metadata of the bad column in this JIRA's test data
ts:          REQUIRED INT64 R:0 D:0
ts:           INT64 GZIP DO:0 FPO:4 SZ:2630987/19172128/7.29 VC:2418197 ENC:PLAIN_DICTIONARY,PLAIN,BIT_PACKED
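
(For reference, the column dumps above are parquet-tools "meta"-style output. The same footer metadata can also be pulled programmatically with parquet-mr; below is a minimal sketch, assuming parquet-hadoop 1.8.x on the classpath. FooterDump and the path argument are my own, hypothetical names.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.metadata.BlockMetaData;
import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;
import org.apache.parquet.hadoop.metadata.ParquetMetadata;

public class FooterDump {
    public static void main(String[] args) throws Exception {
        // Reads only the footer (schema + per-column-chunk metadata), no data pages.
        ParquetMetadata meta = ParquetFileReader.readFooter(
                new Configuration(), new Path(args[0]));
        System.out.println(meta.getFileMetaData().getSchema());
        for (BlockMetaData block : meta.getBlocks()) {
            for (ColumnChunkMetaData col : block.getColumns()) {
                // Prints codec, type, encodings, and offsets -- the same
                // per-column info shown in the dumps above.
                System.out.println(col);
            }
        }
    }
}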

# Error on this JIRA's test data (without verbose errors)
Error: SYSTEM ERROR: NegativeArraySizeException

Fragment 0:0

[Error Id: d90910d4-019a-4141-ac19-204be2058f90 on zeta.local:20001] (state=,code=0)

# Error on this JIRA's test data (with verbose errors on)
Error: SYSTEM ERROR: ArrayIndexOutOfBoundsException: 3366328

Fragment 0:0


[Error Id: 8e074772-f85f-4119-815e-83363a8e6c75 on zeta01.local:20001]

  (org.apache.drill.common.exceptions.DrillRuntimeException) Error in parquet record reader.
Message:
Hadoop path: /data/dev/bad_jira/tmp.gz.parquet
Total records read: 131072
Mock records read: 0
Records to read: 32768
Row group index: 0
Records in row group: 2418197

Parquet Metadata: ParquetMetaData{FileMetaData{schema: message nat {
  required int64 ts;
  required int32 dr;
  optional binary ui (UTF8);
  optional int32 up;
  optional binary ri (UTF8);
  optional int32 rp;
  optional binary di (UTF8);
  optional int32 dp;
  required int32 pr;
  optional int64 ob;
  optional int64 ib;
}
, metadata: {}}, blocks: [BlockMetaData{2418197, 30601003 [ColumnMetaData{GZIP [ts] INT64  [BIT_PACKED, PLAIN_DICTIONARY, PLAIN], 4}, ColumnMetaData{GZIP [dr] INT32  [BIT_PACKED, PLAIN_DICTIONARY], 2630991}, ColumnMetaData{GZIP [ui] BINARY  [RLE, BIT_PACKED, PLAIN_DICTIONARY], 2964867}, ColumnMetaData{GZIP [up] INT32  [RLE, BIT_PACKED, PLAIN_DICTIONARY], 2966955}, ColumnMetaData{GZIP [ri] BINARY  [RLE, BIT_PACKED, PLAIN_DICTIONARY], 7481618}, ColumnMetaData{GZIP [rp] INT32  [RLE, BIT_PACKED, PLAIN_DICTIONARY], 7483706}, ColumnMetaData{GZIP [di] BINARY  [RLE, BIT_PACKED, PLAIN], 11995191}, ColumnMetaData{GZIP [dp] INT32  [RLE, BIT_PACKED, PLAIN], 11995247}, ColumnMetaData{GZIP [pr] INT32  [BIT_PACKED, PLAIN_DICTIONARY], 11995303}, ColumnMetaData{GZIP [ob] INT64  [RLE, BIT_PACKED, PLAIN_DICTIONARY], 11995930}, ColumnMetaData{GZIP [ib] INT64  [RLE, BIT_PACKED, PLAIN_DICTIONARY], 11999527}]}]}

    org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.handleAndRaise():352
    org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next():454
    org.apache.drill.exec.physical.impl.ScanBatch.next():191
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.test.generated.StreamingAggregatorGen1991.doWork():173
    org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext():167
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext():137
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():129
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.physical.impl.BaseRootExec.next():104
    org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
    org.apache.drill.exec.physical.impl.BaseRootExec.next():94
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():257
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():251
    java.security.AccessController.doPrivileged():-2
    javax.security.auth.Subject.doAs():422
    org.apache.hadoop.security.UserGroupInformation.doAs():1595
    org.apache.drill.exec.work.fragment.FragmentExecutor.run():251
    org.apache.drill.common.SelfCleaningRunnable.run():38
    java.util.concurrent.ThreadPoolExecutor.runWorker():1142
    java.util.concurrent.ThreadPoolExecutor$Worker.run():617
    java.lang.Thread.run():745

  Caused By (java.lang.ArrayIndexOutOfBoundsException) 3366328

    org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary.decodeToLong():164
    org.apache.parquet.column.values.dictionary.DictionaryValuesReader.readLong():122
    org.apache.drill.exec.store.parquet.columnreaders.ParquetFixedWidthDictionaryReaders$DictionaryBigIntReader.readField():161
    org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.readValues():120
    org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.processPageData():169
    org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.determineSize():146
    org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.processPages():107
    org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.readAllFixedFields():393
    org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next():436
    org.apache.drill.exec.physical.impl.ScanBatch.next():191
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.test.generated.StreamingAggregatorGen1991.doWork():173
    org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext():167
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext():137
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():129
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.physical.impl.BaseRootExec.next():104
    org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():81
    org.apache.drill.exec.physical.impl.BaseRootExec.next():94
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():257
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():251
    java.security.AccessController.doPrivileged():-2
    javax.security.auth.Subject.doAs():422
    org.apache.hadoop.security.UserGroupInformation.doAs():1595
    org.apache.drill.exec.work.fragment.FragmentExecutor.run():251
    org.apache.drill.common.SelfCleaningRunnable.run():38
    java.util.concurrent.ThreadPoolExecutor.runWorker():1142
    java.util.concurrent.ThreadPoolExecutor$Worker.run():617
    java.lang.Thread.run():745 (state=,code=0)
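
The Caused By frame above (PlainLongDictionary.decodeToLong) is telling: with PLAIN_DICTIONARY encoding, each data page stores RLE/bit-packed ids that index into a decoded dictionary array, so an id decoded past the end of the stream (or from a desynced offset) lands outside that array. Here is a minimal sketch of the failure mode -- illustrative only, not Drill or parquet-mr source, and the values are made up:

public class DictLookupSketch {
    public static void main(String[] args) {
        // A plain dictionary page decodes to an array of distinct column values.
        long[] dictionary = {1457920876000L, 1457920877000L, 1457920878000L};
        // A data page then stores small ids into that array. If the reader
        // decodes an id past the end of the RLE/BitPacking stream, the id is
        // garbage and the lookup fails exactly like the trace above:
        int badId = 3366328;
        System.out.println(dictionary[badId]); // ArrayIndexOutOfBoundsException: 3366328
    }
}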

# Error on my data
Error: SYSTEM ERROR: ArrayIndexOutOfBoundsException: 107014
 
Fragment 1:36
 
[Error Id: ab5b202f-94cc-4275-b136-537dfbea6b31 on zeta.local:20001]
 
  (org.apache.drill.common.exceptions.DrillRuntimeException) Error in parquet record reader.
Message:
Hadoop path: /path/to/files/-m-00001.snappy.parquet
Total records read: 393120
Mock records read: 0
Records to read: 32768
Row group index: 0
Records in row group: 536499
Parquet Metadata: ParquetMetaData{FileMetaData{schema: message events {
…
 
    org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.handleAndRaise():352
    org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next():454
    org.apache.drill.exec.physical.impl.ScanBatch.next():191
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():129
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():129
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext():91
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.physical.impl.BaseRootExec.next():104
    org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
    org.apache.drill.exec.physical.impl.BaseRootExec.next():94
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():257
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():251
    java.security.AccessController.doPrivileged():-2
    javax.security.auth.Subject.doAs():422
    org.apache.hadoop.security.UserGroupInformation.doAs():1595
    org.apache.drill.exec.work.fragment.FragmentExecutor.run():251
    org.apache.drill.common.SelfCleaningRunnable.run():38
    java.util.concurrent.ThreadPoolExecutor.runWorker():1142
    java.util.concurrent.ThreadPoolExecutor$Worker.run():617
    java.lang.Thread.run():745
  Caused By (java.lang.ArrayIndexOutOfBoundsException) 107014
    org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary.decodeToLong():164
    org.apache.parquet.column.values.dictionary.DictionaryValuesReader.readLong():122
    org.apache.drill.exec.store.parquet.columnreaders.ParquetFixedWidthDictionaryReaders$DictionaryBigIntReader.readField():161
    org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.readValues():120
    org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.processPageData():169
    org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.determineSize():146
    org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.processPages():107
    org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.readAllFixedFields():393
    org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next():439
    org.apache.drill.exec.physical.impl.ScanBatch.next():191
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():129
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
    org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():129
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.record.AbstractRecordBatch.next():119
    org.apache.drill.exec.record.AbstractRecordBatch.next():109
    org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext():91
    org.apache.drill.exec.record.AbstractRecordBatch.next():162
    org.apache.drill.exec.physical.impl.BaseRootExec.next():104
    org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():92
    org.apache.drill.exec.physical.impl.BaseRootExec.next():94
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():257
    org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():251
    java.security.AccessController.doPrivileged():-2
    javax.security.auth.Subject.doAs():422
    org.apache.hadoop.security.UserGroupInformation.doAs():1595
    org.apache.drill.exec.work.fragment.FragmentExecutor.run():251
    org.apache.drill.common.SelfCleaningRunnable.run():38
    java.util.concurrent.ThreadPoolExecutor.runWorker():1142
    java.util.concurrent.ThreadPoolExecutor$Worker.run():617
    java.lang.Thread.run():745 (state=,code=0)

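As an aside, the workaround noted in this JIRA's description below -- regenerating the file with the fields declared optional -- amounts to a one-word schema change on the writer side. A minimal sketch using parquet-mr's schema parser (SchemaTweak is my own hypothetical name; per the report, only the optional variant reads cleanly in Drill):

import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class SchemaTweak {
    public static void main(String[] args) {
        // Schema as generated by the MapReduce job: REQUIRED INT64 triggers the error.
        MessageType broken = MessageTypeParser.parseMessageType(
                "message nat { required int64 ts; }");
        // Reporter's workaround: the same field declared optional reads fine.
        MessageType works = MessageTypeParser.parseMessageType(
                "message nat { optional int64 ts; }");
        System.out.println(broken);
        System.out.println(works);
    }
}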

> Apache Drill cannot read parquet generated outside Drill: Reading past RLE/BitPacking stream
> --------------------------------------------------------------------------------------------
>
>                 Key: DRILL-4464
>                 URL: https://issues.apache.org/jira/browse/DRILL-4464
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.4.0, 1.5.0, 1.6.0
>            Reporter: Miroslav Holubec
>         Attachments: tmp.gz.parquet
>
>
> When I generate a file using MapReduce and parquet 1.8.1 (or 1.8.1-drill-r0) that contains a REQUIRED INT64 field, I'm not able to read this column in Drill, but I am able to read the full content using parquet-tools cat/dump. This doesn't happen every time; it is input-data dependent (so probably a different encoding is chosen by parquet for the given column?).
> Error reported by drill:
> {noformat}
> 2016-03-02 03:01:16,354 [29296305-abe2-f4bd-ded0-27bb53f631f0:frag:3:0] ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IllegalArgumentException: Reading past RLE/BitPacking stream.
> Fragment 3:0
> [Error Id: e2d02152-1b67-4c9f-9cb1-bd2b9ff302d8 on drssc9a4:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IllegalArgumentException: Reading past RLE/BitPacking stream.
> Fragment 3:0
> [Error Id: e2d02152-1b67-4c9f-9cb1-bd2b9ff302d8 on drssc9a4:31010]
>         at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:534) ~[drill-common-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:321) [drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:184) [drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:290) [drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.4.0.jar:1.4.0]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40]
>         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> Caused by: org.apache.drill.common.exceptions.DrillRuntimeException: Error in parquet record reader.
> Message:
> Hadoop path: /tmp/tmp.gz.parquet
> Total records read: 131070
> Mock records read: 0
> Records to read: 21845
> Row group index: 0
> Records in row group: 2418197
> Parquet Metadata: ParquetMetaData{FileMetaData{schema: message nat {
>   required int64 ts;
>   required int32 dr;
>   optional binary ui (UTF8);
>   optional int32 up;
>   optional binary ri (UTF8);
>   optional int32 rp;
>   optional binary di (UTF8);
>   optional int32 dp;
>   required int32 pr;
>   optional int64 ob;
>   optional int64 ib;
> }
> , metadata: {}}, blocks: [BlockMetaData{2418197, 30601003 [ColumnMetaData{GZIP [ts] INT64  [PLAIN_DICTIONARY, BIT_PACKED, PLAIN], 4}, ColumnMetaData{GZIP [dr] INT32  [PLAIN_DICTIONARY, BIT_PACKED], 2630991}, ColumnMetaData{GZIP [ui] BINARY  [PLAIN_DICTIONARY, RLE, BIT_PACKED], 2964867}, ColumnMetaData{GZIP [up] INT32  [PLAIN_DICTIONARY, RLE, BIT_PACKED], 2966955}, ColumnMetaData{GZIP [ri] BINARY  [PLAIN_DICTIONARY, RLE, BIT_PACKED], 7481618}, ColumnMetaData{GZIP [rp] INT32  [PLAIN_DICTIONARY, RLE, BIT_PACKED], 7483706}, ColumnMetaData{GZIP [di] BINARY  [RLE, BIT_PACKED, PLAIN], 11995191}, ColumnMetaData{GZIP [dp] INT32  [RLE, BIT_PACKED, PLAIN], 11995247}, ColumnMetaData{GZIP [pr] INT32  [PLAIN_DICTIONARY, BIT_PACKED], 11995303}, ColumnMetaData{GZIP [ob] INT64  [PLAIN_DICTIONARY, RLE, BIT_PACKED], 11995930}, ColumnMetaData{GZIP [ib] INT64  [PLAIN_DICTIONARY, RLE, BIT_PACKED], 11999527}]}]}
>         at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.handleAndRaise(ParquetRecordReader.java:345) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next(ParquetRecordReader.java:447) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:191) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:132) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:93) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:256) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:250) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_40]
>         at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_40]
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
>         at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:250) [drill-java-exec-1.4.0.jar:1.4.0]
>         ... 4 common frames omitted
> Caused by: java.lang.IllegalArgumentException: Reading past RLE/BitPacking stream.
>         at org.apache.parquet.Preconditions.checkArgument(Preconditions.java:55) ~[parquet-common-1.8.1-drill-r0.jar:1.8.1-drill-r0]
>         at org.apache.parquet.column.values.rle.RunLengthBitPackingHybridDecoder.readNext(RunLengthBitPackingHybridDecoder.java:84) ~[parquet-column-1.8.1-drill-r0.jar:1.8.1-drill-r0]
>         at org.apache.parquet.column.values.rle.RunLengthBitPackingHybridDecoder.readInt(RunLengthBitPackingHybridDecoder.java:66) ~[parquet-column-1.8.1-drill-r0.jar:1.8.1-drill-r0]
>         at org.apache.parquet.column.values.dictionary.DictionaryValuesReader.readLong(DictionaryValuesReader.java:122) ~[parquet-column-1.8.1-drill-r0.jar:1.8.1-drill-r0]
>         at org.apache.drill.exec.store.parquet.columnreaders.ParquetFixedWidthDictionaryReaders$DictionaryBigIntReader.readField(ParquetFixedWidthDictionaryReaders.java:182) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.readValues(ColumnReader.java:120) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.processPageData(ColumnReader.java:169) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.determineSize(ColumnReader.java:146) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.processPages(ColumnReader.java:107) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.readAllFixedFields(ParquetRecordReader.java:386) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next(ParquetRecordReader.java:429) ~[drill-java-exec-1.4.0.jar:1.4.0]
>         ... 19 common frames omitted
> {noformat}
> When I change the fields in the schema to optional and regenerate the file, Drill starts working. The same happens when I generate the file using CTAS (which makes all columns optional as well).


