Posted to dev@drill.apache.org by "Khurram Faraaz (JIRA)" <ji...@apache.org> on 2017/04/12 05:38:41 UTC

[jira] [Created] (DRILL-5430) select on fastparquet created parquet file results in IOB Exception

Khurram Faraaz created DRILL-5430:
-------------------------------------

             Summary: select on fastparquet created parquet file results in IOB Exception
                 Key: DRILL-5430
                 URL: https://issues.apache.org/jira/browse/DRILL-5430
             Project: Apache Drill
          Issue Type: Bug
          Components: Storage - Parquet
    Affects Versions: 1.11.0
            Reporter: Khurram Faraaz


A SELECT on a Parquet file created with Python's fastparquet (a Python implementation of the Parquet format) results in an IndexOutOfBoundsException (IOB).

Parquet schema information for the Parquet file used in the test:
{noformat}
[root@centos-12q parquet-tools]# ./parquet-schema frmPandas_9.parquet
message schema {
  required int32 c_int8_signed (INT_8);
  required int32 c_uint8 (UINT_8);
}
{noformat}

Apache Drill 1.11.0
git.commit.id.abbrev=06e1522

A SELECT on the Parquet file fails with a DATA_READ error; the stack trace shows an IndexOutOfBoundsException:
{noformat}
0: jdbc:drill:schema=dfs.tmp> select * from `frmPandas_9.parquet`;
Error: DATA_READ ERROR: Error reading from Parquet file

File:  /tmp/frmPandas_9.parquet
Column:  c_int8_signed
Row Group Start:  4
Fragment 0:0

[Error Id: 32b4d0ec-4198-4b38-bdd7-a27c881072e6 on centos-01.qa.lab:31010] (state=,code=0)
{noformat}

Stack trace from drillbit.log:

{noformat}
2017-04-12 04:23:55,214 [27125424-657c-7f8f-c27b-4d1516e4bb97:frag:0:0] INFO  o.a.d.e.s.p.c.ColumnReader - User Error Occurred: Error reading from Parquet file (srcIndex: 0)
org.apache.drill.common.exceptions.UserException: DATA_READ ERROR: Error reading from Parquet file

File:  /tmp/frmPandas_9.parquet
Column:  c_int8_signed
Row Group Start:  4

[Error Id: 32b4d0ec-4198-4b38-bdd7-a27c881072e6 ]
        at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) ~[drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.readValues(ColumnReader.java:151) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.processPageData(ColumnReader.java:199) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.determineSize(ColumnReader.java:179) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.processPages(ColumnReader.java:129) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.readAllFixedFieldsSerial(ParquetRecordReader.java:512) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.readAllFixedFields(ParquetRecordReader.java:505) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.next(ParquetRecordReader.java:590) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:179) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext(ScreenCreator.java:81) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at java.security.AccessController.doPrivileged(Native Method) [na:1.8.0_91]
        at javax.security.auth.Subject.doAs(Subject.java:422) [na:1.8.0_91]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595) [hadoop-common-2.7.0-mapr-1607.jar:na]
        at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_91]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
Caused by: java.lang.IndexOutOfBoundsException: srcIndex: 0
        at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:255) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
        at io.netty.buffer.WrappedByteBuf.setBytes(WrappedByteBuf.java:378) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
        at io.netty.buffer.UnsafeDirectLittleEndian.setBytes(UnsafeDirectLittleEndian.java:30) ~[drill-memory-base-1.11.0-SNAPSHOT.jar:4.0.27.Final]
        at io.netty.buffer.DrillBuf.setBytes(DrillBuf.java:728) ~[drill-memory-base-1.11.0-SNAPSHOT.jar:4.0.27.Final]
        at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:850) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
        at org.apache.drill.exec.store.parquet.columnreaders.FixedByteAlignedReader.writeData(FixedByteAlignedReader.java:67) ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.store.parquet.columnreaders.FixedByteAlignedReader.readField(FixedByteAlignedReader.java:63) ~[drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.readValues(ColumnReader.java:145) [drill-java-exec-1.11.0-SNAPSHOT.jar:1.11.0-SNAPSHOT]
        ... 25 common frames omitted
{noformat}

Python script that creates the Parquet file:

{noformat}
import fastparquet
import pandas as pd
import numpy as np

N = 1000

columns_data = ['c_int8_signed', 'c_uint8']

# Build a DataFrame with one unsigned and one signed 8-bit integer column.
df = pd.DataFrame({'c_uint8': np.random.randint(0, 255, size=N),
                   'c_int8_signed': np.random.randint(-128, 127, size=N)},
                  columns=columns_data)
df = df.astype(dtype={"c_int8_signed": "int8", "c_uint8": "uint8"})

# Write the DataFrame out as a GZIP-compressed Parquet file.
fastparquet.write('frmPandas_9.parquet', df, compression='GZIP')
{noformat}
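
For reference, a minimal verification sketch (not part of the original report; it assumes fastparquet is installed) that reads the generated file back with fastparquet itself, which suggests the file and its dtypes are readable outside Drill:

{noformat}
# Hypothetical verification snippet (not from the original report):
# read the file back with fastparquet to confirm it is valid outside Drill.
import fastparquet

pf = fastparquet.ParquetFile('frmPandas_9.parquet')
df_back = pf.to_pandas()

print(df_back.dtypes)   # expected: c_int8_signed -> int8, c_uint8 -> uint8
print(df_back.head())
{noformat}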



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)