Posted to issues@drill.apache.org by "Kunal Khatua (JIRA)" <ji...@apache.org> on 2015/05/21 01:28:01 UTC

[jira] [Closed] (DRILL-1314) Parquet Reader for compressed parquet files fails

     [ https://issues.apache.org/jira/browse/DRILL-1314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kunal Khatua closed DRILL-1314.
-------------------------------

Not reproducible with the latest commits.

> Parquet Reader for compressed parquet files fails
> -------------------------------------------------
>
>                 Key: DRILL-1314
>                 URL: https://issues.apache.org/jira/browse/DRILL-1314
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Storage - Parquet
>    Affects Versions: 0.5.0
>         Environment: 10+1 Nodes; RHEL 6.4; 256GB RAM/node; 32 CPU cores/node
>            Reporter: Kunal Khatua
>            Assignee: Jason Altekruse
>            Priority: Blocker
>              Labels: Parquet, Storage
>             Fix For: 0.5.0
>
>         Attachments: 0001-DRILL-1307-add-support-for-fixed-binary-columns-in-p.patch
>
>
> When querying compressed Parquet files, an error is reported for the RLE stream:
> select
>         l_orderkey, l_partkey, l_suppkey, l_linenumber, l_quantity, l_extendedprice, l_discount, l_tax,
>         l_returnflag, l_linestatus, l_shipdate, l_commitdate, l_receiptdate, l_shipinstruct, l_shipmode, l_comment
> from
>         lineitem_imp100
> where
>         l_orderkey > 0
>         and l_partkey >= 0
>         and l_suppkey >= 0
>         and l_linenumber >= 0
>         and l_quantity >= 0
>         and l_extendedprice >= 0
>         and l_discount >= 0
>         and l_tax >= 0
>         and length(l_returnflag) > 0
>         and length(l_linestatus) > 0
>         and length(l_shipdate) > 0
>         and length(l_commitdate) > 0
>         and length(l_receiptdate) > 0
>         and length(l_shipinstruct) > 0
>         and length(l_shipmode) > length(l_comment);
> +------------+------------+------------+--------------+------------+-----------------+------------+------------+--------------+--------------+------------+--------------+---------------+----------------+------------+------------+
> | l_orderkey | l_partkey  | l_suppkey  | l_linenumber | l_quantity | l_extendedprice | l_discount |   l_tax    | l_returnflag | l_linestatus | l_shipdate | l_commitdate | l_receiptdate | l_shipinstruct | l_shipmode | l_comment  |
> +------------+------------+------------+--------------+------------+-----------------+------------+------------+--------------+--------------+------------+--------------+---------------+----------------+------------+------------+
> Query failed: Failure while running fragment. Reading past RLE/BitPacking stream. [3a999343-a881-4193-9853-27e24aaf768b]
> java.lang.RuntimeException: java.sql.SQLException: Failure while trying to get next result batch.
>         at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
>         at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
>         at sqlline.SqlLine.print(SqlLine.java:1809)
>         at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
>         at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
>         at sqlline.SqlLine.dispatch(SqlLine.java:889)
>         at sqlline.SqlLine.begin(SqlLine.java:763)
>         at sqlline.SqlLine.start(SqlLine.java:498)
>         at sqlline.SqlLine.main(SqlLine.java:460)
> The JDBC client reports the stack trace:
> java.sql.SQLException: exception while executing query: Failure while trying to get next result batch.
>         at net.hydromatic.avatica.Helper.createException(Helper.java:40)
>         at net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:406)
>         at net.hydromatic.avatica.AvaticaStatement.executeQueryInternal(AvaticaStatement.java:351)
>         at net.hydromatic.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:78)
>         at PipSQueak.executeQuery(PipSQueak.java:243)
>         at PipSQueak.runTest(PipSQueak.java:81)
>         at PipSQueak.main(PipSQueak.java:404)
> Caused by: java.sql.SQLException: Failure while trying to get next result batch.
>         at org.apache.drill.jdbc.DrillCursor.next(DrillCursor.java:110)
>         at org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:90)
>         at org.apache.drill.jdbc.DrillResultSet.execute(DrillResultSet.java:44)
>         at net.hydromatic.avatica.AvaticaConnection.executeQueryInternal(AvaticaConnection.java:404)
>         ... 5 more
> Caused by: org.apache.drill.exec.rpc.RpcException: Failure while running fragment. Reading past RLE/BitPacking stream. [2cc52105-15ac-49ba-a93b-37cf8d47e3c8]
>         at org.apache.drill.exec.rpc.user.QueryResultHandler.batchArrived(QueryResultHandler.java:77)
>         at org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:90)
>         at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:52)
>         at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:34)
>         at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:60)
>         at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:181)
>         at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:165)
>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:318)
>         at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:318)
>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:318)
>         at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:318)
>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:125)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378)
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350)
>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>         at java.lang.Thread.run(Thread.java:744)
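For reference, the JDBC trace above comes from a client program (PipSQueak in the stack frames) driving the Drill JDBC driver. A minimal client exercising the same call path (connect, executeQuery, iterate the ResultSet) might look like the sketch below. This is an illustration only: the ZooKeeper connection URL, the dfs.tpch workspace, and the abbreviated WHERE clause are assumptions, not details from the report.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch of a Drill JDBC client similar to the one in the trace above.
// The ZooKeeper quorum in the URL and the dfs.tpch workspace are assumptions;
// substitute the values for your cluster. The WHERE clause is abbreviated here;
// use the full query from the report to reproduce.
public class CompressedParquetRepro {
    public static void main(String[] args) throws Exception {
        // Explicit driver registration; harmless where JDBC 4 auto-loading already applies.
        Class.forName("org.apache.drill.jdbc.Driver");
        String url = "jdbc:drill:zk=zk1:2181,zk2:2181,zk3:2181/drill/drillbits1";
        String sql = "select l_orderkey, l_partkey, l_suppkey, l_linenumber, l_quantity, "
                + "l_extendedprice, l_discount, l_tax, l_returnflag, l_linestatus, "
                + "l_shipdate, l_commitdate, l_receiptdate, l_shipinstruct, l_shipmode, l_comment "
                + "from dfs.tpch.lineitem_imp100 "
                + "where l_orderkey > 0 and length(l_shipmode) > length(l_comment)";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            long rows = 0;
            // In the report, the "Reading past RLE/BitPacking stream" failure surfaced while a
            // result batch was being fetched (DrillCursor.next in the trace), so it can appear
            // either from executeQuery() or while iterating the ResultSet here.
            while (rs.next()) {
                rows++;
            }
            System.out.println("rows read: " + rows);
        }
    }
}

The call path in the quoted trace (AvaticaStatement.executeQuery -> DrillResultSet.execute -> DrillCursor.next) corresponds to the executeQuery() and rs.next() calls in this sketch.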


