Posted to dev@drill.apache.org by "Jacques Nadeau (JIRA)" <ji...@apache.org> on 2015/05/16 04:08:00 UTC

[jira] [Resolved] (DRILL-3110) OutOfMemoryError causes memory accounting leak

     [ https://issues.apache.org/jira/browse/DRILL-3110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jacques Nadeau resolved DRILL-3110.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 1.0.0

Fixed in e838abf
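
The commit itself is not reproduced here; below is only a minimal sketch of the failure pattern the issue title describes, i.e. an allocation that reserves bytes with a memory accountant and then hits an OutOfMemoryError before the reservation is rolled back. The class and method names (Accountant, Allocator, reserve, release) are illustrative only, not Drill's actual allocator API.

{code}
// Illustrative sketch only; Accountant/Allocator are hypothetical names,
// not Drill's real classes.
import java.util.concurrent.atomic.AtomicLong;

class Accountant {
  private final AtomicLong allocated = new AtomicLong();

  void reserve(long bytes) { allocated.addAndGet(bytes); }
  void release(long bytes) { allocated.addAndGet(-bytes); }
  long allocatedBytes()    { return allocated.get(); }
}

class Allocator {
  private final Accountant accountant = new Accountant();

  // Leaky pattern: if the physical allocation throws OutOfMemoryError,
  // the reserved bytes are never released and the accounting "leaks".
  byte[] allocateLeaky(int bytes) {
    accountant.reserve(bytes);
    return new byte[bytes];          // may throw OutOfMemoryError
  }

  // Safe pattern: roll back the reservation when the allocation fails.
  byte[] allocateSafe(int bytes) {
    accountant.reserve(bytes);
    try {
      return new byte[bytes];
    } catch (OutOfMemoryError e) {
      accountant.release(bytes);     // undo the accounting before rethrowing
      throw e;
    }
  }
}
{code}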

> OutOfMemoryError causes memory accounting leak 
> -----------------------------------------------
>
>                 Key: DRILL-3110
>                 URL: https://issues.apache.org/jira/browse/DRILL-3110
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Flow
>    Affects Versions: 1.0.0
>         Environment: > select commit_id from sys.version;
> +------------+
> | commit_id  |
> +------------+
> | 583ca4a95df2c45b5ba20b517cb1aeed48c7548e |
> +------------+
> 1 row selected (0.098 seconds)
>            Reporter: Hao Zhu
>            Assignee: Chris Westin
>             Fix For: 1.0.0
>
>
> Joining two 1 GB CSV tables results in the error below:
> {code}
> > select a.* from dfs.root.`user/hive/warehouse/passwords_csv_big` a, dfs.root.`user/hive/warehouse/passwords_csv_big` b
> . . . . . . . . . . . . . . . . . . . . . . .> where a.columns[1]=b.columns[1] limit 5;
> +------------+
> |  columns   |
> +------------+
> | ["1","787148","92921","158596","17776","896094","2"] |
> | ["1","787148","10930","348699","534058","778852","2"] |
> | ["1","787148","10930","348699","534058","778852","2"] |
> | ["1","787148","10930","348699","534058","778852","2"] |
> | ["1","787148","10930","348699","534058","778852","2"] |
> java.lang.RuntimeException: java.sql.SQLException: SYSTEM ERROR: org.apache.drill.exec.rpc.RpcException: Data not accepted downstream.
> Fragment 5:15
> [Error Id: dd25cee9-1d1d-4658-9a83-cdefcafb7031 on h3.poc.com:31010]
>   (org.apache.drill.exec.rpc.RpcException) Data not accepted downstream.
>     org.apache.drill.exec.ops.StatusHandler.success():54
>     org.apache.drill.exec.ops.StatusHandler.success():29
>     org.apache.drill.exec.rpc.ListeningCommand$DeferredRpcOutcome.success():55
>     org.apache.drill.exec.rpc.ListeningCommand$DeferredRpcOutcome.success():46
>     org.apache.drill.exec.rpc.data.DataTunnel$ThrottlingOutcomeListener.success():133
>     org.apache.drill.exec.rpc.data.DataTunnel$ThrottlingOutcomeListener.success():116
>     org.apache.drill.exec.rpc.CoordinationQueue$RpcListener.set():98
>     org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode():243
>     org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode():188
>     io.netty.handler.codec.MessageToMessageDecoder.channelRead():89
>     io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
>     io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
>     io.netty.handler.timeout.IdleStateHandler.channelRead():254
>     io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
>     io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
>     io.netty.handler.codec.MessageToMessageDecoder.channelRead():103
>     io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
>     io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
>     io.netty.handler.codec.ByteToMessageDecoder.channelRead():242
>     io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
>     io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
>     io.netty.channel.ChannelInboundHandlerAdapter.channelRead():86
>     io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead():339
>     io.netty.channel.AbstractChannelHandlerContext.fireChannelRead():324
>     io.netty.channel.DefaultChannelPipeline.fireChannelRead():847
>     io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady():618
>     io.netty.channel.epoll.EpollEventLoop.processReady():329
>     io.netty.channel.epoll.EpollEventLoop.run():250
>     io.netty.util.concurrent.SingleThreadEventExecutor$2.run():111
>     java.lang.Thread.run():745
>         at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2514)
>         at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2148)
>         at sqlline.SqlLine.print(SqlLine.java:1809)
>         at sqlline.SqlLine$Commands.execute(SqlLine.java:3766)
>         at sqlline.SqlLine$Commands.sql(SqlLine.java:3663)
>         at sqlline.SqlLine.dispatch(SqlLine.java:889)
>         at sqlline.SqlLine.begin(SqlLine.java:763)
>         at sqlline.SqlLine.start(SqlLine.java:498)
>         at sqlline.SqlLine.main(SqlLine.java:460)
> {code}
> It can be worked around by changing drill.exec.buffer.size.
> My understanding is that "drill.exec.buffer.size" should only affect performance; it should not cause the SQL to fail, right?
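> For reference, a minimal sketch of where that setting would live, assuming the usual HOCON layout of drill-override.conf; the value 16 is purely illustrative, not a recommendation:
> {code}
> # drill-override.conf -- illustrative excerpt only
> drill.exec: {
>   buffer.size: 16   # illustrative value; the default shipped with Drill may differ
> }
> {code}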



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)