Posted to issues@drill.apache.org by "Khurram Faraaz (JIRA)" <ji...@apache.org> on 2017/03/23 05:29:41 UTC
[jira] [Commented] (DRILL-4301) OOM : Unable to allocate sv2 for 1000 records, and not enough batchGroups to spill.
[ https://issues.apache.org/jira/browse/DRILL-4301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937755#comment-15937755 ]
Khurram Faraaz commented on DRILL-4301:
---------------------------------------
Tried this on Drill 1.10.0 commit id: b657d44f
We no longer see the OOM; however, we do see an exception while trying to prune the partition.
{noformat}
apache drill 1.10.0-SNAPSHOT
"json ain't no thang"
0: jdbc:drill:schema=dfs.tmp> select * from dfs.`/drill/testdata/metadata_caching/fewtypes_boolpartition` where bool_col = true;
+----------+--------------+-------------+-----------+------------------------+----------------+------------------------+------------+----------------+-----------+
| int_col | bigint_col | date_col | time_col | timestamp_col | interval_col | varchar_col | float_col | double_col | bool_col |
+----------+--------------+-------------+-----------+------------------------+----------------+------------------------+------------+----------------+-----------+
| 11 | 980347589 | 2000-04-01 | 00:01:00 | 1997-02-16 17:32:01.0 | P1DT9045.100S | KfijUIwre | 3433.3 | 87364.5 | true |
| 2 | 24509823475 | 1957-06-13 | 01:00:00 | 1997-01-02 00:00:00.0 | P1DT9045S | jhgduitweriuoert | 3243.32 | 664522.332 | true |
| 3 | 3475789234 | 1996-02-28 | 02:03:00 | 2001-09-22 18:19:20.0 | P1DT9000S | gfdstweopiu | 234533.2 | 234522.3221 | true |
| 7 | 7520398475 | 1997-02-28 | 23:59:00 | 1997-02-12 17:32:01.0 | P2Y | ioerutklsdfASDgerGWEr | 211.2 | 7.982763409E8 | true |
| 9 | 509238475 | 1997-03-01 | 15:36:39 | 1997-02-14 17:32:01.0 | P1DT9000S | IOUfiuodsfIUfjkh | 8923.23 | 98273.455 | true |
| 15 | 2983475089 | 2040-04-10 | 12:31:00 | 1957-02-16 17:32:01.0 | P10DT9000S | NvvdfHVG | 5365.643 | 8767985.43 | true |
| 18 | 4582345 | 1999-03-08 | 15:37:39 | 1997-02-16 17:32:01.0 | P1Y2M | uiuikjk | 345.34 | 6.879118943E7 | true |
| 19 | 982 | 1999-01-18 | 15:36:39 | 2097-02-16 17:32:01.0 | P1DT9045.100S | kguitroevn | 645.2321 | 87623.34 | true |
| 21 | 3459807235 | 1999-04-08 | 15:37:39 | 1996-02-29 17:32:01.0 | P18582D | kghglhui | 539878.56 | 7676234.33 | true |
+----------+--------------+-------------+-----------+------------------------+----------------+------------------------+------------+----------------+-----------+
9 rows selected (0.419 seconds)
{noformat}
In drillbit.log, the above query produces this exception while trying to prune the partition:
java.lang.UnsupportedOperationException: Unsupported type: BIT
{noformat}
2017-03-23 05:18:55,167 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.drill.exec.work.foreman.Foreman - Query text for query id 272ca53f-8d4a-3286-973d-c8bbd1d35721: select * from dfs.`/drill/testdata/metadata_caching/fewtypes_boolpartition` where bool_col = true
2017-03-23 05:18:55,263 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2017-03-23 05:18:55,263 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2017-03-23 05:18:55,273 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.exec.store.parquet.Metadata - Took 5 ms to read metadata from cache file
2017-03-23 05:18:55,274 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.exec.store.parquet.Metadata - No directories were modified. Took 1 ms to check modification time of 1 directories
2017-03-23 05:18:55,274 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2017-03-23 05:18:55,279 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.exec.store.parquet.Metadata - Took 4 ms to read metadata from cache file
2017-03-23 05:18:55,280 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.exec.store.parquet.Metadata - No directories were modified. Took 1 ms to check modification time of 1 directories
2017-03-23 05:18:55,280 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
2017-03-23 05:18:55,282 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning class: org.apache.drill.exec.planner.logical.partition.PruneScanRule$DirPruneScanFilterOnScanRule
2017-03-23 05:18:55,282 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.e.p.l.partition.PruneScanRule - No partition columns are projected from the scan..continue. Total pruning elapsed time: 0 ms
2017-03-23 05:18:55,283 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.e.p.l.partition.PruneScanRule - Beginning partition pruning, pruning class: org.apache.drill.exec.planner.logical.partition.ParquetPruneScanRule$2
2017-03-23 05:18:55,283 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.e.p.l.partition.PruneScanRule - Total elapsed time to build and analyze filter tree: 0 ms
2017-03-23 05:18:55,284 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] WARN o.a.d.e.p.l.partition.PruneScanRule - Exception while trying to prune partition.
java.lang.UnsupportedOperationException: Unsupported type: BIT
at org.apache.drill.exec.store.parquet.ParquetGroupScan.populatePruningVector(ParquetGroupScan.java:564) ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.ParquetPartitionDescriptor.populatePartitionVectors(ParquetPartitionDescriptor.java:100) ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.logical.partition.PruneScanRule.doOnMatch(PruneScanRule.java:258) ~[drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.logical.partition.ParquetPruneScanRule$2.onMatch(ParquetPruneScanRule.java:98) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:326) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:515) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:392) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:255) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:125) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:207) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:194) [calcite-core-1.4.0-drill-r19.jar:1.4.0-drill-r19]
at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:393) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:343) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:326) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:241) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:290) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:168) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:131) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:79) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1050) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:281) [drill-java-exec-1.10.0-SNAPSHOT.jar:1.10.0-SNAPSHOT]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
2017-03-23 05:18:55,284 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.e.p.l.partition.PruneScanRule - Total pruning elapsed time: 1 ms
2017-03-23 05:18:55,290 [272ca53f-8d4a-3286-973d-c8bbd1d35721:foreman] INFO o.a.d.e.s.p.ParquetPushDownFilter - Took 0 ms to apply filter on parquet row groups.
2017-03-23 05:18:55,302 [272ca53f-8d4a-3286-973d-c8bbd1d35721:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 272ca53f-8d4a-3286-973d-c8bbd1d35721:0:0: State change requested AWAITING_ALLOCATION --> RUNNING
2017-03-23 05:18:55,302 [272ca53f-8d4a-3286-973d-c8bbd1d35721:frag:0:0] INFO o.a.d.e.w.f.FragmentStatusReporter - 272ca53f-8d4a-3286-973d-c8bbd1d35721:0:0: State to report: RUNNING
2017-03-23 05:18:55,306 [272ca53f-8d4a-3286-973d-c8bbd1d35721:frag:0:0] INFO o.a.d.e.c.ClassCompilerSelector - Java compiler policy: DEFAULT, Debug option: true
2017-03-23 05:18:55,321 [272ca53f-8d4a-3286-973d-c8bbd1d35721:frag:0:0] INFO o.a.d.e.c.ClassCompilerSelector - Java compiler policy: DEFAULT, Debug option: true
2017-03-23 05:18:55,332 [272ca53f-8d4a-3286-973d-c8bbd1d35721:frag:0:0] INFO o.a.d.e.c.ClassCompilerSelector - Java compiler policy: DEFAULT, Debug option: true
2017-03-23 05:18:55,369 [272ca53f-8d4a-3286-973d-c8bbd1d35721:frag:0:0] INFO o.a.d.e.w.fragment.FragmentExecutor - 272ca53f-8d4a-3286-973d-c8bbd1d35721:0:0: State change requested RUNNING --> FINISHED
2017-03-23 05:18:55,369 [272ca53f-8d4a-3286-973d-c8bbd1d35721:frag:0:0] INFO o.a.d.e.w.f.FragmentStatusReporter - 272ca53f-8d4a-3286-973d-c8bbd1d35721:0:0: State to report: FINISHED
{noformat}
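The stack trace points at ParquetGroupScan.populatePruningVector, which apparently dispatches on the partition column's minor type and has no case for BIT (boolean). A minimal sketch of that failure mode, with illustrative enum values and method names that are not Drill's actual API:

```java
import java.util.List;

public class PruningSketch {
    // A few of the minor types a Parquet partition column might carry.
    enum MinorType { INT, BIGINT, VARCHAR, BIT }

    // Populate a pruning vector for one partition column; any type without
    // an explicit case falls through to the same error seen in drillbit.log.
    static Object[] populatePruningVector(MinorType type, List<String> raw) {
        switch (type) {
            case INT:
                return raw.stream().map(Integer::valueOf).toArray();
            case BIGINT:
                return raw.stream().map(Long::valueOf).toArray();
            case VARCHAR:
                return raw.toArray();
            default:
                // BIT lands here, matching the logged stack trace.
                throw new UnsupportedOperationException("Unsupported type: " + type);
        }
    }

    public static void main(String[] args) {
        // Supported type: pruning vector is populated normally.
        Object[] ints = populatePruningVector(MinorType.INT, List.of("11", "2"));
        System.out.println(ints.length);

        // Unsupported type: the rule catches this, logs the WARN above,
        // and planning falls back to scanning without pruning.
        try {
            populatePruningVector(MinorType.BIT, List.of("true"));
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Note that the query still returns correct results because PruneScanRule catches the exception and simply skips pruning; the cost is a full scan of the boolean-partitioned directory.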
> OOM : Unable to allocate sv2 for 1000 records, and not enough batchGroups to spill.
> -----------------------------------------------------------------------------------
>
> Key: DRILL-4301
> URL: https://issues.apache.org/jira/browse/DRILL-4301
> Project: Apache Drill
> Issue Type: Sub-task
> Components: Execution - Flow
> Affects Versions: 1.5.0
> Environment: 4 node cluster
> Reporter: Khurram Faraaz
> Assignee: Paul Rogers
> Fix For: 1.10.0
>
>
> The query below, from the Functional tests, fails due to OOM:
> {code}
> select * from dfs.`/drill/testdata/metadata_caching/fewtypes_boolpartition` where bool_col = true;
> {code}
> Drill version : drill-1.5.0
> JAVA_VERSION=1.8.0
> {noformat}
> version commit_id commit_message commit_time build_email build_time
> 1.5.0-SNAPSHOT 2f0e3f27e630d5ac15cdaef808564e01708c3c55 DRILL-4190 Don't hold on to batches from left side of merge join. 20.01.2016 @ 22:30:26 UTC Unknown 20.01.2016 @ 23:48:33 UTC
> framework/framework/resources/Functional/metadata_caching/data/bool_partition1.q (connection: 808078113)
> [#1378] Query failed:
> oadd.org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.
> Unable to allocate sv2 for 1000 records, and not enough batchGroups to spill.
> batchGroups.size 0
> spilledBatchGroups.size 0
> allocated memory 48326272
> allocator limit 46684427
> Fragment 0:0
> [Error Id: 97d58ea3-8aff-48cf-a25e-32363b8e0ecd on drill-demod2:31010]
> at oadd.org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:119)
> at oadd.org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:113)
> at oadd.org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
> at oadd.org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
> at oadd.org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:67)
> at oadd.org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:374)
> at oadd.org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
> at oadd.org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:252)
> at oadd.org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123)
> at oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:285)
> at oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:257)
> at oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at oadd.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at oadd.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at oadd.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> at oadd.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
> at oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> at oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
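The quoted 1.5.0 failure boils down to a bookkeeping check in the external sort: it needs an sv2 (selection vector, 2 bytes per record) for the incoming 1000-record batch, but allocated memory (48326272) already exceeds the allocator limit (46684427), and with batchGroups.size and spilledBatchGroups.size both 0 there is nothing left to spill. A simplified sketch of that decision, with hypothetical names that are not Drill's actual code:

```java
public class Sv2Sketch {
    // Decide what the sort does when it needs an sv2 for an incoming batch.
    static String trySv2(long allocated, long limit, long sv2Bytes, int batchGroups) {
        if (allocated + sv2Bytes <= limit) {
            return "allocated sv2";
        }
        if (batchGroups > 0) {
            // In-memory batch groups can be spilled to free memory, then retry.
            return "spill a batch group, then retry";
        }
        // Nothing to spill: this is the RESOURCE ERROR in the report.
        return "Unable to allocate sv2, and not enough batchGroups to spill.";
    }

    public static void main(String[] args) {
        // Numbers from the error message: 1000 records * 2 bytes for the sv2,
        // allocated memory already past the limit, zero batch groups.
        System.out.println(trySv2(48326272L, 46684427L, 2L * 1000, 0));
    }
}
```

The fix for 1.10.0 (the managed external sort) reworks this accounting so the sort reserves memory for the sv2 up front instead of failing after the limit is already exceeded.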
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)