Posted to issues@drill.apache.org by "Khurram Faraaz (JIRA)" <ji...@apache.org> on 2018/04/13 22:02:00 UTC

[jira] [Created] (DRILL-6329) TPC-DS Query 66 failed due to OOM

Khurram Faraaz created DRILL-6329:
-------------------------------------

             Summary: TPC-DS Query 66 failed due to OOM
                 Key: DRILL-6329
                 URL: https://issues.apache.org/jira/browse/DRILL-6329
             Project: Apache Drill
          Issue Type: Bug
          Components: Execution - Flow
    Affects Versions: 1.14.0
            Reporter: Khurram Faraaz
         Attachments: 252f0f20-2774-43d7-ec31-911ee0f5f330.sys.drill, TPCDS_Query_66.sql, TPCDS_Query_66_PLAN.txt

TPC-DS Query 66 failed after 27 minutes on Drill 1.14.0 on a 4-node cluster against SF1 parquet data (dfs.tpcds_sf1_parquet_views). The query, its profile, and its plan are attached.

This appears to be a regression; the same query ran fine on Drill 1.10.0.

On Drill 1.10.0 (git.commit.id: bbcf4b76), the query completed successfully in 9.026 seconds.
On Drill 1.14.0 (git.commit.id.abbrev: da24113), query 66 failed with an OutOfMemoryException after running for 27 minutes.
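
A minimal sqlline repro sketch, assuming the attached TPCDS_Query_66.sql has been copied to a local path (the path below is a placeholder, not taken from this report):

{noformat}
-- switch to the SF1 parquet views schema mentioned above, then run the attached query
USE dfs.tpcds_sf1_parquet_views;
!run /path/to/TPCDS_Query_66.sql
{noformat}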

Stack trace from the sqlline console (no stack trace was written to drillbit.log):
{noformat}
Error: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.

Too little memory available
Fragment 2:0

[Error Id: 5636a939-a318-4b59-b3e8-9eb93f6b82f3 on qa102-45.qa.lab:31010]

(org.apache.drill.exec.exception.OutOfMemoryException) Too little memory available
 org.apache.drill.exec.test.generated.HashAggregatorGen7120.delayedSetup():409
 org.apache.drill.exec.test.generated.HashAggregatorGen7120.doWork():579
 org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext():176
 org.apache.drill.exec.record.AbstractRecordBatch.next():164
 org.apache.drill.exec.record.AbstractRecordBatch.next():119
 org.apache.drill.exec.record.AbstractRecordBatch.next():109
 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():134
 org.apache.drill.exec.record.AbstractRecordBatch.next():164
 org.apache.drill.exec.physical.impl.BaseRootExec.next():105
 org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():93
 org.apache.drill.exec.physical.impl.BaseRootExec.next():95
 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():292
 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():279
 java.security.AccessController.doPrivileged():-2
 javax.security.auth.Subject.doAs():422
 org.apache.hadoop.security.UserGroupInformation.doAs():1595
 org.apache.drill.exec.work.fragment.FragmentExecutor.run():279
 org.apache.drill.common.SelfCleaningRunnable.run():38
 java.util.concurrent.ThreadPoolExecutor.runWorker():1149
 java.util.concurrent.ThreadPoolExecutor$Worker.run():624
 java.lang.Thread.run():748 (state=,code=0)
java.sql.SQLException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.

Too little memory available
Fragment 2:0

[Error Id: 5636a939-a318-4b59-b3e8-9eb93f6b82f3 on qa102-45.qa.lab:31010]

(org.apache.drill.exec.exception.OutOfMemoryException) Too little memory available
 org.apache.drill.exec.test.generated.HashAggregatorGen7120.delayedSetup():409
 org.apache.drill.exec.test.generated.HashAggregatorGen7120.doWork():579
 org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext():176
 org.apache.drill.exec.record.AbstractRecordBatch.next():164
 org.apache.drill.exec.record.AbstractRecordBatch.next():119
 org.apache.drill.exec.record.AbstractRecordBatch.next():109
 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():134
 org.apache.drill.exec.record.AbstractRecordBatch.next():164
 org.apache.drill.exec.physical.impl.BaseRootExec.next():105
 org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():93
 org.apache.drill.exec.physical.impl.BaseRootExec.next():95
 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():292
 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():279
 java.security.AccessController.doPrivileged():-2
 javax.security.auth.Subject.doAs():422
 org.apache.hadoop.security.UserGroupInformation.doAs():1595
 org.apache.drill.exec.work.fragment.FragmentExecutor.run():279
 org.apache.drill.common.SelfCleaningRunnable.run():38
 java.util.concurrent.ThreadPoolExecutor.runWorker():1149
 java.util.concurrent.ThreadPoolExecutor$Worker.run():624
 java.lang.Thread.run():748
 
...
Caused by: org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: One or more nodes ran out of memory while executing the query.

Too little memory available
Fragment 2:0

[Error Id: 5636a939-a318-4b59-b3e8-9eb93f6b82f3 on qa102-45.qa.lab:31010]

(org.apache.drill.exec.exception.OutOfMemoryException) Too little memory available
 org.apache.drill.exec.test.generated.HashAggregatorGen7120.delayedSetup():409
 org.apache.drill.exec.test.generated.HashAggregatorGen7120.doWork():579
 org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext():176
 org.apache.drill.exec.record.AbstractRecordBatch.next():164
 org.apache.drill.exec.record.AbstractRecordBatch.next():119
 org.apache.drill.exec.record.AbstractRecordBatch.next():109
 org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
 org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():134
 org.apache.drill.exec.record.AbstractRecordBatch.next():164
 org.apache.drill.exec.physical.impl.BaseRootExec.next():105
 org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext():93
 org.apache.drill.exec.physical.impl.BaseRootExec.next():95
 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():292
 org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():279
 java.security.AccessController.doPrivileged():-2
 javax.security.auth.Subject.doAs():422
 org.apache.hadoop.security.UserGroupInformation.doAs():1595
 org.apache.drill.exec.work.fragment.FragmentExecutor.run():279
 org.apache.drill.common.SelfCleaningRunnable.run():38
 java.util.concurrent.ThreadPoolExecutor.runWorker():1149
 java.util.concurrent.ThreadPoolExecutor$Worker.run():624
 java.lang.Thread.run():748
 
{noformat}
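
The trace points at HashAggregatorGen.delayedSetup(), i.e. the hash aggregate in fragment 2:0 could not reserve its initial memory. As a triage sketch only (these are standard Drill options, not settings used in this report, and the values are illustrative), one might inspect the drillbits' direct memory and memory-related options, and probe whether raising the per-node query memory or enabling the hash-agg fallback changes the outcome:

{noformat}
-- inspect per-drillbit memory and the current memory-related options
SELECT * FROM sys.memory;
SELECT * FROM sys.options WHERE name LIKE '%hashagg%' OR name LIKE '%memory%';

-- illustrative probes only (values are examples, not recommendations):
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;  -- e.g. 4 GB instead of the usual 2 GB default
ALTER SESSION SET `drill.exec.hashagg.fallback.enabled` = true;             -- let the hash agg fall back to pre-spill behavior when memory is limited
{noformat}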


