Posted to user@drill.apache.org by Anup Tiwari <an...@games24x7.com> on 2016/05/03 12:19:28 UTC

"java.lang.OutOfMemoryError: Java heap space" error which in-turn kills drill bit of one of the node

Hi All,

Sometimes I get the error below while creating a table in Drill from a
Hive table:

*"java.lang.OutOfMemoryError: Java heap space"*, which in turn kills the
Drillbit on the node where I executed the query.

*Query Type :-*

create table glv_abc as select sessionid, max(serverTime) as max_serverTime
from hive.hive_logs_daily
where log_date = '2016-05-02'
group by sessionid;


Kindly help me in this.

Please find *output of drillbit.log* below :-

2016-05-03 15:33:15,628 [28d7890f-a7d6-b55e-3853-23f1ea828751:frag:2:12] ERROR o.a.drill.common.CatastrophicFailure - Catastrophic Failure Occurred, exiting. Information message: Unable to handle out of memory condition in FragmentExecutor.
java.lang.OutOfMemoryError: Java heap space
        at hive.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:755) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
        at hive.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:494) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
        at hive.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
        at hive.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:208) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
        at hive.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:201) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
        at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:206) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
        at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:62) ~[drill-hive-exec-shaded-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.store.hive.HiveRecordReader.next(HiveRecordReader.java:321) ~[drill-storage-hive-core-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:191) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:94) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:129) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.test.generated.HashAggregatorGen731.doWork(HashAggTemplate.java:314) ~[na:na]
        at org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext(HashAggBatch.java:133) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:129) ~[drill-java-exec-1.6.0.jar:1.6.0]
        at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.6.0.jar:1.6.0]
2016-05-03 15:33:16,648 [Drillbit-ShutdownHook#0] INFO
o.apache.drill.exec.server.Drillbit - Received shutdown request.
2016-05-03 15:33:16,669 [28d7890f-a7d6-b55e-3853-23f1ea828751:frag:2:16]
INFO  o.a.d.e.w.fragment.FragmentExecutor -
28d7890f-a7d6-b55e-3853-23f1ea828751:2:16: State change requested RUNNING
--> FAILED
2016-05-03 15:33:16,670 [28d7890f-a7d6-b55e-3853-23f1ea828751:frag:2:16]
INFO  o.a.d.e.w.fragment.FragmentExecutor -
28d7890f-a7d6-b55e-3853-23f1ea828751:2:16: State change requested FAILED
--> FINISHED
2016-05-03 15:33:16,675 [28d7890f-a7d6-b55e-3853-23f1ea828751:frag:2:16]
ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IOException:
Filesystem closed

Fragment 2:16

[Error Id: 8604418f-ac5e-4e79-b66b-cd7d779b38f7 on namenode:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR:
IOException: Filesystem closed


Regards,
*Anup*

Re: "java.lang.OutOfMemoryError: Java heap space" error which in-turn kills drill bit of one of the node

Posted by Abhishek Girish <ab...@gmail.com>.
Can you try bumping up the Drill heap memory and restarting the Drillbits?
This looks related to DRILL-3678.

Refer to http://drill.apache.org/docs/configuring-drill-memory/
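For reference, the heap setting lives in conf/drill-env.sh on each node; a
minimal sketch is below. The 8G/10G values are only illustrative placeholders,
not a recommendation -- size them to your machines:

```shell
# conf/drill-env.sh -- applies per Drillbit; edit on every node in the cluster.
# Values below are illustrative examples, not tuned recommendations.
export DRILL_HEAP="8G"                # JVM heap (-Xms/-Xmx); the OOM in this thread exhausted this pool
export DRILL_MAX_DIRECT_MEMORY="10G"  # off-heap direct memory used by Drill's value vectors
```

After editing the file on all nodes, restart each Drillbit (e.g. with
bin/drillbit.sh restart) so the new JVM options take effect.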

On Tue, May 3, 2016 at 3:19 AM, Anup Tiwari <an...@games24x7.com>
wrote:

> [full quoted text of the original message above; snipped]