Posted to user@drill.apache.org by Anup Tiwari <an...@games24x7.com> on 2018/03/13 06:32:31 UTC

Re: [1.9.0] : UserException: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0 and then SYSTEM ERROR: IOException: Failed to shutdown streamer

Hi All,
We have been getting the "IllegalReferenceCountException" issue again for a few
queries over the last 2 days, and we are currently on Drill 1.12.0. Can anybody
help me understand the exact reason behind this?
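
For reference, the exception itself is Netty's reference-counting guard: every
DrillBuf is a reference-counted Netty buffer, and any read after its count has
dropped to zero fails exactly this way. A minimal sketch against plain Netty
4.x (nothing Drill-specific; the class name is only for illustration):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;

    public class RefCntZeroDemo {
        public static void main(String[] args) {
            ByteBuf buf = Unpooled.buffer(16);   // a fresh buffer starts with refCnt == 1
            buf.writeByte(42);
            buf.release();                       // refCnt drops to 0; the memory is reclaimed
            buf.getByte(0);                      // throws IllegalReferenceCountException: refCnt: 0
        }
    }

What the error alone does not tell us is which operator released the buffer too
early, and that is the part I am hoping someone can explain.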

On Thu, Dec 14, 2017 at 4:52 PM, Anup Tiwari <anup.tiwari@games24x7.com> wrote:
Hi Kunal,

Please find below the answers to your questions:

1. Setup description:
Number of Nodes: 5
RAM/Node: 32GB
Cores/Node: 8
DRILL_MAX_DIRECT_MEMORY="20G"
DRILL_HEAP="16G"

2. What queries were you running and against what kind of dataset: the same
type of queries as mentioned earlier in this thread.
Dataset: Drill tables created from a Hive Parquet table, which is in turn
created from JSON log files.

3. How frequently is it occurring: 2-3 times a month.


Please find the Drill logs below:

[Error Id: e4cf470d-5aa8-4b9a-b8dd-d6201996cabe on host1:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0

Fragment 3:13

[Error Id: e4cf470d-5aa8-4b9a-b8dd-d6201996cabe on host1:31010]
  at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:544) ~[drill-common-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293) [drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262) [drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.10.0.jar:1.10.0]
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_72]
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_72]
  at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
Caused by: io.netty.util.IllegalReferenceCountException: refCnt: 0
  at io.netty.buffer.AbstractByteBuf.ensureAccessible(AbstractByteBuf.java:1178) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
  at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:115) ~[drill-memory-base-1.10.0.jar:4.0.27.Final]
  at io.netty.buffer.DrillBuf.checkBytes(DrillBuf.java:141) ~[drill-memory-base-1.10.0.jar:4.0.27.Final]
  at org.apache.drill.exec.expr.fn.impl.ByteFunctionHelpers.compare(ByteFunctionHelpers.java:99) ~[vector-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.test.generated.ProjectorGen3570.doEval(ProjectorTemplate.java:187) ~[na:na]
  at org.apache.drill.exec.test.generated.ProjectorGen3570.projectRecords(ProjectorTemplate.java:67) ~[na:na]
  at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.doWork(ProjectRecordBatch.java:199) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:93) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.test.generated.HashAggregatorGen120.doWork(HashAggTemplate.java:312) ~[na:na]
  at org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext(HashAggBatch.java:143) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226) ~[drill-java-exec-1.10.0.jar:1.10.0]
  at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_72]
  at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_72]
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
  at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226) [drill-java-exec-1.10.0.jar:1.10.0]
  ... 4 common frames omitted
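
If I read the trace correctly, the generated projector (ProjectorGen3570.doEval
calling ByteFunctionHelpers.compare) is reading a DrillBuf whose reference
count has already hit zero, i.e. a use-after-release inside Drill's
project/hash-aggregate pipeline rather than anything wrong in the query text.
The generic Netty contract that such code has to follow (a sketch of the
retain/release discipline, not Drill's actual operator code) is:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;

    public class RetainBeforeUseDemo {
        // Every consumer takes its own reference before reading and drops it afterwards.
        static byte read(ByteBuf buf) {
            buf.retain();              // refCnt: 1 -> 2; the buffer cannot vanish under us
            try {
                return buf.getByte(0);
            } finally {
                buf.release();         // refCnt: 2 -> 1
            }
        }

        public static void main(String[] args) {
            ByteBuf buf = Unpooled.buffer(8).writeByte(7);
            System.out.println(read(buf));  // prints 7
            buf.release();                  // owner's final release; refCnt == 0
        }
    }

A missing retain() (or an extra release()) somewhere on that path would produce
exactly the trace above.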

Regards,
Anup Tiwari

On Tue, Dec 12, 2017 at 11:46 AM, Kunal Khatua <kk...@mapr.com>  wrote:
Sorry, I meant that Drill shut down a *query* prematurely. When a query
completes, all the related threads (fragments) need to perform a cleanup and
give their resources back to the pool.
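
To illustrate the failure mode generically (this is plain Netty, not Drill's
actual cleanup path): if cleanup drops the last reference to a shared buffer
while another thread is still reading from it, the reader hits exactly this
refCnt: 0 error. A minimal sketch:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;

    public class CleanupRaceDemo {
        public static void main(String[] args) throws Exception {
            ByteBuf shared = Unpooled.buffer(8).writeByte(7);  // refCnt == 1

            Thread reader = new Thread(() -> {
                try {
                    Thread.sleep(100);     // the reader is still mid-work...
                    shared.getByte(0);     // ...but cleanup has already run
                } catch (Exception e) {
                    e.printStackTrace();   // IllegalReferenceCountException: refCnt: 0
                }
            });
            reader.start();

            shared.release();              // "cleanup" drops the last reference too early
            reader.join();
        }
    }

In a healthy run, cleanup releases a buffer only after every fragment that
holds it has finished.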

Ideally, this should not need to be handled by the application. So, what
would be good to know is:

1. Setup description
2. What queries were you running and against what kind of dataset
3. How frequently is it occurring.

The Drill logs also tend to have a stack trace for such errors, so it helps if
you can share that too.

~Kunal

-----Original Message-----
From: Anup Tiwari [mailto:anup.tiwari@games24x7.com]
Sent: Friday, December 08, 2017 12:35 AM
To: user@drill.apache.org
Subject: Re: [1.9.0] : UserException: SYSTEM ERROR:
IllegalReferenceCountException: refCnt: 0 and then SYSTEM ERROR: IOException:
Failed to shutdown streamer

Hi Kunal,

I was executing a query similar to the one shared earlier in this thread. Also,
regarding your comment that *this is a system error and the message appears to
hint that Drill shut down something prematurely*: I have checked all nodes, and
the Drillbits are running properly.

Note: we are using Drill 1.10.0.

Regards,
*Anup Tiwari*

On Thu, Dec 7, 2017 at 10:33 PM, Kunal Khatua <kk...@mapr.com> wrote:

> What is it that you were trying to do when you encountered this?
>
> This is a system error, and the message appears to hint that Drill shut
> down something prematurely and is unable to account for that.
>
> Kunal
>
>
> From: Anup Tiwari
> Sent: Wednesday, December 6, 7:46 PM
> Subject: Re: [1.9.0] : UserException: SYSTEM ERROR:
> IllegalReferenceCountException: refCnt: 0 and then SYSTEM ERROR:
> IOException: Failed to shutdown streamer
> To: user@drill.apache.org
>
>
> Hi All,
>
> As asked earlier in this thread, can someone explain how to handle
> *UserException: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0*?
> The error itself does not explain what or where the real problem is. And if
> we execute the same query (the one that gets the above error) in Hive, it
> works.
>
> Regards,
> *Anup Tiwari*
>
> On Mon, Dec 12, 2016 at 5:07 PM, Anup Tiwari wrote:
>
>> Hi Aman,
>>
>> Sorry for the delayed response. We are executing this query on our ~150GB
>> logs and, as I mentioned earlier, a CTAS with the removed conditions alone
>> got executed successfully, so I don't know which sample data I should
>> share (since I don't know the pattern).
>>
>> Can you tell me in which scenarios "IllegalReferenceCountException" is
>> thrown and how to handle it in different scenarios?
>>
>> Regards,
>> *Anup Tiwari*
>>
>> On Thu, Dec 8, 2016 at 10:55 PM, Aman Sinha wrote:
>>
>>> Hi Anup,
>>> Since your original query was working on 1.6 and failed in 1.9, could
>>> you please file a JIRA for this? It sounds like a regression related to
>>> the evaluation of a Project expression (based on the stack trace). Since
>>> there are several CASE exprs, it is quite likely something related to
>>> their evaluation. It would be great if you can provide some sample data
>>> for someone to debug.
>>> Thanks.
>>>
>>> On Thu, Dec 8, 2016 at 12:50 AM, Anup Tiwari wrote:
>>>
>>>> Hi,
>>>>
>>>> I removed a few conditions from my query, and then it worked fine.
>>>>
>>>> Also, can someone tell me in which scenarios
>>>> "*IllegalReferenceCountException*" is thrown and how to handle it in
>>>> different scenarios? I got this in another query too: removing some
>>>> conditions made it work, yet when I executed those removed conditions
>>>> alone in a CTAS, it got executed successfully.
>>>>
>>>> Regards,
>>>> *Anup Tiwari*
>>>>
>>>> On Wed, Dec 7, 2016 at 12:22 PM, Anup Tiwari wrote:
>>>>
>>>>> Hi Team,
>>>>>
>>>>> I am getting the below 2 errors in one of my queries, which was
>>>>> working fine on 1.6. Please help me out with this:
>>>>>
>>>>> 1. UserException: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0
>>>>> 2. SYSTEM ERROR: IOException: Failed to shutdown streamer
>>>>>
>>>>> Please find below the query and its stack trace:
>>>>>
>>>>> *Query:*
>>>>>
>>>>> create table a_tt3_reg_login as
>>>>> select sessionid,
>>>>>
>>>>> count(distinct (case when ((( event = 'e.a' and ajaxUrl like '%/ab/pL%t=r%' )
>>>>> or ( (Base64Conv(Response) like '%st%tr%' and Base64Conv(Response) not like
>>>>> '%error%') and ajaxUrl like '%/sign/ter%' )) OR ( event = 'e.a' and ajaxUrl
>>>>> like '%/player/ter/ter.htm%' and Base64Conv(Response) like '%st%tr%ter%tr%')
>>>>> OR (id = '/ter/thyou.htm' and url = '/pla/natlob.htm')) then sessionid end) )
>>>>> as regs,
>>>>>
>>>>> count(distinct (case when ( ajaxUrl like '%/signup/poLo%t=log%' and event =
>>>>> 'e.a' ) or ( event = 'e.a' and ajaxUrl like '%j_spring_security_check%' and
>>>>> Base64Conv(Response) like '%st%tr%') then sessionid end) ) as login,
>>>>>
>>>>> count(distinct (case when ((ajaxUrl like
>>>>> '/pl%/loadResponsePage.htm%fD=true&sta=yes%' or ajaxUrl like
>>>>> '/pl%/loadResponsePage.htm%fD=true&sta=YES%') OR (ajaxUrl like
>>>>> 'loadSuccessPage.do%fD=true&sta=yes%' or ajaxUrl like
>>>>> 'loadSuccessPage.do%fD=true&sta=YES%')) then sessionid end) ) as fd,
>>>>>
>>>>> count(distinct (case when ((ajaxUrl like
>>>>> '/pl%/loadResponsePage.htm%fD=false&sta=yes%' or ajaxUrl like
>>>>> '/pl%/loadResponsePage.htm%fD=false&sta=YES%') OR (ajaxUrl like
>>>>> 'loadSuccessPage.do%fD=false&sta=yes%' or ajaxUrl like
>>>>> 'loadSuccessPage.do%fD=false&sta=YES%')) then sessionid end) ) as rd
>>>>>
>>>>> from tt2
>>>>> group by sessionid;
>>>>>
>>>>> Error: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0
>>>>>
>>>>> Fragment 14:19
>>>>>
>>>>> [Error Id: e4659753-f8d0-403c-9eec-0ff6f2e30dd9 on namenode:31010] (state=,code=0)
>>>>>
>>>>> *Stack Trace From Drillbit.log:*
>>>>>
>>>>> [Error Id: e4659753-f8d0-403c-9eec-0ff6f2e30dd9 on namenode:31010]
>>>>> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IllegalReferenceCountException: refCnt: 0
>>>>>
>>>>> Fragment 14:19
>>>>>
>>>>> [Error Id: e4659753-f8d0-403c-9eec-0ff6f2e30dd9 on namenode:31010]
>>>>>   at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543) ~[drill-common-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293) [drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262) [drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.9.0.jar:1.9.0]
>>>>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_74]
>>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_74]
>>>>>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_74]
>>>>> Caused by: io.netty.util.IllegalReferenceCountException: refCnt: 0
>>>>>   at io.netty.buffer.AbstractByteBuf.ensureAccessible(AbstractByteBuf.java:1178) ~[netty-buffer-4.0.27.Final.jar:4.0.27.Final]
>>>>>   at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:115) ~[drill-memory-base-1.9.0.jar:4.0.27.Final]
>>>>>   at io.netty.buffer.DrillBuf.chk(DrillBuf.java:147) ~[drill-memory-base-1.9.0.jar:4.0.27.Final]
>>>>>   at io.netty.buffer.DrillBuf.getByte(DrillBuf.java:775) ~[drill-memory-base-1.9.0.jar:4.0.27.Final]
>>>>>   at org.apache.drill.exec.expr.fn.impl.CharSequenceWrapper.isAscii(CharSequenceWrapper.java:143) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.expr.fn.impl.CharSequenceWrapper.setBuffer(CharSequenceWrapper.java:106) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.test.generated.ProjectorGen980.doEval(ProjectorTemplate.java:776) ~[na:na]
>>>>>   at org.apache.drill.exec.test.generated.ProjectorGen980.projectRecords(ProjectorTemplate.java:62) ~[na:na]
>>>>>   at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.doWork(ProjectRecordBatch.java:199) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:93) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.test.generated.HashAggregatorGen33.doWork(HashAggTemplate.java:313) ~[na:na]
>>>>>   at org.apache.drill.exec.physical.impl.aggregate.HashAggBatch.innerNext(HashAggBatch.java:144) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_74]
>>>>>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_74]
>>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226) [drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   ... 4 common frames omitted
>>>>>
>>>>> 2016-12-07 11:47:54,616 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:2:1: State change requested RUNNING --> CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,616 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:2:1: State to report: CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,617 [27b85671-2a57-7d5f-18a5-680566b07067:frag:2:1] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:2:1: State change requested CANCELLATION_REQUESTED --> FINISHED
>>>>> 2016-12-07 11:47:54,617 [27b85671-2a57-7d5f-18a5-680566b07067:frag:2:1] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:2:1: State to report: CANCELLED
>>>>> 2016-12-07 11:47:54,617 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:1: State change requested RUNNING --> CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,663 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:1: State to report: CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,664 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:1] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:1: State change requested CANCELLATION_REQUESTED --> FINISHED
>>>>> 2016-12-07 11:47:54,722 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:1] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:1: State to report: CANCELLED
>>>>> 2016-12-07 11:47:54,675 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:5: State change requested RUNNING --> CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,727 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:5: State to report: CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,727 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:5] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:5: State change requested CANCELLATION_REQUESTED --> FINISHED
>>>>> 2016-12-07 11:47:54,727 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:5] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:5: State to report: CANCELLED
>>>>> 2016-12-07 11:47:54,733 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:9: State change requested RUNNING --> CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,733 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:9: State to report: CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,733 [drill-executor-652] WARN o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:2:1 not found in the work bus.
>>>>> 2016-12-07 11:47:54,734 [drill-executor-624] WARN o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:5 not found in the work bus.
>>>>> 2016-12-07 11:47:54,734 [drill-executor-621] WARN o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:1 not found in the work bus.
>>>>> 2016-12-07 11:47:54,780 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:9] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:9: State change requested CANCELLATION_REQUESTED --> FINISHED
>>>>> 2016-12-07 11:47:54,780 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:9] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:9: State to report: CANCELLED
>>>>> 2016-12-07 11:47:54,781 [drill-executor-625] WARN o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:9 not found in the work bus.
>>>>> 2016-12-07 11:47:54,796 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State change requested RUNNING --> CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,796 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State to report: CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,796 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State to report: CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,797 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:13] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State change requested CANCELLATION_REQUESTED --> FINISHED
>>>>> 2016-12-07 11:47:54,797 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:17: State change requested RUNNING --> CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,847 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:13] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:13: State to report: CANCELLED
>>>>> 2016-12-07 11:47:54,847 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:17: State to report: CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,847 [drill-executor-626] WARN o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:13 not found in the work bus.
>>>>> 2016-12-07 11:47:54,855 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:21: State change requested RUNNING --> CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,855 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:21: State to report: CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,855 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:17] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:17: State change requested CANCELLATION_REQUESTED --> FINISHED
>>>>> 2016-12-07 11:47:54,855 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:17] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:17: State to report: CANCELLED
>>>>> 2016-12-07 11:47:54,855 [drill-executor-628] WARN o.a.d.exec.rpc.control.WorkEventBus - Fragment 27b85671-2a57-7d5f-18a5-680566b07067:7:17 not found in the work bus.
>>>>> 2016-12-07 11:47:54,855 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:21] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:7:21: State change requested CANCELLATION_REQUESTED --> FINISHED
>>>>> 2016-12-07 11:47:54,855 [27b85671-2a57-7d5f-18a5-680566b07067:frag:7:21] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:7:21: State to report: CANCELLED
>>>>> 2016-12-07 11:47:54,856 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:8:1: State change requested RUNNING --> CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,856 [CONTROL-rpc-event-queue] INFO o.a.d.e.w.f.FragmentStatusReporter - 27b85671-2a57-7d5f-18a5-680566b07067:8:1: State to report: CANCELLATION_REQUESTED
>>>>> 2016-12-07 11:47:54,857 [27b85671-2a57-7d5f-18a5-680566b07067:frag:8:1] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:8:1: State change requested CANCELLATION_REQUESTED --> FINISHED
>>>>> ....
>>>>>
>>>>> 2016-12-07 11:47:55,172 [27b85671-2a57-7d5f-18a5-680566b07067:frag:1:1] INFO o.a.d.e.w.fragment.FragmentExecutor - 27b85671-2a57-7d5f-18a5-680566b07067:1:1: State change requested FAILED --> FINISHED
>>>>> 2016-12-07 11:47:55,174 [27b85671-2a57-7d5f-18a5-680566b07067:frag:1:1] ERROR o.a.d.e.w.fragment.FragmentExecutor - SYSTEM ERROR: IOException: Failed to shutdown streamer
>>>>>
>>>>> Fragment 1:1
>>>>>
>>>>> [Error Id: 594a2ba9-6e58-4602-861e-8333f4356752 on namenode:31010]
>>>>> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IOException: Failed to shutdown streamer
>>>>>
>>>>> Fragment 1:1
>>>>>
>>>>> [Error Id: 594a2ba9-6e58-4602-861e-8333f4356752 on namenode:31010]
>>>>>   at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:543) ~[drill-common-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor.sendFinalState(FragmentExecutor.java:293) [drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:160) [drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:262) [drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.common.SelfCleaningRunnable.run(SelfCleaningRunnable.java:38) [drill-common-1.9.0.jar:1.9.0]
>>>>>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_74]
>>>>>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_74]
>>>>>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_74]
>>>>> Caused by: java.io.IOException: Failed to shutdown streamer
>>>>>   at org.apache.hadoop.hdfs.DFSOutputStream.closeThreads(DFSOutputStream.java:2187) ~[hadoop-hdfs-2.7.1.jar:na]
>>>>>   at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2235) ~[hadoop-hdfs-2.7.1.jar:na]
>>>>>   at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2204) ~[hadoop-hdfs-2.7.1.jar:na]
>>>>>   at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) ~[hadoop-common-2.7.1.jar:na]
>>>>>   at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106) ~[hadoop-common-2.7.1.jar:na]
>>>>>   at org.apache.drill.exec.store.easy.json.JsonRecordWriter.cleanup(JsonRecordWriter.java:246) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.WriterRecordBatch.closeWriter(WriterRecordBatch.java:180) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.WriterRecordBatch.innerNext(WriterRecordBatch.java:128) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226) ~[drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   at java.security.AccessController.doPrivileged(Native Method) ~[na:1.8.0_74]
>>>>>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[na:1.8.0_74]
>>>>>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) ~[hadoop-common-2.7.1.jar:na]
>>>>>   at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226) [drill-java-exec-1.9.0.jar:1.9.0]
>>>>>   ... 4 common frames omitted
>>>>>
>>>>> Regards,
>>>>> *Anup Tiwari*



Regards,
Anup Tiwari