Posted to user@flume.apache.org by lulynn_2008 <lu...@163.com> on 2014/03/21 07:34:27 UTC

what will gracefully shut down flume?

Hi All,


The flume agent was started at 1:10 and shut itself down at 2:08. There were no errors, just a graceful shutdown. This has happened several times.
My question is: what could possibly gracefully shut down flume? Or which part of the environment should I pay attention to in order to trace the error or find the root cause?


Thanks

Re: what will gracefully shut down flume?

Posted by Mike Percy <mp...@apache.org>.
How are you starting Flume? What platform / environment are you running on? Did you write your own init scripts, or are you using a vendor Hadoop distribution (e.g. Cloudera) or something else (e.g. using Bigtop directly)? On Linux, if you are writing your own init scripts, then running Flume via nohup might help with the signal handling.
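For a hand-rolled setup, something along these lines might work (paths, config file name, and agent name are just an example, adjust for your layout):

nohup bin/flume-ng agent --conf conf --conf-file <your-flume.conf> \
    --name <agent-name> > flume-agent.out 2>&1 &

nohup launches the agent with SIGHUP ignored, so the hangup signal sent when your terminal session closes won't trigger a graceful shutdown, and the trailing & detaches it from the shell.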

Mike

Sent from my iPhone

> On Mar 23, 2014, at 11:15 PM, lulynn_2008  <lu...@163.com> wrote:
> 
> Thanks. Do you know how to find whatever is sending "a SIGHUP" or "a SIGTERM"?
> At the beginning, flume worked normally. After we made some changes, flume became unreliable. There must be something that made flume shut down, right? Is it possible to do something to make flume work again, like stopping certain services or killing certain sessions?
> 
> 
> 
> At 2014-03-21 19:46:13,"Christopher Shannon" <cs...@gmail.com> wrote:
> I have also experienced this. A SIGHUP or a SIGTERM will gracefully shut it down. So look for anything in your system sending those. Pretty much any other signal will kill it outright.
> 
>> On Friday, March 21, 2014, lulynn_2008 <lu...@163.com> wrote:
>> Hi All,
>> 
>> The flume agent was started at 1:10 and shut itself down at 2:08. There were no errors, just a graceful shutdown. This has happened several times.
>> My question is: what could possibly gracefully shut down flume? Or which part of the environment should I pay attention to in order to trace the error or find the root cause?
>> 
>> Thanks
> 
> 

Re: what will gracefully shut down flume?

Posted by Christopher Shannon <cs...@gmail.com>.
kill -HUP <some-pid>

is one example. Some vendor distros might actively look for flume agents to
shut down as part of their administrative process.
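If you want to catch the sender in the act, here is one sketch (this assumes Linux with auditd available; the key name flume_kill is arbitrary):

auditctl -a always,exit -F arch=b64 -S kill -k flume_kill
# after the next shutdown, see which pid/uid issued the kill:
ausearch -k flume_kill

Or attach strace to the agent and wait; when the signal is delivered, strace prints the sender's pid in the siginfo:

strace -p <flume-pid> -e trace=signal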

On Monday, March 24, 2014, lulynn_2008 <lu...@163.com> wrote:

> Thanks. Do you know how to find whatever is sending "a SIGHUP" or "a
> SIGTERM"?
> At the beginning, flume worked normally. After we made some changes, flume
> became unreliable. There must be something that made flume shut down,
> right? Is it possible to do something to make flume work again, like
> stopping certain services or killing certain sessions?
>
>
>
> At 2014-03-21 19:46:13, "Christopher Shannon" <cshannon108@gmail.com>
> wrote:
>
> I have also experienced this. A SIGHUP or a SIGTERM will gracefully shut
> it down. So look for anything in your system sending those. Pretty much
> any other signal will kill it outright.
>
> On Friday, March 21, 2014, lulynn_2008 <lulynn_2008@163.com>
> wrote:
>
>> Hi All,
>>
>> The flume agent was started at 1:10 and shut itself down at 2:08. There
>> were no errors, just a graceful shutdown. This has happened several times.
>> My question is: what could possibly gracefully shut down flume? Or which
>> part of the environment should I pay attention to in order to trace the
>> error or find the root cause?
>>
>> Thanks
>>
>>
>>
>
>

Re:Re: what will gracefully shut down flume?

Posted by lulynn_2008 <lu...@163.com>.
Thanks. Do you know how to find whatever is sending "a SIGHUP" or "a SIGTERM"?
At the beginning, flume worked normally. After we made some changes, flume became unreliable. There must be something that made flume shut down, right? Is it possible to do something to make flume work again, like stopping certain services or killing certain sessions?




At 2014-03-21 19:46:13, "Christopher Shannon" <cs...@gmail.com> wrote:
I have also experienced this. A SIGHUP or a SIGTERM will gracefully shut it down. So look for anything in your system sending those. Pretty much any other signal will kill it outright.

On Friday, March 21, 2014, lulynn_2008 <lu...@163.com> wrote:

Hi All,


The flume agent was started at 1:10 and shut itself down at 2:08. There were no errors, just a graceful shutdown. This has happened several times.
My question is: what could possibly gracefully shut down flume? Or which part of the environment should I pay attention to in order to trace the error or find the root cause?


Thanks



Re: what will gracefully shut down flume?

Posted by Christopher Shannon <cs...@gmail.com>.
I have also experienced this. A SIGHUP or a SIGTERM will gracefully shut it
down. So look for anything in your system sending those. Pretty much any
other signal will kill it outright.
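You can verify the difference by hand (a sketch; <flume-pid> is the agent's
JVM process id):

kill -TERM <flume-pid>   # JVM runs Flume's node-shutdownHook, clean stop
kill -KILL <flume-pid>   # cannot be caught, no hook runs, process just dies

The graceful path is just the JVM running its registered shutdown hooks on
SIGTERM/SIGINT/SIGHUP; SIGKILL never reaches the process's handlers, so the
hook never fires.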

On Friday, March 21, 2014, lulynn_2008 <lu...@163.com> wrote:

> Hi All,
>
> The flume agent was started at 1:10 and shut itself down at 2:08. There
> were no errors, just a graceful shutdown. This has happened several times.
> My question is: what could possibly gracefully shut down flume? Or which
> part of the environment should I pay attention to in order to trace the
> error or find the root cause?
>
> Thanks
>
>
>

Re:what will gracefully shut down flume?

Posted by lulynn_2008 <lu...@163.com>.
Before the flume node stopped, there were no errors. There are some exceptions after the node started stopping, but I am not sure whether the shutdown is related to them.

Here is the log:

20 Mar 2014 10:19:51,558 INFO  [hdfs-k13-call-runner-0] (org.apache.flume.sink.hdfs.BucketWriter.doOpen:215)  - Creating /hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_13.1395328790791.bz2.tmp
20 Mar 2014 10:19:51,664 INFO  [pool-44-thread-1] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171)  - [id: 0x033d033d, /10.190.121.85:51090 => /10.90.121.22:41400] OPEN
20 Mar 2014 10:19:51,668 INFO  [pool-45-thread-2] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171)  - [id: 0x033d033d, /10.190.121.85:51090 => /10.90.121.22:41400] BOUND: /10.90.121.22:41400
20 Mar 2014 10:19:51,668 INFO  [pool-45-thread-2] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171)  - [id: 0x033d033d, /10.190.121.85:51090 => /10.90.121.22:41400] CONNECTED: /10.190.121.85:51090
20 Mar 2014 10:20:23,856 INFO  [node-shutdownHook] (org.apache.flume.node.FlumeNode.stop:67)  - Flume node stopping - tealeaf
20 Mar 2014 10:20:23,857 INFO  [node-shutdownHook] (org.apache.flume.lifecycle.LifecycleSupervisor.stop:78)  - Stopping lifecycle supervisor 31
20 Mar 2014 10:20:23,859 INFO  [node-shutdownHook] (org.apache.flume.conf.file.AbstractFileConfigurationProvider.stop:91)  - Configuration provider stopping
20 Mar 2014 10:20:23,861 INFO  [node-shutdownHook] (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stop:215)  - Node manager stopping
20 Mar 2014 10:20:23,862 INFO  [node-shutdownHook] (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:68)  - Shutting down configuration: { sourceRunners:{r1=EventDrivenSourceRunner: { source:Avro source r1: { bindAddress: 0.0.0.0, port: 41400 } }} sinkRunners:{k1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@34093409 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k2=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3dd73dd7 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k3=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3e463e46 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k4=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3eb53eb5 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k20=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3f243f24 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k5=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3f933f93 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k10=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@40024002 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k6=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@40714071 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k11=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@40e040e0 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k7=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@414f414f counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k12=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@41be41be counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k8=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@422d422d counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k13=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@429c429c counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k14=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@439d439d counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k9=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@432e432e counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k15=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@440c440c counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k16=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@447b447b counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k17=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@44ea44ea counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k18=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@45594559 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }, k19=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@45c845c8 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }} channels:{c1=org.apache.flume.channel.MemoryChannel{name: c1}} }
20 Mar 2014 10:20:23,862 INFO  [node-shutdownHook] (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:72)  - Stopping Source r1
20 Mar 2014 10:20:23,863 INFO  [node-shutdownHook] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:156)  - Stopping component: EventDrivenSourceRunner: { source:Avro source r1: { bindAddress: 0.0.0.0, port: 41400 } }
20 Mar 2014 10:20:23,863 INFO  [node-shutdownHook] (org.apache.flume.source.AvroSource.stop:173)  - Avro source r1 stopping: Avro source r1: { bindAddress: 0.0.0.0, port: 41400 }
20 Mar 2014 10:20:23,867 INFO  [pool-45-thread-1] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171)  - [id: 0x52c752c7, /10.90.121.138:52631 :> /10.90.121.22:41400] DISCONNECTED
20 Mar 2014 10:20:23,867 INFO  [pool-45-thread-1] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171)  - [id: 0x52c752c7, /10.90.121.138:52631 :> /10.90.121.22:41400] UNBOUND
20 Mar 2014 10:20:23,868 INFO  [pool-45-thread-1] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171)  - [id: 0x52c752c7, /10.90.121.138:52631 :> /10.90.121.22:41400] CLOSED
20 Mar 2014 10:20:23,868 INFO  [pool-45-thread-1] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.channelClosed:209)  - Connection to /10.90.121.138:52631 disconnected.
20 Mar 2014 10:20:23,882 ERROR [pool-45-thread-2] (org.apache.flume.source.AvroSource.appendBatch:261)  - Avro source r1: Unable to process event batch. Exception follows.
org.apache.flume.ChannelException: Unable to put batch on required channel: org.apache.flume.channel.MemoryChannel{name: c1}
	at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)
	at org.apache.flume.source.AvroSource.appendBatch(AvroSource.java:259)
	at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
	at java.lang.reflect.Method.invoke(Method.java:611)
	at org.apache.avro.ipc.specific.SpecificResponder.respond(SpecificResponder.java:88)
	at org.apache.avro.ipc.Responder.respond(Responder.java:149)
	at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:188)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
	at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:792)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:321)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:303)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:220)
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
	at java.lang.Thread.run(Thread.java:738)
Caused by: org.apache.flume.ChannelException: java.lang.InterruptedException
	at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:96)
	at org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:80)
	at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:189)
	... 28 more
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1313)
	at java.util.concurrent.Semaphore.tryAcquire(Semaphore.java:568)
	at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doPut(MemoryChannel.java:80)
	at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
	... 30 more
20 Mar 2014 10:20:23,886 WARN  [pool-45-thread-2] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.exceptionCaught:201)  - Unexpected exception from downstream.
java.nio.channels.ClosedChannelException
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:673)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:400)
	at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:120)
	at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:59)
	at org.jboss.netty.channel.Channels.write(Channels.java:733)
	at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:65)
	at org.jboss.netty.channel.Channels.write(Channels.java:712)
	at org.jboss.netty.channel.Channels.write(Channels.java:679)
	at org.jboss.netty.channel.AbstractChannel.write(AbstractChannel.java:245)
	at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:192)
	at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:321)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:303)
	at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:220)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:94)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:364)
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:238)
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:38)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
	at java.lang.Thread.run(Thread.java:738)
20 Mar 2014 10:20:23,888 INFO  [pool-45-thread-2] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171)  - [id: 0x033d033d, /10.190.121.85:51090 :> /10.90.121.22:41400] DISCONNECTED
20 Mar 2014 10:20:23,888 INFO  [pool-45-thread-2] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171)  - [id: 0x033d033d, /10.190.121.85:51090 :> /10.90.121.22:41400] UNBOUND
20 Mar 2014 10:20:23,888 INFO  [pool-45-thread-2] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171)  - [id: 0x033d033d, /10.190.121.85:51090 :> /10.90.121.22:41400] CLOSED
20 Mar 2014 10:20:23,888 INFO  [pool-45-thread-2] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.channelClosed:209)  - Connection to /10.190.121.85:51090 disconnected.
20 Mar 2014 10:20:23,888 INFO  [node-shutdownHook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SOURCE, name: r1 stopped
20 Mar 2014 10:20:23,989 INFO  [node-shutdownHook] (org.apache.flume.source.AvroSource.stop:195)  - Avro source r1 stopped. Metrics: SOURCE:r1{src.append.accepted=0, src.events.accepted=94700, src.append-batch.accepted=947, src.open-connection.count=0, src.append.received=0, src.events.received=94800, src.append-batch.received=948}
20 Mar 2014 10:20:23,989 INFO  [node-shutdownHook] (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:82)  - Stopping Sink k1
20 Mar 2014 10:20:23,990 INFO  [node-shutdownHook] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:156)  - Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@34093409 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }
20 Mar 2014 10:20:23,990 WARN  [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout:372)  - Unexpected Exception null
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1313)
	at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:238)
	at java.util.concurrent.FutureTask.get(FutureTask.java:102)
	at org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout(HDFSEventSink.java:345)
	at org.apache.flume.sink.hdfs.HDFSEventSink.flush(HDFSEventSink.java:741)
	at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:443)
	at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
	at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
	at java.lang.Thread.run(Thread.java:738)
20 Mar 2014 10:20:23,992 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:460)  - process failed
java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1313)
	at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:238)
	at java.util.concurrent.FutureTask.get(FutureTask.java:102)
	at org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout(HDFSEventSink.java:345)
	at org.apache.flume.sink.hdfs.HDFSEventSink.flush(HDFSEventSink.java:741)
	at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:443)
	at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
	at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
	at java.lang.Thread.run(Thread.java:738)
20 Mar 2014 10:20:23,993 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.SinkRunner$PollingRunner.run:160)  - Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: java.lang.InterruptedException
	at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:464)
	at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
	at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
	at java.lang.Thread.run(Thread.java:738)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1313)
	at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:238)
	at java.util.concurrent.FutureTask.get(FutureTask.java:102)
	at org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout(HDFSEventSink.java:345)
	at org.apache.flume.sink.hdfs.HDFSEventSink.flush(HDFSEventSink.java:741)
	at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:443)
	... 3 more
20 Mar 2014 10:20:28,994 INFO  [node-shutdownHook] (org.apache.flume.sink.hdfs.HDFSEventSink.stop:475)  - Closing /hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_1
20 Mar 2014 10:20:28,999 INFO  [hdfs-k1-call-runner-8] (org.apache.flume.sink.hdfs.BucketWriter.renameBucket:427)  - Renaming hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_1.1395328790791.bz2.tmp to /hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_1.1395328790791.bz2
20 Mar 2014 10:20:29,015 INFO  [node-shutdownHook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: k1 stopped
20 Mar 2014 10:20:29,015 INFO  [node-shutdownHook] (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:82)  - Stopping Sink k2
20 Mar 2014 10:20:29,015 INFO  [lifecycleSupervisor-1-7] (org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run:215)  - Component has already been stopped SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@34093409 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0, runner.deliveryErrors=1} } }
20 Mar 2014 10:20:29,016 INFO  [node-shutdownHook] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:156)  - Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3dd73dd7 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }
20 Mar 2014 10:20:29,017 INFO  [node-shutdownHook] (org.apache.flume.sink.hdfs.HDFSEventSink.stop:475)  - Closing /hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_2
20 Mar 2014 10:20:29,021 INFO  [hdfs-k2-call-runner-0] (org.apache.flume.sink.hdfs.BucketWriter.renameBucket:427)  - Renaming /hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_2.1395328790791.bz2.tmp to /hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_2.1395328790791.bz2
20 Mar 2014 10:20:29,024 INFO  [node-shutdownHook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: k2 stopped
20 Mar 2014 10:20:29,025 INFO  [node-shutdownHook] (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:82)  - Stopping Sink k3
20 Mar 2014 10:20:29,025 INFO  [node-shutdownHook] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:156)  - Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3e463e46 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }
20 Mar 2014 10:20:29,025 INFO  [node-shutdownHook] (org.apache.flume.sink.hdfs.HDFSEventSink.stop:475)  - Closing /hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_3
20 Mar 2014 10:20:29,029 INFO  [hdfs-k3-call-runner-1] (org.apache.flume.sink.hdfs.BucketWriter.renameBucket:427)  - Renaming /hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_3.1395328790792.bz2.tmp to /hive/raw/mx_s/Tealeaf/usaa_mx_tealeaf_request_staging_ext_std_v1_dat/partition_dt=2014-03-18/tealeaf_req_3.1395328790792.bz2
20 Mar 2014 10:20:29,032 INFO  [node-shutdownHook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: k3 stopped
20 Mar 2014 10:20:29,033 INFO  [node-shutdownHook] (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.stopAllComponents:82)  - Stopping Sink k4
20 Mar 2014 10:20:29,033 INFO  [node-shutdownHook] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:156)  - Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3eb53eb5 counterGroup:{ name:null counters:{runner.backoffs.consecutive=0} } }






At 2014-03-21 14:34:27, lulynn_2008 <lu...@163.com> wrote:

Hi All,


The flume agent was started at 1:10 and shut itself down at 2:08. There were no errors, just a graceful shutdown. This has happened several times.
My question is: what could possibly gracefully shut down flume? Or which part of the environment should I pay attention to in order to trace the error or find the root cause?


Thanks