Posted to user@flume.apache.org by George Blazer <gb...@gmail.com> on 2015/08/09 17:36:04 UTC

Unable to deliver event. Exception follows.

Hello there,

My Flume agent seems to be getting stuck in a weird state and I'm not sure
what to do about it.

The metrics endpoint makes it look like Flume is relatively happy, but I'm
seeing the error below.

curl localhost:5653/metrics
{"SINK.s3-sink":{"BatchCompleteCount":"0","ConnectionFailedCount":"0","EventDrainAttemptCount":"0","ConnectionCreatedCount":"0","Type":"SINK","BatchEmptyCount":"0","ConnectionClosedCount":"0","EventDrainSuccessCount":"0","StopTime":"0","StartTime":"1439133984193","BatchUnderflowCount":"0"},"SOURCE.spooling-directory":{"OpenConnectionCount":"0","Type":"SOURCE","AppendBatchAcceptedCount":"0","AppendBatchReceivedCount":"1","EventAcceptedCount":"0","AppendReceivedCount":"0"}

typekit@prod-polka-51145e82:/var/log/flume-ng$ ls -la
/mnt/flume/data/log-1955.meta
-rw-r--r-- 1 root root 47 Jul 26 15:57 /mnt/flume/data/log-1955.meta

The error I see is

2015-08-09 15:31:24,236 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
java.lang.IllegalStateException: Channel closed [channel=fileChannel]. Due to java.io.FileNotFoundException: /mnt/flume/data/log-1955.meta (Too many open files)
    at org.apache.flume.channel.file.FileChannel.createTransaction(FileChannel.java:340)
    at org.apache.flume.channel.BasicChannelSemantics.getTransaction(BasicChannelSemantics.java:122)
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:368)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.FileNotFoundException: /mnt/flume/data/log-1955.meta (Too many open files)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:146)
    at org.apache.flume.channel.file.LogFileV3$SequentialReader.<init>(LogFileV3.java:318)
    at org.apache.flume.channel.file.LogFileFactory.getSequentialReader(LogFileFactory.java:165)
    at org.apache.flume.channel.file.ReplayHandler.replayLog(ReplayHandler.java:264)
    at org.apache.flume.channel.file.Log.doReplay(Log.java:519)
    at org.apache.flume.channel.file.Log.replay(Log.java:445)
    at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:290)
    at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

Is /metrics basically useless for checking the health of Flume?

Please help.

-George

Re: Unable to deliver event. Exception follows.

Posted by George Blazer <gb...@gmail.com>.
Thanks, will try

On Monday, August 10, 2015, Ashish <pa...@gmail.com> wrote:

> Looks like either the file limit is set too low or there is some file
> descriptor leakage. Assuming the former, try standard *nix tools like
> ulimit and lsof to find the open fds and, if needed, increase the
> ulimit to a suitable value.
>
> HTH !
>

Re: Unable to deliver event. Exception follows.

Posted by Ashish <pa...@gmail.com>.
Looks like either the file limit is set too low or there is some file
descriptor leakage. Assuming the former, try standard *nix tools like
ulimit and lsof to find the open fds and, if needed, increase the
ulimit to a suitable value.
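
Roughly, something along these lines (the pgrep pattern below is just a
guess at how your agent shows up in the process list; adjust as needed):

  # soft limit for open files in the current shell
  ulimit -n

  # pid of the running agent (org.apache.flume.node.Application is the
  # usual main class for flume-ng, but check your own process list)
  FLUME_PID=$(pgrep -f org.apache.flume.node.Application | head -1)

  # how many files the agent actually has open right now
  ls /proc/$FLUME_PID/fd | wc -l
  lsof -p $FLUME_PID | wc -l

  # the limit the agent itself is running under
  grep 'Max open files' /proc/$FLUME_PID/limits

If the open-file count is close to the limit, raise it (for example via
/etc/security/limits.conf, or with "ulimit -n 65536" in whatever script
starts the agent) and restart Flume.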

HTH !


-- 
thanks
ashish

Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal