Posted to user@flume.apache.org by Matt Wise <ma...@nextdoor.com> on 2013/05/08 21:17:09 UTC

Problem with 'reload' vs 'restart' of Flume?

We're seeing problems when we try to live-reload our Flume agents rather than restart them. They seem to keep their incoming Syslog connections from the clients, but they stop sending data out to ElasticSearch (and probably to the HDFS sink as well). I see the errors below during the reload, and I'm wondering if they're related. To reproduce it, we make any change to the flume.conf file and wait until Flume detects the file change; when that happens, everything basically breaks.
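
The ERROR stack traces below are JMX registration failures: each MonitoredCounterGroup tries to register a counter MBean under an ObjectName that apparently is still registered from before the reload. As a minimal, hypothetical sketch (not Flume code) of the underlying JMX behaviour, registering the same ObjectName twice on the platform MBeanServer produces exactly this exception:

    // Minimal sketch, not Flume code: registering one ObjectName twice on the
    // platform MBeanServer raises InstanceAlreadyExistsException, which is
    // what MonitoredCounterGroup.register() reports in the traces below.
    import java.lang.management.ManagementFactory;
    import javax.management.InstanceAlreadyExistsException;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxDoubleRegister {
        public interface DemoMBean { long getCount(); }
        public static class Demo implements DemoMBean {
            public long getCount() { return 0L; }
        }

        public static void main(String[] args) throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("org.apache.flume.channel:type=fc1");
            mbs.registerMBean(new Demo(), name);      // first registration succeeds
            try {
                mbs.registerMBean(new Demo(), name);  // second attempt is rejected
            } catch (InstanceAlreadyExistsException e) {
                System.out.println("already registered: " + e.getMessage());
            }
        }
    }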

08 May 2013 19:07:32,413 ERROR [lifecycleSupervisor-1-6] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: CHANNEL, name: fc1
javax.management.InstanceAlreadyExistsException: org.apache.flume.channel:type=fc1
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
	at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
	at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
	at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:323)
	at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:679)

08 May 2013 19:07:32,418 INFO  [lifecycleSupervisor-1-8] (org.apache.flume.sink.elasticsearch.ElasticSearchSink.start:319)  - ElasticSearch sink {} started
08 May 2013 19:07:32,418 ERROR [lifecycleSupervisor-1-8] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: SINK, name: elasticsearch
javax.management.InstanceAlreadyExistsException: org.apache.flume.sink:type=elasticsearch
	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
	at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
	at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
	at org.apache.flume.sink.elasticsearch.ElasticSearchSink.start(ElasticSearchSink.java:320)
	at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
	at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
	at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:679)
08 May 2013 19:07:32,426 INFO  [lifecycleSupervisor-1-8] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: SINK, name: elasticsearch started


Re: Problem with 'reload' vs 'restart' of Flume?

Posted by Hari Shreedharan <hs...@cloudera.com>.

-- 
Hari Shreedharan


On Wednesday, May 8, 2013 at 1:19 PM, Matt Wise wrote:

> Here's a dump of most of the log output for the reload process... We basically saw all log traffic stop once the reload happened. It did not resume until we did a full restart of the daemon:
> 
> > 08 May 2013 19:07:05,140 INFO  [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:133)  - Reloading configuration file:/etc/flume-ng/conf/flume.conf
> > 08 May 2013 19:07:05,141 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,141 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:930)  - Added sinks: s3 elasticsearch Agent: agent
> > 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration.validateConfiguration:140)  - Post-validation flume configuration contains configuration for agents: [agent]
> > 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:150)  - Creating channels
> > 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:40)  - Creating instance of channel fc1 type file
> > 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205)  - Created channel fc1
> > 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:40)  - Creating instance of channel fc2 type file
> > 08 May 2013 19:07:05,153 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205)  - Created channel fc2
> > 08 May 2013 19:07:05,153 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source netcat, type netcat
> > 08 May 2013 19:07:05,153 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source syslog, type syslogtcp
> > 08 May 2013 19:07:05,154 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source avro, type avro
> > 08 May 2013 19:07:05,154 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:40)  - Creating instance of sink: s3, type: hdfs
> > 08 May 2013 19:07:05,154 INFO  [conf-file-poller-0] (org.apache.flume.sink.hdfs.HDFSEventSink.authenticate:528)  - Hadoop Security enabled: false
> > 08 May 2013 19:07:05,155 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:40)  - Creating instance of sink: elasticsearch, type: org.apache.flume.sink.elasticsearch.ElasticSearchSink
> > 08 May 2013 19:07:05,164 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:119)  - Channel fc1 connected to [netcat, syslog, avro, s3]
> > 08 May 2013 19:07:05,164 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:119)  - Channel fc2 connected to [netcat, syslog, avro, elasticsearch]
> > 08 May 2013 19:07:05,164 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:101)  - Shutting down configuration: { sourceRunners:{netcat=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:netcat,state:START} }, syslog=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:syslog,state:START} }, avro=EventDrivenSourceRunner: { source:Avro source avro: { bindAddress: localhost, port: 4000 } }} sinkRunners:{s3=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@329c393d counterGroup:{ name:null counters:{runner.backoffs.consecutive=86, runner.backoffs=86} } }, elasticsearch=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@2e71edc0 counterGroup:{ name:null counters:{runner.backoffs.consecutive=86, runner.backoffs=86} } }} channels:{fc1=FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }, fc2=FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }} }
> > 08 May 2013 19:07:05,164 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:105)  - Stopping Source netcat
> > 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:netcat,state:START} }
> > 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.source.NetcatSource.stop:190)  - Source stopping
> > 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:105)  - Stopping Source syslog
> > 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:syslog,state:START} }
> > 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.source.SyslogTcpSource.stop:123)  - Syslog TCP Source stopping...
> > 08 May 2013 19:07:05,166 INFO  [conf-file-poller-0] (org.apache.flume.source.SyslogTcpSource.stop:124)  - Metrics:{ name:null counters:{} }
> > 08 May 2013 19:07:05,166 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:105)  - Stopping Source avro
> > 08 May 2013 19:07:05,166 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: EventDrivenSourceRunner: { source:Avro source avro: { bindAddress: localhost, port: 4000 } }
> > 08 May 2013 19:07:05,167 INFO  [conf-file-poller-0] (org.apache.flume.source.AvroSource.stop:214)  - Avro source avro stopping: Avro source avro: { bindAddress: localhost, port: 4000 }
> > 08 May 2013 19:07:05,167 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SOURCE, name: avro stopped
> > 08 May 2013 19:07:05,168 INFO  [conf-file-poller-0] (org.apache.flume.source.AvroSource.stop:236)  - Avro source avro stopped. Metrics: SOURCE:avro{src.events.accepted=0, src.events.received=0, src.append.accepted=0, src.append-batch.accepted=0, src.open-connection.count=0, src.append-batch.received=0, src.append.received=0}
> > 08 May 2013 19:07:05,168 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:115)  - Stopping Sink s3
> > 08 May 2013 19:07:05,168 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@329c393d counterGroup:{ name:null counters:{runner.backoffs.consecutive=86, runner.backoffs=86} } }
> > 08 May 2013 19:07:06,795 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: s3 stopped
> > 08 May 2013 19:07:06,795 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:115)  - Stopping Sink elasticsearch
> > 08 May 2013 19:07:06,795 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@2e71edc0 counterGroup:{ name:null counters:{runner.backoffs.consecutive=87, runner.backoffs=87} } }
> > 08 May 2013 19:07:06,796 INFO  [conf-file-poller-0] (org.apache.flume.sink.elasticsearch.ElasticSearchSink.stop:333)  - ElasticSearch sink {} stopping
> > 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: elasticsearch stopped
> > 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:125)  - Stopping Channel fc1
> > 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }
> > 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.FileChannel.stop:332)  - Stopping FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }...
> > 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.Log.shutdownWorker:722)  - Attempting to shutdown background worker.
> > 08 May 2013 19:07:06,822 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$Writer.close:275)  - Closing /mnt/flume/fc1/data/log-3
> > 08 May 2013 19:07:06,822 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-2
> > 08 May 2013 19:07:06,827 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-3
> > 08 May 2013 19:07:06,833 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: CHANNEL, name: fc1 stopped
> > 08 May 2013 19:07:06,833 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:125)  - Stopping Channel fc2
> > 08 May 2013 19:07:06,833 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }
> > 08 May 2013 19:07:06,834 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.FileChannel.stop:332)  - Stopping FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }...
> > 08 May 2013 19:07:06,834 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.Log.shutdownWorker:722)  - Attempting to shutdown background worker.
> > 08 May 2013 19:07:06,834 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$Writer.close:275)  - Closing /mnt/flume/fc2/data/log-3
> > 08 May 2013 19:07:06,834 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-2
> > 08 May 2013 19:07:06,840 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-3
> > 08 May 2013 19:07:06,845 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: CHANNEL, name: fc2 stopped
> > 08 May 2013 19:07:06,846 INFO  [conf-file-poller-0] (org.mortbay.log.Slf4jLog.info:67)  - Stopped SocketConnector@0.0.0.0:41414
> > 08 May 2013 19:07:06,846 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:138)  - Starting new configuration:{ sourceRunners:{netcat=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:netcat,state:IDLE} }, syslog=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:syslog,state:IDLE} }, avro=EventDrivenSourceRunner: { source:Avro source avro: { bindAddress: localhost, port: 4000 } }} sinkRunners:{s3=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@63cd0037 counterGroup:{ name:null counters:{} } }, elasticsearch=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@27c94e11 counterGroup:{ name:null counters:{} } }} channels:{fc1=FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }, fc2=FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }} }
> > 08 May 2013 19:07:06,847 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:145)  - Starting Channel fc1
> > 08 May 2013 19:07:06,847 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.FileChannel.start:288)  - Starting FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }...
> > 08 May 2013 19:07:06,848 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.<init>:279)  - Encryption is not enabled
> > 08 May 2013 19:07:06,848 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.replay:324)  - Replay started
> > 08 May 2013 19:07:06,848 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.replay:336)  - Found NextFileID 3, from [/mnt/flume/fc1/data/log-2, /mnt/flume/fc1/data/log-3]
> > 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:47)  - Starting up with /mnt/flume/fc1/checkpoint/checkpoint and /mnt/flume/fc1/checkpoint/checkpoint.meta
> > 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:51)  - Reading checkpoint metadata from /mnt/flume/fc1/checkpoint/checkpoint.meta
> > 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.replay:372)  - Last Checkpoint Wed May 08 19:00:34 UTC 2013, queue depth = 0
> > 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.doReplay:441)  - Replaying logs with v2 replay logic
> > 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.ReplayHandler.replayLog:223)  - Starting replay of [/mnt/flume/fc1/data/log-2, /mnt/flume/fc1/data/log-3]
> > 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc1/data/log-2
> > 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:466)  - fast-forward to checkpoint position: 18767023
> > 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc1/data/log-3
> > 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:466)  - fast-forward to checkpoint position: 1225
> > 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$SequentialReader.next:491)  - Encountered EOF at 1225 in /mnt/flume/fc1/data/log-3
> > 08 May 2013 19:07:06,853 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:145)  - Starting Channel fc2
> > 08 May 2013 19:07:06,853 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.FileChannel.start:288)  - Starting FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }...
> > 08 May 2013 19:07:06,854 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.<init>:279)  - Encryption is not enabled
> > 08 May 2013 19:07:06,859 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:324)  - Replay started
> > 08 May 2013 19:07:06,859 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:336)  - Found NextFileID 3, from [/mnt/flume/fc2/data/log-2, /mnt/flume/fc2/data/log-3]
> > 08 May 2013 19:07:06,859 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:47)  - Starting up with /mnt/flume/fc2/checkpoint/checkpoint and /mnt/flume/fc2/checkpoint/checkpoint.meta
> > 08 May 2013 19:07:06,859 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:51)  - Reading checkpoint metadata from /mnt/flume/fc2/checkpoint/checkpoint.meta
> > 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:372)  - Last Checkpoint Wed May 08 19:00:34 UTC 2013, queue depth = 0
> > 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.doReplay:441)  - Replaying logs with v2 replay logic
> > 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.ReplayHandler.replayLog:223)  - Starting replay of [/mnt/flume/fc2/data/log-2, /mnt/flume/fc2/data/log-3]
> > 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc2/data/log-2
> > 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:466)  - fast-forward to checkpoint position: 18667662
> > 08 May 2013 19:07:06,861 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc2/data/log-3
> > 08 May 2013 19:07:06,861 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:466)  - fast-forward to checkpoint position: 77
> > 08 May 2013 19:07:06,861 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$SequentialReader.next:491)  - Encountered EOF at 77 in /mnt/flume/fc2/data/log-3
> > 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$SequentialReader.next:491)  - Encountered EOF at 18784321 in /mnt/flume/fc1/data/log-2
> > 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.ReplayHandler.replayLog:323)  - read: 71, put: 0, take: 0, rollback: 0, commit: 0, skip: 71, eventCount:0
> > 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.replay:404)  - Rolling /mnt/flume/fc1/data
> > 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.roll:823)  - Roll start /mnt/flume/fc1/data
> > 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$Writer.<init>:171)  - Opened /mnt/flume/fc1/data/log-4
> > 08 May 2013 19:07:06,874 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.roll:838)  - Roll end
> > 08 May 2013 19:07:06,874 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:108)  - Start checkpoint for /mnt/flume/fc1/checkpoint/checkpoint, elements to sync = 0
> > 08 May 2013 19:07:06,881 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$SequentialReader.next:491)  - Encountered EOF at 18692535 in /mnt/flume/fc2/data/log-2
> > 08 May 2013 19:07:06,881 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.ReplayHandler.replayLog:323)  - read: 256, put: 0, take: 0, rollback: 0, commit: 0, skip: 256, eventCount:0
> > 08 May 2013 19:07:06,885 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint:120)  - Updating checkpoint metadata: logWriteOrderID: 1368033285977, queueSize: 0, queueHead: 36283
> > 08 May 2013 19:07:06,888 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:404)  - Rolling /mnt/flume/fc2/data
> > 08 May 2013 19:07:06,888 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.roll:823)  - Roll start /mnt/flume/fc2/data
> > 08 May 2013 19:07:06,888 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$Writer.<init>:171)  - Opened /mnt/flume/fc2/data/log-4
> > 08 May 2013 19:07:06,890 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFileV3$MetaDataWriter.markCheckpoint:85)  - Updating log-4.meta currentPosition = 0, logWriteOrderID = 1368033285977
> > 08 May 2013 19:07:06,892 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.writeCheckpoint:898)  - Updated checkpoint for file: /mnt/flume/fc1/data/log-4 position: 0 logWriteOrderID: 1368033285977
> > 08 May 2013 19:07:06,892 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.FileChannel.start:312)  - Queue Size after replay: 0 [channel=fc1]
> > 08 May 2013 19:07:06,893 ERROR [lifecycleSupervisor-1-3] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: CHANNEL, name: fc1
> > javax.management.InstanceAlreadyExistsException: org.apache.flume.channel:type=fc1
> > at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> > at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> > at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:323)
> > at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > at java.lang.Thread.run(Thread.java:679)
> > 08 May 2013 19:07:06,893 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: CHANNEL, name: fc1 started
> > 08 May 2013 19:07:06,893 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.roll:838)  - Roll end
> > 08 May 2013 19:07:06,894 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:108)  - Start checkpoint for /mnt/flume/fc2/checkpoint/checkpoint, elements to sync = 0
> > 08 May 2013 19:07:06,901 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint:120)  - Updating checkpoint metadata: logWriteOrderID: 1368033285978, queueSize: 0, queueHead: 43655
> > 08 May 2013 19:07:06,904 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFileV3$MetaDataWriter.markCheckpoint:85)  - Updating log-4.meta currentPosition = 0, logWriteOrderID = 1368033285978
> > 08 May 2013 19:07:06,906 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.writeCheckpoint:898)  - Updated checkpoint for file: /mnt/flume/fc2/data/log-4 position: 0 logWriteOrderID: 1368033285978
> > 08 May 2013 19:07:06,906 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.FileChannel.start:312)  - Queue Size after replay: 0 [channel=fc2]
> > 08 May 2013 19:07:06,906 ERROR [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: CHANNEL, name: fc2
> > javax.management.InstanceAlreadyExistsException: org.apache.flume.channel:type=fc2
> > at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> > at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> > at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:323)
> > at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > at java.lang.Thread.run(Thread.java:679)
> > 08 May 2013 19:07:06,907 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: CHANNEL, name: fc2 started
> > 08 May 2013 19:07:06,907 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173)  - Starting Sink s3
> > 08 May 2013 19:07:06,907 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173)  - Starting Sink elasticsearch
> > 08 May 2013 19:07:06,907 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184)  - Starting Source netcat
> > 08 May 2013 19:07:06,907 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184)  - Starting Source syslog
> > 08 May 2013 19:07:06,908 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184)  - Starting Source avro
> > 08 May 2013 19:07:06,908 INFO  [conf-file-poller-0] (org.mortbay.log.Slf4jLog.info:67)  - jetty-6.1.26
> > 08 May 2013 19:07:06,908 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.sink.elasticsearch.ElasticSearchSink.start:319)  - ElasticSearch sink {} started
> > 08 May 2013 19:07:06,909 ERROR [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: SINK, name: elasticsearch
> > javax.management.InstanceAlreadyExistsException: org.apache.flume.sink:type=elasticsearch
> > at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> > at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> > at org.apache.flume.sink.elasticsearch.ElasticSearchSink.start(ElasticSearchSink.java:320)
> > at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
> > at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
> > at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > at java.lang.Thread.run(Thread.java:679)
> > 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: SINK, name: elasticsearch started
> > 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.sink.elasticsearch.ElasticSearchSink.openConnection:345)  - Using ElasticSearch hostnames: [inet[ec2-50-16-33-164.compute-1.amazonaws.com/10.137.15.126:9300], inet[ec2-54-224-137-34.compute-1.amazonaws.com/10.35.98.189:9300], inet[ec2-54-225-24-188.compute-1.amazonaws.com/10.240.47.143:9300], inet[ec2-54-242-252-107.compute-1.amazonaws.com/10.158.97.233:9300]]
> > 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-9] (org.apache.flume.source.AvroSource.start:156)  - Starting Avro source avro: { bindAddress: localhost, port: 4000 }...
> > 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-2] (org.apache.flume.source.SyslogTcpSource.start:110)  - Syslog TCP Source starting...
> > 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-5] (org.apache.flume.source.NetcatSource.start:150)  - Source starting
> > 08 May 2013 19:07:06,914 INFO  [lifecycleSupervisor-1-5] (org.apache.flume.source.NetcatSource.start:164)  - Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:4001]
> > 08 May 2013 19:07:06,912 ERROR [lifecycleSupervisor-1-4] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: SINK, name: s3
> > javax.management.InstanceAlreadyExistsException: org.apache.flume.sink:type=s3
> > at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> > at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> > at org.apache.flume.sink.hdfs.HDFSEventSink.start(HDFSEventSink.java:519)
> > at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
> > at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
> > at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > at java.lang.Thread.run(Thread.java:679)
> > 08 May 2013 19:07:06,923 INFO  [lifecycleSupervisor-1-4] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: SINK, name: s3 started
> > 08 May 2013 19:07:06,920 INFO  [conf-file-poller-0] (org.mortbay.log.Slf4jLog.info:67)  - Started SocketConnector@0.0.0.0:41414
> > 08 May 2013 19:07:06,920 INFO  [lifecycleSupervisor-1-0] (org.elasticsearch.common.logging.log4j.Log4jESLogger.internalInfo:104)  - [Flex] loaded [], sites []
> > 08 May 2013 19:07:06,933 ERROR [lifecycleSupervisor-1-9] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: SOURCE, name: avro
> > javax.management.InstanceAlreadyExistsException: org.apache.flume.source:type=avro
> > at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> > at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> > at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> > at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> > at org.apache.flume.source.AvroSource.start(AvroSource.java:169)
> > at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
> > at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> > at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > at java.lang.Thread.run(Thread.java:679)
> > 08 May 2013 19:07:06,933 INFO  [lifecycleSupervisor-1-9] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: SOURCE, name: avro started
> > 08 May 2013 19:07:06,937 INFO  [lifecycleSupervisor-1-9] (org.apache.flume.source.AvroSource.start:181)  - Avro source avro started.
> > 08 May 2013 19:11:23,075 WARN  [pool-13-thread-2] (org.apache.flume.source.SyslogUtils.buildEvent:214)  - Event created from Invalid Syslog data.
> > 
> 
> 
> .. at this point we restarted the agent entirely...
> 
At this point, it does look like your agent had started fine. I am not entirely sure why the log traffic stopped, but it does look like the Syslog source was able to read the data. Where do you not see the data: in both HDFS and ElasticSearch? Can you look at hidden files in HDFS too?
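
One way to check the HDFS side, as a rough sketch: list everything in the sink's output directory, including files the sink is still writing (those normally carry a temporary suffix such as .tmp until they are rolled). The filesystem URI and path below are placeholders, not taken from the config:

    // Hypothetical sketch: list all files (including in-progress ones) under
    // the sink's output directory. Replace the URI and path with your own.
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListSinkOutput {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"),
                                           new Configuration());
            for (FileStatus st : fs.listStatus(new Path("/flume/events"))) {
                // Files still being written usually carry a temporary suffix.
                System.out.println(st.getPath() + "  " + st.getLen() + " bytes");
            }
            fs.close();
        }
    }
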
> > 08 May 2013 19:12:09,890 INFO  [agent-shutdown-hook] (org.apache.flume.lifecycle.LifecycleSupervisor.stop:79)  - Stopping lifecycle supervisor 11
> > 08 May 2013 19:12:09,895 INFO  [agent-shutdown-hook] (org.apache.flume.source.AvroSource.stop:214)  - Avro source avro stopping: Avro source avro: { bindAddress: localhost, port: 4000 }
> > 08 May 2013 19:12:09,902 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SOURCE, name: avro stopped
> > 08 May 2013 19:12:09,902 INFO  [agent-shutdown-hook] (org.apache.flume.source.AvroSource.stop:236)  - Avro source avro stopped. Metrics: SOURCE:avro{src.events.accepted=0, src.events.received=0, src.append.accepted=0, src.append-batch.accepted=0, src.open-connection.count=0, src.append-batch.received=0, src.append.received=0}
> > 08 May 2013 19:12:09,903 INFO  [agent-shutdown-hook] (org.apache.flume.source.SyslogTcpSource.stop:123)  - Syslog TCP Source stopping...
> > 08 May 2013 19:12:09,903 INFO  [agent-shutdown-hook] (org.apache.flume.source.SyslogTcpSource.stop:124)  - Metrics:{ name:null counters:{} }
> > 08 May 2013 19:12:09,903 INFO  [agent-shutdown-hook] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider.stop:83)  - Configuration provider stopping
> > 08 May 2013 19:12:09,903 INFO  [agent-shutdown-hook] (org.apache.flume.source.NetcatSource.stop:190)  - Source stopping
> > 08 May 2013 19:12:09,904 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.FileChannel.stop:332)  - Stopping FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }...
> > 08 May 2013 19:12:09,905 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.Log.shutdownWorker:722)  - Attempting to shutdown background worker.
> > 08 May 2013 19:12:09,905 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$Writer.close:275)  - Closing /mnt/flume/fc1/data/log-4
> > 08 May 2013 19:12:09,905 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-2
> > 08 May 2013 19:12:09,911 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-3
> > 08 May 2013 19:12:09,916 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-4
> > 08 May 2013 19:12:09,922 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: CHANNEL, name: fc1 stopped
> > 08 May 2013 19:12:09,922 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.FileChannel.stop:332)  - Stopping FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }...
> > 08 May 2013 19:12:09,922 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.Log.shutdownWorker:722)  - Attempting to shutdown background worker.
> > 08 May 2013 19:12:09,922 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$Writer.close:275)  - Closing /mnt/flume/fc2/data/log-4
> > 08 May 2013 19:12:09,923 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-2
> > 08 May 2013 19:12:09,928 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-3
> > 08 May 2013 19:12:09,933 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-4
> > 08 May 2013 19:12:09,939 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: CHANNEL, name: fc2 stopped
> > 08 May 2013 19:12:09,939 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: s3 stopped
> > 08 May 2013 19:12:09,939 INFO  [agent-shutdown-hook] (org.apache.flume.sink.elasticsearch.ElasticSearchSink.stop:333)  - ElasticSearch sink {} stopping
> > 08 May 2013 19:12:09,964 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: elasticsearch stopped
> > 08 May 2013 19:12:09,964 INFO  [agent-shutdown-hook] (org.mortbay.log.Slf4jLog.info:67)  - Stopped SocketConnector@0.0.0.0:41414
> > 08 May 2013 19:12:13,630 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start:61)  - Configuration provider starting
> > 08 May 2013 19:12:13,646 INFO  [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:133)  - Reloading configuration file:/etc/flume-ng/conf/flume.conf
> > 08 May 2013 19:12:13,664 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,669 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,670 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,670 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,670 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,671 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:12:13,671 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:930)  - Added sinks: s3 elasticsearch Agent: agent
> > 08 May 2013 19:12:13,672 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:12:13,672 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:12:13,672 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:12:13,677 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,678 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:12:13,687 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,687 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,697 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,697 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,698 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:12:13,698 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,699 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:12:13,699 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,699 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> > 08 May 2013 19:12:13,700 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> > 08 May 2013 19:12:13,771 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration.validateConfiguration:140)  - Post-validation flume configuration contains configuration for agents: [agent]
> > 08 May 2013 19:12:13,773 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:150)  - Creating channels
> > 08 May 2013 19:12:13,809 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:40)  - Creating instance of channel fc1 type file
> > 08 May 2013 19:12:13,826 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205)  - Created channel fc1
> > 08 May 2013 19:12:13,826 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:40)  - Creating instance of channel fc2 type file
> > 08 May 2013 19:12:13,827 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205)  - Created channel fc2
> > 08 May 2013 19:12:13,828 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source netcat, type netcat
> > 08 May 2013 19:12:13,905 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source syslog, type syslogtcp
> > 08 May 2013 19:12:13,942 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source avro, type avro
> > 08 May 2013 19:12:13,960 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:40)  - Creating instance of sink: s3, type: hdfs
> > 08 May 2013 19:12:14,738 INFO  [conf-file-poller-0] (org.apache.flume.sink.hdfs.HDFSEventSink.authenticate:528)  - Hadoop Security enabled: false
> > 08 May 2013 19:12:14,742 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:40)  - Creating instance of sink: elasticsearch, type: org.apache.flume.sink.elasticsearch.ElasticSearchSink
> > 08 May 2013 19:12:14,802 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:119)  - Channel fc1 connected to [netcat, syslog, avro, s3]
> > 08 May 2013 19:12:14,805 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:119)  - Channel fc2 connected to [netcat, syslog, avro, elasticsearch]
> > 08 May 2013 19:12:14,828 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:138)  - Starting new configuration:{ sourceRunners:{netcat=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:netcat,state:IDLE} }, syslog=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:syslog,state:IDLE} }, avro=EventDrivenSourceRunner: { source:Avro source avro: { bindAddress: localhost, port: 4000 } }} sinkRunners:{s3=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7eb5666 counterGroup:{ name:null counters:{} } }, elasticsearch=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6b754699 counterGroup:{ name:null counters:{} } }} channels:{fc1=FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }, fc2=FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }} }
> > 08 May 2013 19:12:14,835 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:145)  - Starting Channel fc1
> > 08 May 2013 19:12:14,837 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FileChannel.start:288)  - Starting FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }...
> > 08 May 2013 19:12:14,853 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:145)  - Starting Channel fc2
> > 08 May 2013 19:12:14,854 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.FileChannel.start:288)  - Starting FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }...
> > 08 May 2013 19:12:14,875 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.<init>:279)  - Encryption is not enabled
> > 08 May 2013 19:12:14,876 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.<init>:279)  - Encryption is not enabled
> > 08 May 2013 19:12:14,877 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:324)  - Replay started
> > 08 May 2013 19:12:14,883 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:324)  - Replay started
> > 08 May 2013 19:12:14,899 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:336)  - Found NextFileID 4, from [/mnt/flume/fc2/data/log-2, /mnt/flume/fc2/data/log-3, /mnt/flume/fc2/data/log-4]
> > 08 May 2013 19:12:14,905 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:336)  - Found NextFileID 4, from [/mnt/flume/fc1/data/log-2, /mnt/flume/fc1/data/log-3, /mnt/flume/fc1/data/log-4]
> > 08 May 2013 19:12:14,922 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:47)  - Starting up with /mnt/flume/fc1/checkpoint/checkpoint and /mnt/flume/fc1/checkpoint/checkpoint.meta
> > 08 May 2013 19:12:14,922 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:51)  - Reading checkpoint metadata from /mnt/flume/fc1/checkpoint/checkpoint.meta
> > 08 May 2013 19:12:14,922 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:47)  - Starting up with /mnt/flume/fc2/checkpoint/checkpoint and /mnt/flume/fc2/checkpoint/checkpoint.meta
> > 08 May 2013 19:12:14,925 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:51)  - Reading checkpoint metadata from /mnt/flume/fc2/checkpoint/checkpoint.meta
> > 08 May 2013 19:12:14,990 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:372)  - Last Checkpoint Wed May 08 19:07:06 UTC 2013, queue depth = 0
> > 08 May 2013 19:12:15,001 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:372)  - Last Checkpoint Wed May 08 19:07:06 UTC 2013, queue depth = 0
> > 08 May 2013 19:12:15,002 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.doReplay:441)  - Replaying logs with v2 replay logic
> > 08 May 2013 19:12:15,007 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:223)  - Starting replay of [/mnt/flume/fc1/data/log-2, /mnt/flume/fc1/data/log-3, /mnt/flume/fc1/data/log-4]
> > 08 May 2013 19:12:15,012 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc1/data/log-2
> > 08 May 2013 19:12:15,011 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.doReplay:441)  - Replaying logs with v2 replay logic
> > 
> 
> 
> 
> Here's our flume agent config:
> 
> > agent.sources = avro netcat syslog
> > agent.sinks = s3 elasticsearch
> > agent.channels = fc1 fc2
> > 
> > agent.channels.fc1.type = file
> > agent.channels.fc1.checkpointDir = /mnt/flume/fc1/checkpoint
> > agent.channels.fc1.dataDirs = /mnt/flume/fc1/data
> > agent.channels.fc1.capacity = 1000000
> > agent.channels.fc1.transactionCapacity = 10000
> > 
> > agent.channels.fc2.type = file
> > agent.channels.fc2.checkpointDir = /mnt/flume/fc2/checkpoint
> > agent.channels.fc2.dataDirs = /mnt/flume/fc2/data
> > agent.channels.fc2.capacity = 1000000
> > agent.channels.fc2.transactionCapacity = 10000
> > 
> > agent.sources.avro.type = avro
> > agent.sources.avro.bind = localhost
> > agent.sources.avro.port = 4000
> > agent.sources.avro.channels = fc1 fc2
> > agent.sources.avro.interceptors = timestamp
> > agent.sources.avro.interceptors.timestamp.type = timestamp
> > 
> > agent.sources.netcat.type = netcat
> > agent.sources.netcat.bind = localhost
> > agent.sources.netcat.port = 4001
> > agent.sources.netcat.channels = fc1 fc2
> > agent.sources.netcat.interceptors = timestamp
> > agent.sources.netcat.interceptors.timestamp.type = timestamp
> > 
> > agent.sources.syslog.type = syslogtcp
> > agent.sources.syslog.host = localhost
> > agent.sources.syslog.port = 4002
> > agent.sources.syslog.eventSize = 65536
> > agent.sources.syslog.channels = fc1 fc2
> > agent.sources.syslog.interceptors = timestamp hostname
> > agent.sources.syslog.interceptors.timestamp.type = timestamp
> > agent.sources.syslog.interceptors.hostname.type = regex_extractor
> > agent.sources.syslog.interceptors.hostname.regex = ^([a-zA-Z]{3})  ([0-9]+) ([0-9]+:[0-9]+:[0-9]+.[0-9]+) ([^ ]+).*
> > agent.sources.syslog.interceptors.hostname.serializers = s1 s2 s3 s4
> > agent.sources.syslog.interceptors.hostname.serializers.s1.name = raw_month
> > agent.sources.syslog.interceptors.hostname.serializers.s2.name = raw_day
> > agent.sources.syslog.interceptors.hostname.serializers.s3.name = raw_timestamp
> > agent.sources.syslog.interceptors.hostname.serializers.s4.name = host
> > 
> > agent.sinks.s3.type = hdfs
> > agent.sinks.s3.channel = fc1
> > agent.sinks.s3.hdfs.path = s3n://XXX:XXX@XXX/flume/events/%y-%m-%d/%H
> > agent.sinks.s3.hdfs.rollInterval = 600
> > agent.sinks.s3.hdfs.rollSize = 0
> > agent.sinks.s3.hdfs.rollCount = 10000
> > agent.sinks.s3.hdfs.batchSize = 10000
> > agent.sinks.s3.hdfs.writeFormat = Text
> > agent.sinks.s3.hdfs.fileType = DataStream
> > agent.sinks.s3.hdfs.timeZone = UTC
> > agent.sinks.s3.hdfs.filePrefix = FlumeData.flume-agent-useast1-6
> > agent.sinks.s3.hdfs.fileSuffix = .avro
> > agent.sinks.s3.serializer = avro_event
> > 
> > agent.sinks.elasticsearch.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
> > agent.sinks.elasticsearch.hostNames = XXX.com:9300,YYY.com:9300,ZZZ.com:9300
> > agent.sinks.elasticsearch.indexName = flume
> > agent.sinks.elasticsearch.clusterName = flume-elasticsearch-production-useast1
> > agent.sinks.elasticsearch.batchSize = 100
> > agent.sinks.elasticsearch.ttl = 30
> > agent.sinks.elasticsearch.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchLogStashEventSerializer
> > agent.sinks.elasticsearch.channel = fc2
> 
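
A quick way to sanity-check the hostname regex in the config above is to run the same pattern outside Flume. The snippet below is only an illustration: the class name and the sample syslog line are made up, but the pattern string is the one from the interceptor config, and the four capture groups correspond to the serializers s1 through s4:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class SyslogHostnameRegexCheck {
        public static void main(String[] args) {
            // Pattern copied from agent.sources.syslog.interceptors.hostname.regex above.
            Pattern pattern = Pattern.compile(
                    "^([a-zA-Z]{3})  ([0-9]+) ([0-9]+:[0-9]+:[0-9]+.[0-9]+) ([^ ]+).*");

            // Made-up sample line; single-digit day padded with a space, as the pattern expects.
            String line = "May  8 19:07:32.413 web-1.example.com sshd[1234]: Accepted publickey for deploy";

            Matcher m = pattern.matcher(line);
            if (m.matches()) {
                System.out.println("raw_month     = " + m.group(1)); // serializer s1
                System.out.println("raw_day       = " + m.group(2)); // serializer s2
                System.out.println("raw_timestamp = " + m.group(3)); // serializer s3
                System.out.println("host          = " + m.group(4)); // serializer s4
            } else {
                System.out.println("no match: the line layout differs from what the regex expects");
            }
        }
    }

Note that, as written, the pattern requires two spaces after the month (the classic syslog padding for single-digit days); lines with two-digit days carry only one space there and would not match, which may or may not matter for this setup.
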
> On May 8, 2013, at 12:42 PM, Hari Shreedharan <hshreedharan@cloudera.com> wrote:
> > Hi Matt, 
> > 
> > This is expected. When a reload happens, Flume tries to re-register the components with JMX so that it can update the metrics, but since an instance with the same name was already registered, this exception shows up. I don't think it causes a problem, though you should confirm that you can still see the metrics. Even with these errors in the logs the components should work fine; you can see in the logs that the sink has started. These exceptions do not cause any data loss, nor do they stop components from functioning.
> > 
> > 
> > Hari 
> > 
> > -- 
> > Hari Shreedharan
> > 
> > 
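
What Hari describes is plain JMX behaviour: a given ObjectName can only be registered once per MBean server, so when a reload re-registers a counter under a name that is still registered, the server throws InstanceAlreadyExistsException and, as the log above shows, Flume logs the failure while the component still reports started. A minimal standalone sketch, not Flume's own code and with a made-up Counter MBean, reproduces the same exception and shows that unregistering the old name first avoids it:

    import java.lang.management.ManagementFactory;

    import javax.management.InstanceAlreadyExistsException;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;
    import javax.management.StandardMBean;

    public class DuplicateMBeanDemo {

        // Toy MBean interface and implementation, stand-ins for Flume's counter groups.
        public interface CounterMBean {
            long getCount();
        }

        public static class Counter implements CounterMBean {
            public long getCount() { return 0L; }
        }

        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Same naming style as the Flume log: org.apache.flume.channel:type=fc1
            ObjectName name = new ObjectName("org.apache.flume.channel:type=fc1");

            server.registerMBean(new StandardMBean(new Counter(), CounterMBean.class), name);
            try {
                // Second registration under the same, still-registered name, as happens on reload.
                server.registerMBean(new StandardMBean(new Counter(), CounterMBean.class), name);
            } catch (InstanceAlreadyExistsException e) {
                System.out.println("caught, as in the Flume log: " + e);
            }

            // Unregistering the old name first allows a clean re-registration.
            server.unregisterMBean(name);
            server.registerMBean(new StandardMBean(new Counter(), CounterMBean.class), name);
            System.out.println("re-registered: " + server.isRegistered(name));
        }
    }

To follow the suggestion of checking the metrics, the quickest confirmation is the agent's HTTP metrics output: the "Started SocketConnector@0.0.0.0:41414" Jetty lines in the log suggest the agent is running with flume.monitoring.type=http on the default port, so fetching the JSON it serves there before and after a reload would show whether the CHANNEL and SINK counters are still being updated.
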
> > On Wednesday, May 8, 2013 at 12:17 PM, Matt Wise wrote:
> > 
> > > We're seeing problems when we try to live-reload our Flume agents rather than restart them. They seem to maintain their incoming Syslog connections from the clients, but they stop sending out data to ElasticSearch (and probably the HDFS plugin as well). I see these errors during the reload, and I'm wondering if they're related. The process to reproduce for us is to make any change to the flume.conf file, and wait until Flume detects the file change. When this happens, everything basically breaks.
> > > 
> > > [... InstanceAlreadyExistsException stack traces from the original message snipped ...]
> > 
> > 
> 


Re: Problem with 'reload' vs 'restart' of Flume?

Posted by Matt Wise <ma...@nextdoor.com>.
Here's a dump of most of the log output from the reload process. We saw all log traffic stop as soon as the reload happened, and it did not resume until we did a full restart of the daemon:

> 08 May 2013 19:07:05,140 INFO  [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:133)  - Reloading configuration file:/etc/flume-ng/conf/flume.conf
> 08 May 2013 19:07:05,141 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,141 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:930)  - Added sinks: s3 elasticsearch Agent: agent
> 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:07:05,142 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,143 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:07:05,144 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration.validateConfiguration:140)  - Post-validation flume configuration contains configuration for agents: [agent]
> 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:150)  - Creating channels
> 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:40)  - Creating instance of channel fc1 type file
> 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205)  - Created channel fc1
> 08 May 2013 19:07:05,152 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:40)  - Creating instance of channel fc2 type file
> 08 May 2013 19:07:05,153 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205)  - Created channel fc2
> 08 May 2013 19:07:05,153 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source netcat, type netcat
> 08 May 2013 19:07:05,153 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source syslog, type syslogtcp
> 08 May 2013 19:07:05,154 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source avro, type avro
> 08 May 2013 19:07:05,154 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:40)  - Creating instance of sink: s3, type: hdfs
> 08 May 2013 19:07:05,154 INFO  [conf-file-poller-0] (org.apache.flume.sink.hdfs.HDFSEventSink.authenticate:528)  - Hadoop Security enabled: false
> 08 May 2013 19:07:05,155 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:40)  - Creating instance of sink: elasticsearch, type: org.apache.flume.sink.elasticsearch.ElasticSearchSink
> 08 May 2013 19:07:05,164 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:119)  - Channel fc1 connected to [netcat, syslog, avro, s3]
> 08 May 2013 19:07:05,164 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:119)  - Channel fc2 connected to [netcat, syslog, avro, elasticsearch]
> 08 May 2013 19:07:05,164 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:101)  - Shutting down configuration: { sourceRunners:{netcat=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:netcat,state:START} }, syslog=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:syslog,state:START} }, avro=EventDrivenSourceRunner: { source:Avro source avro: { bindAddress: localhost, port: 4000 } }} sinkRunners:{s3=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@329c393d counterGroup:{ name:null counters:{runner.backoffs.consecutive=86, runner.backoffs=86} } }, elasticsearch=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@2e71edc0 counterGroup:{ name:null counters:{runner.backoffs.consecutive=86, runner.backoffs=86} } }} channels:{fc1=FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }, fc2=FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }} }
> 08 May 2013 19:07:05,164 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:105)  - Stopping Source netcat
> 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:netcat,state:START} }
> 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.source.NetcatSource.stop:190)  - Source stopping
> 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:105)  - Stopping Source syslog
> 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:syslog,state:START} }
> 08 May 2013 19:07:05,165 INFO  [conf-file-poller-0] (org.apache.flume.source.SyslogTcpSource.stop:123)  - Syslog TCP Source stopping...
> 08 May 2013 19:07:05,166 INFO  [conf-file-poller-0] (org.apache.flume.source.SyslogTcpSource.stop:124)  - Metrics:{ name:null counters:{} }
> 08 May 2013 19:07:05,166 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:105)  - Stopping Source avro
> 08 May 2013 19:07:05,166 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: EventDrivenSourceRunner: { source:Avro source avro: { bindAddress: localhost, port: 4000 } }
> 08 May 2013 19:07:05,167 INFO  [conf-file-poller-0] (org.apache.flume.source.AvroSource.stop:214)  - Avro source avro stopping: Avro source avro: { bindAddress: localhost, port: 4000 }
> 08 May 2013 19:07:05,167 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SOURCE, name: avro stopped
> 08 May 2013 19:07:05,168 INFO  [conf-file-poller-0] (org.apache.flume.source.AvroSource.stop:236)  - Avro source avro stopped. Metrics: SOURCE:avro{src.events.accepted=0, src.events.received=0, src.append.accepted=0, src.append-batch.accepted=0, src.open-connection.count=0, src.append-batch.received=0, src.append.received=0}
> 08 May 2013 19:07:05,168 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:115)  - Stopping Sink s3
> 08 May 2013 19:07:05,168 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@329c393d counterGroup:{ name:null counters:{runner.backoffs.consecutive=86, runner.backoffs=86} } }
> 08 May 2013 19:07:06,795 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: s3 stopped
> 08 May 2013 19:07:06,795 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:115)  - Stopping Sink elasticsearch
> 08 May 2013 19:07:06,795 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@2e71edc0 counterGroup:{ name:null counters:{runner.backoffs.consecutive=87, runner.backoffs=87} } }
> 08 May 2013 19:07:06,796 INFO  [conf-file-poller-0] (org.apache.flume.sink.elasticsearch.ElasticSearchSink.stop:333)  - ElasticSearch sink {} stopping
> 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: elasticsearch stopped
> 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:125)  - Stopping Channel fc1
> 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }
> 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.FileChannel.stop:332)  - Stopping FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }...
> 08 May 2013 19:07:06,821 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.Log.shutdownWorker:722)  - Attempting to shutdown background worker.
> 08 May 2013 19:07:06,822 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$Writer.close:275)  - Closing /mnt/flume/fc1/data/log-3
> 08 May 2013 19:07:06,822 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-2
> 08 May 2013 19:07:06,827 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-3
> 08 May 2013 19:07:06,833 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: CHANNEL, name: fc1 stopped
> 08 May 2013 19:07:06,833 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.stopAllComponents:125)  - Stopping Channel fc2
> 08 May 2013 19:07:06,833 INFO  [conf-file-poller-0] (org.apache.flume.lifecycle.LifecycleSupervisor.unsupervise:171)  - Stopping component: FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }
> 08 May 2013 19:07:06,834 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.FileChannel.stop:332)  - Stopping FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }...
> 08 May 2013 19:07:06,834 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.Log.shutdownWorker:722)  - Attempting to shutdown background worker.
> 08 May 2013 19:07:06,834 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$Writer.close:275)  - Closing /mnt/flume/fc2/data/log-3
> 08 May 2013 19:07:06,834 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-2
> 08 May 2013 19:07:06,840 INFO  [conf-file-poller-0] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-3
> 08 May 2013 19:07:06,845 INFO  [conf-file-poller-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: CHANNEL, name: fc2 stopped
> 08 May 2013 19:07:06,846 INFO  [conf-file-poller-0] (org.mortbay.log.Slf4jLog.info:67)  - Stopped SocketConnector@0.0.0.0:41414
> 08 May 2013 19:07:06,846 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:138)  - Starting new configuration:{ sourceRunners:{netcat=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:netcat,state:IDLE} }, syslog=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:syslog,state:IDLE} }, avro=EventDrivenSourceRunner: { source:Avro source avro: { bindAddress: localhost, port: 4000 } }} sinkRunners:{s3=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@63cd0037 counterGroup:{ name:null counters:{} } }, elasticsearch=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@27c94e11 counterGroup:{ name:null counters:{} } }} channels:{fc1=FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }, fc2=FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }} }
> 08 May 2013 19:07:06,847 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:145)  - Starting Channel fc1
> 08 May 2013 19:07:06,847 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.FileChannel.start:288)  - Starting FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }...
> 08 May 2013 19:07:06,848 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.<init>:279)  - Encryption is not enabled
> 08 May 2013 19:07:06,848 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.replay:324)  - Replay started
> 08 May 2013 19:07:06,848 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.replay:336)  - Found NextFileID 3, from [/mnt/flume/fc1/data/log-2, /mnt/flume/fc1/data/log-3]
> 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:47)  - Starting up with /mnt/flume/fc1/checkpoint/checkpoint and /mnt/flume/fc1/checkpoint/checkpoint.meta
> 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:51)  - Reading checkpoint metadata from /mnt/flume/fc1/checkpoint/checkpoint.meta
> 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.replay:372)  - Last Checkpoint Wed May 08 19:00:34 UTC 2013, queue depth = 0
> 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.doReplay:441)  - Replaying logs with v2 replay logic
> 08 May 2013 19:07:06,849 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.ReplayHandler.replayLog:223)  - Starting replay of [/mnt/flume/fc1/data/log-2, /mnt/flume/fc1/data/log-3]
> 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc1/data/log-2
> 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:466)  - fast-forward to checkpoint position: 18767023
> 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc1/data/log-3
> 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:466)  - fast-forward to checkpoint position: 1225
> 08 May 2013 19:07:06,850 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$SequentialReader.next:491)  - Encountered EOF at 1225 in /mnt/flume/fc1/data/log-3
> 08 May 2013 19:07:06,853 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:145)  - Starting Channel fc2
> 08 May 2013 19:07:06,853 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.FileChannel.start:288)  - Starting FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }...
> 08 May 2013 19:07:06,854 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.<init>:279)  - Encryption is not enabled
> 08 May 2013 19:07:06,859 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:324)  - Replay started
> 08 May 2013 19:07:06,859 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:336)  - Found NextFileID 3, from [/mnt/flume/fc2/data/log-2, /mnt/flume/fc2/data/log-3]
> 08 May 2013 19:07:06,859 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:47)  - Starting up with /mnt/flume/fc2/checkpoint/checkpoint and /mnt/flume/fc2/checkpoint/checkpoint.meta
> 08 May 2013 19:07:06,859 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:51)  - Reading checkpoint metadata from /mnt/flume/fc2/checkpoint/checkpoint.meta
> 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:372)  - Last Checkpoint Wed May 08 19:00:34 UTC 2013, queue depth = 0
> 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.doReplay:441)  - Replaying logs with v2 replay logic
> 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.ReplayHandler.replayLog:223)  - Starting replay of [/mnt/flume/fc2/data/log-2, /mnt/flume/fc2/data/log-3]
> 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc2/data/log-2
> 08 May 2013 19:07:06,860 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:466)  - fast-forward to checkpoint position: 18667662
> 08 May 2013 19:07:06,861 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc2/data/log-3
> 08 May 2013 19:07:06,861 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$SequentialReader.skipToLastCheckpointPosition:466)  - fast-forward to checkpoint position: 77
> 08 May 2013 19:07:06,861 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$SequentialReader.next:491)  - Encountered EOF at 77 in /mnt/flume/fc2/data/log-3
> 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$SequentialReader.next:491)  - Encountered EOF at 18784321 in /mnt/flume/fc1/data/log-2
> 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.ReplayHandler.replayLog:323)  - read: 71, put: 0, take: 0, rollback: 0, commit: 0, skip: 71, eventCount:0
> 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.replay:404)  - Rolling /mnt/flume/fc1/data
> 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.roll:823)  - Roll start /mnt/flume/fc1/data
> 08 May 2013 19:07:06,868 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFile$Writer.<init>:171)  - Opened /mnt/flume/fc1/data/log-4
> 08 May 2013 19:07:06,874 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.roll:838)  - Roll end
> 08 May 2013 19:07:06,874 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:108)  - Start checkpoint for /mnt/flume/fc1/checkpoint/checkpoint, elements to sync = 0
> 08 May 2013 19:07:06,881 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$SequentialReader.next:491)  - Encountered EOF at 18692535 in /mnt/flume/fc2/data/log-2
> 08 May 2013 19:07:06,881 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.ReplayHandler.replayLog:323)  - read: 256, put: 0, take: 0, rollback: 0, commit: 0, skip: 256, eventCount:0
> 08 May 2013 19:07:06,885 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint:120)  - Updating checkpoint metadata: logWriteOrderID: 1368033285977, queueSize: 0, queueHead: 36283
> 08 May 2013 19:07:06,888 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:404)  - Rolling /mnt/flume/fc2/data
> 08 May 2013 19:07:06,888 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.roll:823)  - Roll start /mnt/flume/fc2/data
> 08 May 2013 19:07:06,888 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFile$Writer.<init>:171)  - Opened /mnt/flume/fc2/data/log-4
> 08 May 2013 19:07:06,890 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.LogFileV3$MetaDataWriter.markCheckpoint:85)  - Updating log-4.meta currentPosition = 0, logWriteOrderID = 1368033285977
> 08 May 2013 19:07:06,892 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.Log.writeCheckpoint:898)  - Updated checkpoint for file: /mnt/flume/fc1/data/log-4 position: 0 logWriteOrderID: 1368033285977
> 08 May 2013 19:07:06,892 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.channel.file.FileChannel.start:312)  - Queue Size after replay: 0 [channel=fc1]
> 08 May 2013 19:07:06,893 ERROR [lifecycleSupervisor-1-3] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: CHANNEL, name: fc1
> javax.management.InstanceAlreadyExistsException: org.apache.flume.channel:type=fc1
> 	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> 	at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:323)
> 	at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> 	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:679)
> 08 May 2013 19:07:06,893 INFO  [lifecycleSupervisor-1-3] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: CHANNEL, name: fc1 started
> 08 May 2013 19:07:06,893 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.roll:838)  - Roll end
> 08 May 2013 19:07:06,894 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.beginCheckpoint:108)  - Start checkpoint for /mnt/flume/fc2/checkpoint/checkpoint, elements to sync = 0
> 08 May 2013 19:07:06,901 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFile.checkpoint:120)  - Updating checkpoint metadata: logWriteOrderID: 1368033285978, queueSize: 0, queueHead: 43655
> 08 May 2013 19:07:06,904 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.LogFileV3$MetaDataWriter.markCheckpoint:85)  - Updating log-4.meta currentPosition = 0, logWriteOrderID = 1368033285978
> 08 May 2013 19:07:06,906 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.writeCheckpoint:898)  - Updated checkpoint for file: /mnt/flume/fc2/data/log-4 position: 0 logWriteOrderID: 1368033285978
> 08 May 2013 19:07:06,906 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.FileChannel.start:312)  - Queue Size after replay: 0 [channel=fc2]
> 08 May 2013 19:07:06,906 ERROR [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: CHANNEL, name: fc2
> javax.management.InstanceAlreadyExistsException: org.apache.flume.channel:type=fc2
> 	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> 	at org.apache.flume.channel.file.FileChannel.start(FileChannel.java:323)
> 	at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> 	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:679)
> 08 May 2013 19:07:06,907 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: CHANNEL, name: fc2 started
> 08 May 2013 19:07:06,907 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173)  - Starting Sink s3
> 08 May 2013 19:07:06,907 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173)  - Starting Sink elasticsearch
> 08 May 2013 19:07:06,907 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184)  - Starting Source netcat
> 08 May 2013 19:07:06,907 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184)  - Starting Source syslog
> 08 May 2013 19:07:06,908 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184)  - Starting Source avro
> 08 May 2013 19:07:06,908 INFO  [conf-file-poller-0] (org.mortbay.log.Slf4jLog.info:67)  - jetty-6.1.26
> 08 May 2013 19:07:06,908 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.sink.elasticsearch.ElasticSearchSink.start:319)  - ElasticSearch sink {} started
> 08 May 2013 19:07:06,909 ERROR [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: SINK, name: elasticsearch
> javax.management.InstanceAlreadyExistsException: org.apache.flume.sink:type=elasticsearch
> 	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> 	at org.apache.flume.sink.elasticsearch.ElasticSearchSink.start(ElasticSearchSink.java:320)
> 	at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
> 	at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
> 	at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> 	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:679)
> 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: SINK, name: elasticsearch started
> 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.sink.elasticsearch.ElasticSearchSink.openConnection:345)  - Using ElasticSearch hostnames: [inet[ec2-50-16-33-164.compute-1.amazonaws.com/10.137.15.126:9300], inet[ec2-54-224-137-34.compute-1.amazonaws.com/10.35.98.189:9300], inet[ec2-54-225-24-188.compute-1.amazonaws.com/10.240.47.143:9300], inet[ec2-54-242-252-107.compute-1.amazonaws.com/10.158.97.233:9300]]
> 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-9] (org.apache.flume.source.AvroSource.start:156)  - Starting Avro source avro: { bindAddress: localhost, port: 4000 }...
> 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-2] (org.apache.flume.source.SyslogTcpSource.start:110)  - Syslog TCP Source starting...
> 08 May 2013 19:07:06,912 INFO  [lifecycleSupervisor-1-5] (org.apache.flume.source.NetcatSource.start:150)  - Source starting
> 08 May 2013 19:07:06,914 INFO  [lifecycleSupervisor-1-5] (org.apache.flume.source.NetcatSource.start:164)  - Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:4001]
> 08 May 2013 19:07:06,912 ERROR [lifecycleSupervisor-1-4] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: SINK, name: s3
> javax.management.InstanceAlreadyExistsException: org.apache.flume.sink:type=s3
> 	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> 	at org.apache.flume.sink.hdfs.HDFSEventSink.start(HDFSEventSink.java:519)
> 	at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
> 	at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
> 	at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> 	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:679)
> 08 May 2013 19:07:06,923 INFO  [lifecycleSupervisor-1-4] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: SINK, name: s3 started
> 08 May 2013 19:07:06,920 INFO  [conf-file-poller-0] (org.mortbay.log.Slf4jLog.info:67)  - Started SocketConnector@0.0.0.0:41414
> 08 May 2013 19:07:06,920 INFO  [lifecycleSupervisor-1-0] (org.elasticsearch.common.logging.log4j.Log4jESLogger.internalInfo:104)  - [Flex] loaded [], sites []
> 08 May 2013 19:07:06,933 ERROR [lifecycleSupervisor-1-9] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:92)  - Failed to register monitored counter group for type: SOURCE, name: avro
> javax.management.InstanceAlreadyExistsException: org.apache.flume.source:type=avro
> 	at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:467)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1520)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:986)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:938)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:330)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:517)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:87)
> 	at org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:67)
> 	at org.apache.flume.source.AvroSource.start(AvroSource.java:169)
> 	at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
> 	at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:351)
> 	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:178)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:165)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:267)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:679)
> 08 May 2013 19:07:06,933 INFO  [lifecycleSupervisor-1-9] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73)  - Component type: SOURCE, name: avro started
> 08 May 2013 19:07:06,937 INFO  [lifecycleSupervisor-1-9] (org.apache.flume.source.AvroSource.start:181)  - Avro source avro started.
> 08 May 2013 19:11:23,075 WARN  [pool-13-thread-2] (org.apache.flume.source.SyslogUtils.buildEvent:214)  - Event created from Invalid Syslog data.

... at this point we restarted the agent entirely ...

> 08 May 2013 19:12:09,890 INFO  [agent-shutdown-hook] (org.apache.flume.lifecycle.LifecycleSupervisor.stop:79)  - Stopping lifecycle supervisor 11
> 08 May 2013 19:12:09,895 INFO  [agent-shutdown-hook] (org.apache.flume.source.AvroSource.stop:214)  - Avro source avro stopping: Avro source avro: { bindAddress: localhost, port: 4000 }
> 08 May 2013 19:12:09,902 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SOURCE, name: avro stopped
> 08 May 2013 19:12:09,902 INFO  [agent-shutdown-hook] (org.apache.flume.source.AvroSource.stop:236)  - Avro source avro stopped. Metrics: SOURCE:avro{src.events.accepted=0, src.events.received=0, src.append.accepted=0, src.append-batch.accepted=0, src.open-connection.count=0, src.append-batch.received=0, src.append.received=0}
> 08 May 2013 19:12:09,903 INFO  [agent-shutdown-hook] (org.apache.flume.source.SyslogTcpSource.stop:123)  - Syslog TCP Source stopping...
> 08 May 2013 19:12:09,903 INFO  [agent-shutdown-hook] (org.apache.flume.source.SyslogTcpSource.stop:124)  - Metrics:{ name:null counters:{} }
> 08 May 2013 19:12:09,903 INFO  [agent-shutdown-hook] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider.stop:83)  - Configuration provider stopping
> 08 May 2013 19:12:09,903 INFO  [agent-shutdown-hook] (org.apache.flume.source.NetcatSource.stop:190)  - Source stopping
> 08 May 2013 19:12:09,904 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.FileChannel.stop:332)  - Stopping FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }...
> 08 May 2013 19:12:09,905 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.Log.shutdownWorker:722)  - Attempting to shutdown background worker.
> 08 May 2013 19:12:09,905 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$Writer.close:275)  - Closing /mnt/flume/fc1/data/log-4
> 08 May 2013 19:12:09,905 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-2
> 08 May 2013 19:12:09,911 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-3
> 08 May 2013 19:12:09,916 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc1/data/log-4
> 08 May 2013 19:12:09,922 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: CHANNEL, name: fc1 stopped
> 08 May 2013 19:12:09,922 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.FileChannel.stop:332)  - Stopping FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }...
> 08 May 2013 19:12:09,922 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.Log.shutdownWorker:722)  - Attempting to shutdown background worker.
> 08 May 2013 19:12:09,922 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$Writer.close:275)  - Closing /mnt/flume/fc2/data/log-4
> 08 May 2013 19:12:09,923 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-2
> 08 May 2013 19:12:09,928 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-3
> 08 May 2013 19:12:09,933 INFO  [agent-shutdown-hook] (org.apache.flume.channel.file.LogFile$RandomReader.close:356)  - Closing RandomReader /mnt/flume/fc2/data/log-4
> 08 May 2013 19:12:09,939 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: CHANNEL, name: fc2 stopped
> 08 May 2013 19:12:09,939 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: s3 stopped
> 08 May 2013 19:12:09,939 INFO  [agent-shutdown-hook] (org.apache.flume.sink.elasticsearch.ElasticSearchSink.stop:333)  - ElasticSearch sink {} stopping
> 08 May 2013 19:12:09,964 INFO  [agent-shutdown-hook] (org.apache.flume.instrumentation.MonitoredCounterGroup.stop:100)  - Component type: SINK, name: elasticsearch stopped
> 08 May 2013 19:12:09,964 INFO  [agent-shutdown-hook] (org.mortbay.log.Slf4jLog.info:67)  - Stopped SocketConnector@0.0.0.0:41414
> 08 May 2013 19:12:13,630 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start:61)  - Configuration provider starting
> 08 May 2013 19:12:13,646 INFO  [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:133)  - Reloading configuration file:/etc/flume-ng/conf/flume.conf
> 08 May 2013 19:12:13,664 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,669 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,670 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,670 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,670 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,671 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:12:13,671 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:930)  - Added sinks: s3 elasticsearch Agent: agent
> 08 May 2013 19:12:13,672 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:12:13,672 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:12:13,672 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:12:13,677 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,678 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:12:13,687 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,687 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,697 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,697 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,698 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:12:13,698 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,699 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:12:13,699 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,699 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:s3
> 08 May 2013 19:12:13,700 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty:1016)  - Processing:elasticsearch
> 08 May 2013 19:12:13,771 INFO  [conf-file-poller-0] (org.apache.flume.conf.FlumeConfiguration.validateConfiguration:140)  - Post-validation flume configuration contains configuration for agents: [agent]
> 08 May 2013 19:12:13,773 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:150)  - Creating channels
> 08 May 2013 19:12:13,809 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:40)  - Creating instance of channel fc1 type file
> 08 May 2013 19:12:13,826 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205)  - Created channel fc1
> 08 May 2013 19:12:13,826 INFO  [conf-file-poller-0] (org.apache.flume.channel.DefaultChannelFactory.create:40)  - Creating instance of channel fc2 type file
> 08 May 2013 19:12:13,827 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.loadChannels:205)  - Created channel fc2
> 08 May 2013 19:12:13,828 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source netcat, type netcat
> 08 May 2013 19:12:13,905 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source syslog, type syslogtcp
> 08 May 2013 19:12:13,942 INFO  [conf-file-poller-0] (org.apache.flume.source.DefaultSourceFactory.create:39)  - Creating instance of source avro, type avro
> 08 May 2013 19:12:13,960 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:40)  - Creating instance of sink: s3, type: hdfs
> 08 May 2013 19:12:14,738 INFO  [conf-file-poller-0] (org.apache.flume.sink.hdfs.HDFSEventSink.authenticate:528)  - Hadoop Security enabled: false
> 08 May 2013 19:12:14,742 INFO  [conf-file-poller-0] (org.apache.flume.sink.DefaultSinkFactory.create:40)  - Creating instance of sink: elasticsearch, type: org.apache.flume.sink.elasticsearch.ElasticSearchSink
> 08 May 2013 19:12:14,802 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:119)  - Channel fc1 connected to [netcat, syslog, avro, s3]
> 08 May 2013 19:12:14,805 INFO  [conf-file-poller-0] (org.apache.flume.node.AbstractConfigurationProvider.getConfiguration:119)  - Channel fc2 connected to [netcat, syslog, avro, elasticsearch]
> 08 May 2013 19:12:14,828 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:138)  - Starting new configuration:{ sourceRunners:{netcat=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:netcat,state:IDLE} }, syslog=EventDrivenSourceRunner: { source:org.apache.flume.source.SyslogTcpSource{name:syslog,state:IDLE} }, avro=EventDrivenSourceRunner: { source:Avro source avro: { bindAddress: localhost, port: 4000 } }} sinkRunners:{s3=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@7eb5666 counterGroup:{ name:null counters:{} } }, elasticsearch=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6b754699 counterGroup:{ name:null counters:{} } }} channels:{fc1=FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }, fc2=FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }} }
> 08 May 2013 19:12:14,835 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:145)  - Starting Channel fc1
> 08 May 2013 19:12:14,837 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.FileChannel.start:288)  - Starting FileChannel fc1 { dataDirs: [/mnt/flume/fc1/data] }...
> 08 May 2013 19:12:14,853 INFO  [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:145)  - Starting Channel fc2
> 08 May 2013 19:12:14,854 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.FileChannel.start:288)  - Starting FileChannel fc2 { dataDirs: [/mnt/flume/fc2/data] }...
> 08 May 2013 19:12:14,875 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.<init>:279)  - Encryption is not enabled
> 08 May 2013 19:12:14,876 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.<init>:279)  - Encryption is not enabled
> 08 May 2013 19:12:14,877 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:324)  - Replay started
> 08 May 2013 19:12:14,883 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:324)  - Replay started
> 08 May 2013 19:12:14,899 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:336)  - Found NextFileID 4, from [/mnt/flume/fc2/data/log-2, /mnt/flume/fc2/data/log-3, /mnt/flume/fc2/data/log-4]
> 08 May 2013 19:12:14,905 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:336)  - Found NextFileID 4, from [/mnt/flume/fc1/data/log-2, /mnt/flume/fc1/data/log-3, /mnt/flume/fc1/data/log-4]
> 08 May 2013 19:12:14,922 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:47)  - Starting up with /mnt/flume/fc1/checkpoint/checkpoint and /mnt/flume/fc1/checkpoint/checkpoint.meta
> 08 May 2013 19:12:14,922 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:51)  - Reading checkpoint metadata from /mnt/flume/fc1/checkpoint/checkpoint.meta
> 08 May 2013 19:12:14,922 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:47)  - Starting up with /mnt/flume/fc2/checkpoint/checkpoint and /mnt/flume/fc2/checkpoint/checkpoint.meta
> 08 May 2013 19:12:14,925 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.EventQueueBackingStoreFileV3.<init>:51)  - Reading checkpoint metadata from /mnt/flume/fc2/checkpoint/checkpoint.meta
> 08 May 2013 19:12:14,990 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.replay:372)  - Last Checkpoint Wed May 08 19:07:06 UTC 2013, queue depth = 0
> 08 May 2013 19:12:15,001 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.replay:372)  - Last Checkpoint Wed May 08 19:07:06 UTC 2013, queue depth = 0
> 08 May 2013 19:12:15,002 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.Log.doReplay:441)  - Replaying logs with v2 replay logic
> 08 May 2013 19:12:15,007 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:223)  - Starting replay of [/mnt/flume/fc1/data/log-2, /mnt/flume/fc1/data/log-3, /mnt/flume/fc1/data/log-4]
> 08 May 2013 19:12:15,012 INFO  [lifecycleSupervisor-1-0] (org.apache.flume.channel.file.ReplayHandler.replayLog:236)  - Replaying /mnt/flume/fc1/data/log-2
> 08 May 2013 19:12:15,011 INFO  [lifecycleSupervisor-1-1] (org.apache.flume.channel.file.Log.doReplay:441)  - Replaying logs with v2 replay logic
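
For context on what triggers the block above: the reload is driven by a poller that watches the lastModified timestamp of flume.conf and rebuilds the agent's components when it changes, as the PollingPropertiesFileConfigurationProvider name in the first log line suggests. Below is a stripped-down sketch of that polling pattern, purely as an illustration (this is not Flume's actual provider code; the 30-second interval mirrors Flume's default poll):

import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ConfigFilePoller {
  public static void main(String[] args) {
    final File conf = new File("/etc/flume-ng/conf/flume.conf");
    ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    executor.scheduleWithFixedDelay(new Runnable() {
      private long lastSeen = conf.lastModified();

      public void run() {
        long modified = conf.lastModified();
        if (modified > lastSeen) {
          lastSeen = modified;
          // In Flume, this is the point where the provider re-parses flume.conf
          // and the Application stops the old components and starts new ones;
          // that restart is when the JMX counters get re-registered and can hit
          // InstanceAlreadyExistsException.
          System.out.println("Reloading configuration file:" + conf);
        }
      }
    }, 0, 30, TimeUnit.SECONDS);
  }
}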


Here's our Flume agent config:

> agent.sources = avro netcat syslog
> agent.sinks = s3 elasticsearch
> agent.channels = fc1 fc2
> 
> agent.channels.fc1.type = file
> agent.channels.fc1.checkpointDir = /mnt/flume/fc1/checkpoint
> agent.channels.fc1.dataDirs = /mnt/flume/fc1/data
> agent.channels.fc1.capacity = 1000000
> agent.channels.fc1.transactionCapacity = 10000
> 
> agent.channels.fc2.type = file
> agent.channels.fc2.checkpointDir = /mnt/flume/fc2/checkpoint
> agent.channels.fc2.dataDirs = /mnt/flume/fc2/data
> agent.channels.fc2.capacity = 1000000
> agent.channels.fc2.transactionCapacity = 10000
> 
> agent.sources.avro.type = avro
> agent.sources.avro.bind = localhost
> agent.sources.avro.port = 4000
> agent.sources.avro.channels = fc1 fc2
> agent.sources.avro.interceptors = timestamp
> agent.sources.avro.interceptors.timestamp.type = timestamp
> 
> agent.sources.netcat.type = netcat
> agent.sources.netcat.bind = localhost
> agent.sources.netcat.port = 4001
> agent.sources.netcat.channels = fc1 fc2
> agent.sources.netcat.interceptors = timestamp
> agent.sources.netcat.interceptors.timestamp.type = timestamp
> 
> agent.sources.syslog.type = syslogtcp
> agent.sources.syslog.host = localhost
> agent.sources.syslog.port = 4002
> agent.sources.syslog.eventSize = 65536
> agent.sources.syslog.channels = fc1 fc2
> agent.sources.syslog.interceptors = timestamp hostname
> agent.sources.syslog.interceptors.timestamp.type = timestamp
> agent.sources.syslog.interceptors.hostname.type = regex_extractor
> agent.sources.syslog.interceptors.hostname.regex = ^([a-zA-Z]{3})  ([0-9]+) ([0-9]+:[0-9]+:[0-9]+.[0-9]+) ([^ ]+).*
> agent.sources.syslog.interceptors.hostname.serializers = s1 s2 s3 s4
> agent.sources.syslog.interceptors.hostname.serializers.s1.name = raw_month
> agent.sources.syslog.interceptors.hostname.serializers.s2.name = raw_day
> agent.sources.syslog.interceptors.hostname.serializers.s3.name = raw_timestamp
> agent.sources.syslog.interceptors.hostname.serializers.s4.name = host
> 
> agent.sinks.s3.type = hdfs
> agent.sinks.s3.channel = fc1
> agent.sinks.s3.hdfs.path = s3n://XXX:XXX@XXX/flume/events/%y-%m-%d/%H
> agent.sinks.s3.hdfs.rollInterval = 600
> agent.sinks.s3.hdfs.rollSize = 0
> agent.sinks.s3.hdfs.rollCount = 10000
> agent.sinks.s3.hdfs.batchSize = 10000
> agent.sinks.s3.hdfs.writeFormat = Text
> agent.sinks.s3.hdfs.fileType = DataStream
> agent.sinks.s3.hdfs.timeZone = UTC
> agent.sinks.s3.hdfs.filePrefix = FlumeData.flume-agent-useast1-6
> agent.sinks.s3.hdfs.fileSuffix = .avro
> agent.sinks.s3.serializer = avro_event
> 
> agent.sinks.elasticsearch.type = org.apache.flume.sink.elasticsearch.ElasticSearchSink
> agent.sinks.elasticsearch.hostNames = XXX.com:9300,YYY.com:9300,ZZZ.com:9300
> agent.sinks.elasticsearch.indexName = flume
> agent.sinks.elasticsearch.clusterName = flume-elasticsearch-production-useast1
> agent.sinks.elasticsearch.batchSize = 100
> agent.sinks.elasticsearch.ttl = 30
> agent.sinks.elasticsearch.serializer = org.apache.flume.sink.elasticsearch.ElasticSearchLogStashEventSerializer
> agent.sinks.elasticsearch.channel = fc2
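
Assuming the avro source above stays bound to localhost:4000 across the reload, one quick way to check whether the downstream sinks actually recovered is to push a test event through the agent and look for it in ElasticSearch / S3. A minimal sketch using the flume-ng-sdk RpcClient (the class name and event body are made up for illustration; it only needs flume-ng-sdk on the classpath):

import java.nio.charset.Charset;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class ReloadSmokeTest {
  public static void main(String[] args) throws EventDeliveryException {
    // Connects to the avro source configured above (agent.sources.avro).
    RpcClient client = RpcClientFactory.getDefaultInstance("localhost", 4000);
    try {
      Event event = EventBuilder.withBody("reload smoke test", Charset.forName("UTF-8"));
      // Throws EventDeliveryException if the agent cannot accept the event.
      client.append(event);
    } finally {
      client.close();
    }
  }
}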

On May 8, 2013, at 12:42 PM, Hari Shreedharan <hs...@cloudera.com> wrote:

> Hi Matt,
> 
> This is quite fine. When a reload happens, Flume tries to re-register the components with JMX so that it can update the metrics, but since an instance of the same type already existed, this exception shows up. I don't think this causes an issue - though you should confirm that you can see the metrics fine. Even with these errors in the logs, the components should work fine; in the logs you can see that the sink has started. These exceptions do not cause any data loss or prevent components from functioning.
> 
> 
> Hari
> 
> -- 
> Hari Shreedharan
> 
> On Wednesday, May 8, 2013 at 12:17 PM, Matt Wise wrote:
> 
>> We're seeing problems when we try to live-reload our Flume agents rather than restart them. They seem to maintain their incoming Syslog connections from the clients, but they stop sending out data to ElasticSearch (and probably the HDFS plugin as well). I see these errors during the reload, and I'm wondering if they're related. The process to reproduce for us is to make any change to the flume.conf file, and wait until Flume detects the file change. When this happens, everything basically breaks.
>> 
>> [...]
> 


Re: Problem with 'reload' vs 'restart' of Flume?

Posted by Hari Shreedharan <hs...@cloudera.com>.
Hi Matt, 

This is quite fine. When a reload happens, Flume tries to re-register the components with JMX so that it can update the metrics, but since an instance of the same type already existed, this exception shows up. I don't think this causes an issue - though you should confirm that you can see the metrics fine. Even with these errors in the logs, the components should work fine; in the logs you can see that the sink has started. These exceptions do not cause any data loss or prevent components from functioning.
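
To make that concrete: the per-component counters are exposed as JMX MBeans, and the platform MBeanServer refuses a second registerMBean() under an ObjectName that is already taken. A self-contained sketch of that collision using only the standard javax.management API (the ObjectName string is lifted from the stack trace; the Counter/CounterMBean classes are made up for illustration):

import java.lang.management.ManagementFactory;

import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class JmxReRegistration {

  public interface CounterMBean { long getEventCount(); }
  public static class Counter implements CounterMBean {
    public long getEventCount() { return 0L; }
  }

  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    // Same ObjectName the FileChannel counter uses in the log above.
    ObjectName name = new ObjectName("org.apache.flume.channel:type=fc1");

    // First start: registration succeeds.
    server.registerMBean(new StandardMBean(new Counter(), CounterMBean.class), name);

    // Reload: the old MBean was never unregistered, so registering a fresh
    // counter under the same name fails exactly like the ERROR in the log.
    try {
      server.registerMBean(new StandardMBean(new Counter(), CounterMBean.class), name);
    } catch (InstanceAlreadyExistsException expected) {
      System.out.println("Already registered: " + name);
    }
  }
}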


Hari 

-- 
Hari Shreedharan


On Wednesday, May 8, 2013 at 12:17 PM, Matt Wise wrote:

> We're seeing problems when we try to live-reload our Flume agents rather than restart them. They seem to maintain their incoming Syslog connections from the clients, but they stop sending out data to ElasticSearch (and probably the HDFS plugin as well). I see these errors during the reload, and I'm wondering if they're related. The process to reproduce for us is to make any change to the flume.conf file, and wait until Flume detects the file change. When this happens, everything basically breaks.
> 
> [...]
> 
>