Posted to dev@flume.apache.org by "jack li (JIRA)" <ji...@apache.org> on 2012/07/18 15:06:33 UTC

[jira] [Created] (FLUME-1379) source type is avro and sink type is hdfs, but the sink does not work

jack li created FLUME-1379:
------------------------------

             Summary: source type is avro and sink type is hdfs, but the sink does not work
                 Key: FLUME-1379
                 URL: https://issues.apache.org/jira/browse/FLUME-1379
             Project: Flume
          Issue Type: Bug
          Components: Sinks+Sources
    Affects Versions: v1.1.0, v1.3.0
         Environment: RHEL 5.7, JDK 1.6; flume-ng 1.1.0+120-1.cdh4.0.0 installed via yum; 1.3 built via mvn install.
            Reporter: jack li
             Fix For: v1.3.0


Hello everyone!
1. When the source type is exec and the sink type is hdfs, events are written to HDFS. But when I change the source type to avro (leaving the sink type unchanged), no events are written to HDFS.
2. flume.conf
#List sources, sinks and channels in the agent
weblog-agent.sources = tail
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#webblog-agent sources config
weblog-agent.sources.tail.type = avro
weblog-agent.sources.tail.port = 41414
weblog-agent.sources.tail.bind = 0.0.0.0
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-01:8020/flume/web
weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 120
#channels config
weblog-agent.channels.jdbc-channel01.type = memory
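
Worth noting: in the configuration above, the source tail is never bound to a channel. The sink names its channel, but Flume also requires each source to list its channels via <agent>.sources.<name>.channels (the updated configuration later in this thread adds such a line). For this first config it would be:

```properties
# Bind the avro source to the channel; without this the source has
# nowhere to put incoming events.
weblog-agent.sources.tail.channels = jdbc-channel01
```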

3. flume.log

2012-07-18 21:03:18,731 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
2012-07-18 21:03:18,734 INFO node.FlumeNode: Flume node starting - weblog-agent
2012-07-18 21:03:18,738 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2012-07-18 21:03:18,738 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
2012-07-18 21:03:18,738 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2012-07-18 21:03:18,740 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,784 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
2012-07-18 21:03:18,785 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2012-07-18 21:03:18,857 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
2012-07-18 21:03:18,857 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
2012-07-18 21:03:18,872 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: tail, registered successfully.
2012-07-18 21:03:18,886 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
2012-07-18 21:03:19,142 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
2012-07-18 21:03:19,144 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
2012-07-18 21:03:19,146 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: { source:Avro source tail: { bindAddress: localhost, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@57a7ddcf counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@4dd36dfe} }
2012-07-18 21:03:19,146 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
2012-07-18 21:03:19,147 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
2012-07-18 21:03:19,147 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
2012-07-18 21:03:19,648 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
2012-07-18 21:03:19,649 INFO nodemanager.DefaultLogicalNodeManager: Starting Source tail
2012-07-18 21:03:19,650 INFO source.AvroSource: Starting Avro source tail: { bindAddress: localhost, port: 41414 }...
2012-07-18 21:03:19,656 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
2012-07-18 21:03:20,096 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: tail started
2012-07-18 21:03:20,096 INFO source.AvroSource: Avro source tail started.

 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Updated] (FLUME-1379) avro source, hdfs sink, no events written to hdfs

Posted by "jack li (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/FLUME-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

jack li updated FLUME-1379:
---------------------------

    Fix Version/s:     (was: v1.3.0)
      Description: 
Hello everyone!
1. When the source type is exec and the sink type is hdfs, events are written to HDFS. But when I change the source type to avro, no events are written to HDFS.
2. flume.conf
#List sources, sinks and channels in the agent
weblog-agent.sources = s1
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#define the flow
#webblog-agent sources config
weblog-agent.sources.s1.channels = jdbc-channel01
weblog-agent.sources.s1.type = avro
weblog-agent.sources.s1.port = 41414
weblog-agent.sources.s1.bind = 0.0.0.0
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
#channels config
weblog-agent.channels.jdbc-channel01.type = memory
weblog-agent.channels.jdbc-channel01.capacity = 200000
weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
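
A quick sanity check for a properties file like the one above is to confirm that every declared source and sink is wired to a declared channel. This is a minimal sketch, not part of Flume itself; it only handles the flat key = value lines shown here:

```python
def check_wiring(conf_text):
    """Parse simple Flume-style 'key = value' lines and report any
    sources/sinks not bound to a declared channel."""
    props = {}
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()

    problems = []
    for key, value in props.items():
        parts = key.split(".")
        if len(parts) != 2:
            continue  # only top-level '<agent>.sources' / '<agent>.sinks' lists
        agent, kind = parts
        if kind not in ("sources", "sinks"):
            continue
        declared = set(props.get(f"{agent}.channels", "").split())
        for name in value.split():
            if kind == "sources":
                bound = props.get(f"{agent}.sources.{name}.channels", "")
            else:
                bound = props.get(f"{agent}.sinks.{name}.channel", "")
            if not bound:
                problems.append(f"{kind[:-1]} {name} has no channel")
            elif not set(bound.split()) <= declared:
                problems.append(f"{kind[:-1]} {name} uses undeclared channel {bound}")
    return problems
```

Applied to the first configuration in this thread it flags the unbound source; applied to the configuration directly above it reports nothing.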

3. flume.log

2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1

2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes


       Issue Type: Question  (was: Bug)
          Summary: avro source, hdfs sink, no events written to hdfs  (was: source type is avro and sink type is hdfs, but the sink does not work)
    
>  avro source, hdfs sink, no events written to hdfs
> ---------------------------------------------------
>
>                 Key: FLUME-1379
>                 URL: https://issues.apache.org/jira/browse/FLUME-1379
>             Project: Flume
>          Issue Type: Question
>          Components: Sinks+Sources
>    Affects Versions: v1.1.0, v1.3.0
>         Environment: RHEL 5.7, JDK 1.6; flume-ng 1.1.0+120-1.cdh4.0.0 installed via yum; 1.3 built via mvn install.
>            Reporter: jack li
>              Labels: flume
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> 2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
> 2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
> 2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
> 2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
> 2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
> 2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
> 2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
> 2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
> 2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Updated] (FLUME-1379) source's type is avro and sink's type is hdfs but the sink doesn't work

Posted by "jack li (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/FLUME-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

jack li updated FLUME-1379:
---------------------------

    Description: 
Hello everyone!
1. With the source type set to exec and the sink type set to hdfs, events are written to HDFS. But when I change the source type to avro, leaving the sink type unchanged, no events are written to HDFS.
2. flume.conf
#List sources, sinks and channels in the agent
weblog-agent.sources = s1
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#define the flow
#webblog-agent sources config
weblog-agent.sources.s1.channels = jdbc-channel01
weblog-agent.sources.s1.type = avro
weblog-agent.sources.s1.port = 41414
weblog-agent.sources.s1.bind = 0.0.0.0
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
#weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 43200
##### 3 hours#####
weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
#channels config
weblog-agent.channels.jdbc-channel01.type = memory
weblog-agent.channels.jdbc-channel01.capacity = 200000
weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
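
One thing worth checking in the configuration above: rollSize and rollCount are both 0 (disabling size- and count-based rolls), rollInterval is commented out, and batchSize is 30000, so the HDFS sink will not flush or roll a file until 30000 events have arrived. For a low-volume test this can look exactly like "no events written". A sketch of settings that make writes visible quickly (values are illustrative, not a confirmed fix):

```
# Illustrative values only: flush frequently and roll on a short timer
weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 30
weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 100
weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 100
```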

3. flume.log

2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1

2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
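
The log above shows the avro source bound on 0.0.0.0:41414 and then only config-poll messages, with no sign of incoming events. One way to verify the source is receiving anything at all is to push a file through the avro client that ships with flume-ng (host, port, and file path here are assumptions for illustration; the access.log path is taken from the commented-out tail command in the earlier config):

```
flume-ng avro-client -H localhost -p 41414 -F /opt/apps/nginx/logs/access.log
```

If the source-side counters still show zero events after this, the problem is upstream of the HDFS sink.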

  was:
Hello everyone!
1. With the source type set to exec and the sink type set to hdfs, events are written to HDFS. But when I change the source type to avro, leaving the sink type unchanged, no events are written to HDFS.
2. flume.conf
#List sources, sinks and channels in the agent
weblog-agent.sources = s1
#weblog-agent.sources.s1.interceptors = ts host
#weblog-agent.sources.s1.interceptors.ts.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
#weblog-agent.sources.s1.interceptors.host.type = org.apache.flume.interceptor.HostInterceptor$Builder
#weblog-agent.sources.s1.interceptors.host.useIP = false
#weblog-agent.sources.s1.interceptors.host.preserveExisting = true
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#define the flow
#webblog-agent sources config
weblog-agent.sources.s1.channels = jdbc-channel01
weblog-agent.sources.s1.type = avro
weblog-agent.sources.s1.port = 41414
weblog-agent.sources.s1.bind = 0.0.0.0
#weblog-agent.sources.s1.restart = true
#weblog-agent.sources.s1.restartThrottle = 1000
#weblog-agent.sources.s1.command = tail -f /opt/apps/nginx/logs/access.log
#weblog-agent.sources.s1.selector.type = replicating
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
#weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest/%Y/%m/%d
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
#weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 43200
##### 3 hours#####
weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
#channels config
weblog-agent.channels.jdbc-channel01.type = memory
weblog-agent.channels.jdbc-channel01.capacity = 200000
weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000

3. flume.log

2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1

2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes

    
> source's type is avro and sink's type is hdfs but the sink doesn't work
> -----------------------------------------------------------------------
>
>                 Key: FLUME-1379
>                 URL: https://issues.apache.org/jira/browse/FLUME-1379
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v1.1.0, v1.3.0
>         Environment: RHEL 5.7, JDK 1.6; flume-ng 1.1.0+120-1.cdh4.0.0 installed via yum, 1.3 built via mvn install
>            Reporter: jack li
>              Labels: flume
>             Fix For: v1.3.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> Hello everyone!
> 1. With the source type set to exec and the sink type set to hdfs, events are written to HDFS. But when I change the source type to avro, leaving the sink type unchanged, no events are written to HDFS.
> 2. flume.conf
> #List sources, sinks and channels in the agent
> weblog-agent.sources = s1
> weblog-agent.sinks = avro-forward-sink01
> weblog-agent.channels = jdbc-channel01
> #define the flow
> #webblog-agent sources config
> weblog-agent.sources.s1.channels = jdbc-channel01
> weblog-agent.sources.s1.type = avro
> weblog-agent.sources.s1.port = 41414
> weblog-agent.sources.s1.bind = 0.0.0.0
> #avro sink properties
> weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
> weblog-agent.sinks.avro-forward-sink01.type = hdfs
> weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
> #weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 43200
> ##### 3 hours#####
> weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
> weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
> weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
> #channels config
> weblog-agent.channels.jdbc-channel01.type = memory
> weblog-agent.channels.jdbc-channel01.capacity = 200000
> weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
> 3. flume.log
> 2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
> 2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
> 2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
> 2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
> 2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
> 2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
> 2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
> 2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
> AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1
> 2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
> 2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
> 2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
> 2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
> 2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
> 2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
> 2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
> 2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
> 2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
> 2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
> 2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
> 2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
> 2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
> 2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
> 2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
> 2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
> 2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
> 2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
> 2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
> 2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
> 2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
> 2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
> 2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
> 2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Updated] (FLUME-1379) source's type is avro and sink's type is hdfs but sink don't to work

Posted by "jack li (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/FLUME-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

jack li updated FLUME-1379:
---------------------------

    Description: 
Hello everyone!
1. When the source's type is exec and the sink's type is hdfs, events are written to HDFS. But when I change the source's type to avro, without changing the sink's type, no events are written to HDFS.
2. flume.conf:
#List sources, sinks and channels in the agent
weblog-agent.sources = tail
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#weblog-agent sources config
weblog-agent.sources.tail.type = avro
weblog-agent.sources.tail.port = 41414
weblog-agent.sources.tail.bind = 0.0.0.0
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-01:8020/flume/web
weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 120
#channels config
weblog-agent.channels.jdbc-channel01.type = memory
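One thing worth checking in the config above: the avro source is never bound to a channel (there is no `weblog-agent.sources.tail.channels` line), so even if the source starts, its events have nowhere to go. Note also that, unlike an exec source, an avro source only produces events when an external client actually sends them to the configured port. A minimal sketch of the missing wiring, assuming the source keeps the name `tail`:

```
# Sketch of a possible fix: bind the avro source to the channel.
# Without a .channels property the source cannot deliver events
# into jdbc-channel01, so the hdfs sink never sees anything.
weblog-agent.sources.tail.channels = jdbc-channel01
```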

3. flume.log:

2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1

2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes

  was:
Hello everyone!
1. When the source's type is exec and the sink's type is hdfs, events are written to HDFS. But when I change the source's type to avro, without changing the sink's type, no events are written to HDFS.
2. flume.conf:
#List sources, sinks and channels in the agent
weblog-agent.sources = tail
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#weblog-agent sources config
weblog-agent.sources.tail.type = avro
weblog-agent.sources.tail.port = 41414
weblog-agent.sources.tail.bind = 0.0.0.0
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-01:8020/flume/web
weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 120
#channels config
weblog-agent.channels.jdbc-channel01.type = memory

3. flume.log:

2012-07-18 21:03:18,731 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
2012-07-18 21:03:18,734 INFO node.FlumeNode: Flume node starting - weblog-agent
2012-07-18 21:03:18,738 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2012-07-18 21:03:18,738 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
2012-07-18 21:03:18,738 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2012-07-18 21:03:18,740 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,748 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 21:03:18,784 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
2012-07-18 21:03:18,785 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2012-07-18 21:03:18,857 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
2012-07-18 21:03:18,857 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
2012-07-18 21:03:18,872 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: tail, registered successfully.
2012-07-18 21:03:18,886 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
2012-07-18 21:03:19,142 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
2012-07-18 21:03:19,144 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
2012-07-18 21:03:19,146 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: { source:Avro source tail: { bindAddress: localhost, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@57a7ddcf counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@4dd36dfe} }
2012-07-18 21:03:19,146 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
2012-07-18 21:03:19,147 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
2012-07-18 21:03:19,147 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
2012-07-18 21:03:19,648 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
2012-07-18 21:03:19,649 INFO nodemanager.DefaultLogicalNodeManager: Starting Source tail
2012-07-18 21:03:19,650 INFO source.AvroSource: Starting Avro source tail: { bindAddress: localhost, port: 41414 }...
2012-07-18 21:03:19,656 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
2012-07-18 21:03:20,096 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: tail started
2012-07-18 21:03:20,096 INFO source.AvroSource: Avro source tail started.

 

    
> source's type  is avro and sink's type is hdfs but sink don't to work
> ---------------------------------------------------------------------
>
>                 Key: FLUME-1379
>                 URL: https://issues.apache.org/jira/browse/FLUME-1379
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v1.1.0, v1.3.0
>         Environment: RHEL 5.7, JDK 1.6, yum-installed flume-ng 1.1.0+120-1.cdh4.0.0, and 1.3 built from source with mvn install.
>            Reporter: jack li
>              Labels: flume
>             Fix For: v1.3.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> Hello everyone!
> 1. When the source's type is exec and the sink's type is hdfs, events are written to HDFS. But when I change the source's type to avro, without changing the sink's type, no events are written to HDFS.
> 2. flume.conf:
> #List sources, sinks and channels in the agent
> weblog-agent.sources = tail
> weblog-agent.sinks = avro-forward-sink01
> weblog-agent.channels = jdbc-channel01
> #weblog-agent sources config
> weblog-agent.sources.tail.type = avro
> weblog-agent.sources.tail.port = 41414
> weblog-agent.sources.tail.bind = 0.0.0.0
> #avro sink properties
> weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
> weblog-agent.sinks.avro-forward-sink01.type = hdfs
> weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-01:8020/flume/web
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 120
> #channels config
> weblog-agent.channels.jdbc-channel01.type = memory
> 3. flume.log:
> 2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
> 2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
> 2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
> 2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
> 2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
> 2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
> 2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
> 2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
> AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1
> 2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
> 2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
> 2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
> 2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
> 2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
> 2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
> 2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
> 2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
> 2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
> 2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
> 2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
> 2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
> 2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
> 2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
> 2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
> 2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
> 2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
> 2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
> 2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
> 2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
> 2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
> 2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
> 2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
> 2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes


        

[jira] [Updated] (FLUME-1379) source's type is avro and sink's type is hdfs but sink don't to work

Posted by "jack li (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/FLUME-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

jack li updated FLUME-1379:
---------------------------

    Description: 
Hello everyone!
1. When the source's type is exec and the sink's type is hdfs, events are written to HDFS. But when I change the source's type to avro, without changing the sink's type, no events are written to HDFS.
2. flume.conf:
#List sources, sinks and channels in the agent
weblog-agent.sources = s1
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#define the flow
#weblog-agent sources config
weblog-agent.sources.s1.channels = jdbc-channel01
weblog-agent.sources.s1.type = avro
weblog-agent.sources.s1.port = 41414
weblog-agent.sources.s1.bind = 0.0.0.0
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
#channels config
weblog-agent.channels.jdbc-channel01.type = memory
weblog-agent.channels.jdbc-channel01.capacity = 200000
weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
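One hedged observation on the sizes in this config: `hdfs.batchSize` (30000) is larger than the channel's `transactionCapacity` (20000), and `hdfs.txnEventMax` equals it. If the sink attempts to take more events in one transaction than the memory channel's transaction capacity allows, delivery can fail or stall even though every component starts cleanly. A sketch of a more conservative sizing, assuming the other settings stay as above:

```
# Sketch only: keep the sink's per-transaction batch at or below the
# channel's transactionCapacity (20000 here), with headroom to spare.
weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 10000
weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 10000
```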

3. flume.log:

2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1

2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes

  was:
hello everyone!
1, when the source's type is exec and the sink's type is hdfs, events are written to HDFS. But when I change the source's type to avro, without changing the sink's type, no events are written to HDFS.
2, flume.conf 
#List sources, sinks and channels in the agent
weblog-agent.sources = s1
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#define the flow
#webblog-agent sources config
weblog-agent.sources.s1.channels = jdbc-channel01
weblog-agent.sources.s1.type = avro
weblog-agent.sources.s1.port = 41414
weblog-agent.sources.s1.bind = 0.0.0.0
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
#weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 43200
##### 43200 s = 12 hours #####
weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
#channels config
weblog-agent.channels.jdbc-channel01.type = memory
weblog-agent.channels.jdbc-channel01.capacity = 200000
weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
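
With an exec source the agent generates events itself, but an avro source only receives what an external client sends to port 41414 — if nothing is sending, the sink has nothing to write. A quick way to exercise the source-to-sink path is the avro-client bundled with flume-ng (the host name and file path below are placeholders to replace with your own):

```shell
# Send each line of a local file as an event to the avro source on port 41414.
# Replace localhost with the agent host and /tmp/test.log with a real file.
flume-ng avro-client -H localhost -p 41414 -F /tmp/test.log
```

If events still do not appear under the hdfs.path after this, the problem is on the sink side; it may also be worth checking that hdfs.batchSize (30000 here) does not exceed the channel's transactionCapacity (20000).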

3,flume.log

2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1

2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes

    
> source's type is avro and sink's type is hdfs but the sink doesn't work
> -----------------------------------------------------------------------
>
>                 Key: FLUME-1379
>                 URL: https://issues.apache.org/jira/browse/FLUME-1379
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v1.1.0, v1.3.0
>         Environment: rhel5.7, jdk1.6, yum-installed flume-ng 1.1.0+120-1.cdh4.0.0, 1.3 built from source with mvn.
>            Reporter: jack li
>              Labels: flume
>             Fix For: v1.3.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> hello everyone!
> 1, when the source's type is exec and the sink's type is hdfs, events are written to HDFS. But when I change the source's type to avro, without changing the sink's type, no events are written to HDFS.
> 2, flume.conf 
> #List sources, sinks and channels in the agent
> weblog-agent.sources = s1
> weblog-agent.sinks = avro-forward-sink01
> weblog-agent.channels = jdbc-channel01
> #define the flow
> #webblog-agent sources config
> weblog-agent.sources.s1.channels = jdbc-channel01
> weblog-agent.sources.s1.type = avro
> weblog-agent.sources.s1.port = 41414
> weblog-agent.sources.s1.bind = 0.0.0.0
> #avro sink properties
> weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
> weblog-agent.sinks.avro-forward-sink01.type = hdfs
> weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
> weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
> weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
> weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
> #channels config
> weblog-agent.channels.jdbc-channel01.type = memory
> weblog-agent.channels.jdbc-channel01.capacity = 200000
> weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
> 3,flume.log
> 2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
> 2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
> 2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
> 2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
> 2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
> 2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
> 2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
> 2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
> AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1
> 2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
> 2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
> 2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
> 2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
> 2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
> 2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
> 2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
> 2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
> 2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
> 2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
> 2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
> 2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
> 2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
> 2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
> 2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
> 2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
> 2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
> 2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
> 2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
> 2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
> 2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
> 2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
> 2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
> 2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (FLUME-1379) avro source, hdfs sink, no events written to hdfs

Posted by "Denny Ye (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/FLUME-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13418085#comment-13418085 ] 

Denny Ye commented on FLUME-1379:
---------------------------------

In my opinion, you should monitor the thread status with 'jstack' to confirm that all the threads have valid activity.
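
A sketch of that check, assuming the agent was started through the standard flume-ng launcher (the pgrep pattern and thread names below are assumptions; adjust them to your setup):

```shell
# Find the Flume agent JVM and dump its thread stacks with the JDK's jstack.
# The SinkRunner polling thread and the Avro source handler threads should be
# RUNNABLE or TIMED_WAITING, not permanently BLOCKED.
FLUME_PID=$(pgrep -f org.apache.flume.node.Application | head -n 1)
jstack "$FLUME_PID" | grep -A 5 "SinkRunner"
```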
                
> avro source, hdfs sink, no events written to hdfs
> --------------------------------------------------
>
>                 Key: FLUME-1379
>                 URL: https://issues.apache.org/jira/browse/FLUME-1379
>             Project: Flume
>          Issue Type: Question
>          Components: Sinks+Sources
>    Affects Versions: v1.1.0, v1.3.0
>         Environment: rhel5.7, jdk1.6, yum-installed flume-ng 1.1.0+120-1.cdh4.0.0, 1.3 built from source with mvn.
>            Reporter: jack li
>              Labels: flume
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> hello everyone!
> 1, when the source's type is exec and the sink's type is hdfs, events are written to HDFS. But when I change the source's type to avro, no events are written to HDFS.
> 2, flume.conf 
> #List sources, sinks and channels in the agent
> weblog-agent.sources = s1
> weblog-agent.sinks = avro-forward-sink01
> weblog-agent.channels = jdbc-channel01
> #define the flow
> #webblog-agent sources config
> weblog-agent.sources.s1.channels = jdbc-channel01
> weblog-agent.sources.s1.type = avro
> weblog-agent.sources.s1.port = 41414
> weblog-agent.sources.s1.bind = 0.0.0.0
> #avro sink properties
> weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
> weblog-agent.sinks.avro-forward-sink01.type = hdfs
> weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
> weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
> weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
> weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
> #channels config
> weblog-agent.channels.jdbc-channel01.type = memory
> weblog-agent.channels.jdbc-channel01.capacity = 200000
> weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
> 3,flume.log
> 2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
> 2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
> 2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
> 2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
> 2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
> 2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
> 2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
> 2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
> AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1
> 2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
> 2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
> 2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
> 2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
> 2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
> 2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
> 2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
> 2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
> 2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
> 2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
> 2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
> 2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
> 2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
> 2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
> 2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
> 2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
> 2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
> 2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
> 2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
> 2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
> 2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
> 2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
> 2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
> 2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Updated] (FLUME-1379) source's type is avro and sink's type is hdfs but sink don't to work

Posted by "jack li (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/FLUME-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

jack li updated FLUME-1379:
---------------------------

    Description: 
Hello everyone!
1. With the source type set to exec and the sink type set to hdfs, events are written to HDFS. But when I change the source type to avro, leaving the sink type unchanged, no events are written to HDFS.
2. flume.conf
#List sources, sinks and channels in the agent
weblog-agent.sources = s1
#weblog-agent.sources.s1.interceptors = ts host
#weblog-agent.sources.s1.interceptors.ts.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
#weblog-agent.sources.s1.interceptors.host.type = org.apache.flume.interceptor.HostInterceptor$Builder
#weblog-agent.sources.s1.interceptors.host.useIP = false
#weblog-agent.sources.s1.interceptors.host.preserveExisting = true
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#define the flow
#webblog-agent sources config
weblog-agent.sources.s1.channels = jdbc-channel01
weblog-agent.sources.s1.type = avro
weblog-agent.sources.s1.port = 41414
weblog-agent.sources.s1.bind = 0.0.0.0
#weblog-agent.sources.s1.restart = true
#weblog-agent.sources.s1.restartThrottle = 1000
#weblog-agent.sources.s1.command = tail -f /opt/apps/nginx/logs/access.log
#weblog-agent.sources.s1.selector.type = replicating
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
#weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest/%Y/%m/%d
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
#weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
#weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 43200
##### 3 hours#####
weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
#channels config
weblog-agent.channels.jdbc-channel01.type = memory
weblog-agent.channels.jdbc-channel01.capacity = 200000
weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
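One thing worth noting about the configuration above: unlike an exec source, an avro source does not generate events on its own. It is an Avro RPC server, so nothing reaches the channel (or HDFS) until some client actually sends events to port 41414. A quick first check is whether the source is even reachable on that port. The sketch below is illustrative only and not part of the original report; the host and port are assumptions taken from the config above.

```python
import socket


def avro_source_is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts TCP connections at host:port.

    The Flume avro source listens on a plain TCP port (41414 in the
    config above), so a successful connect confirms the source started
    and is reachable. It does not prove that events are flowing.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `avro_source_is_listening("localhost", 41414)` run on the agent host should return True once the agent has logged "Avro source s1 started."; if it returns False, the bind/port settings are the first thing to re-check.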

3. flume.log

2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01

2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1

2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes

  was:
Hello everyone!
1. With the source type set to exec and the sink type set to hdfs, events are written to HDFS. But when I change the source type to avro, leaving the sink type unchanged, no events are written to HDFS.
2. flume.conf
#List sources, sinks and channels in the agent
weblog-agent.sources = tail
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
#webblog-agent sources config
weblog-agent.sources.tail.type = avro
weblog-agent.sources.tail.port = 41414
weblog-agent.sources.tail.bind = 0.0.0.0
#avro sink properties
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
weblog-agent.sinks.avro-forward-sink01.type = hdfs
weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-01:8020/flume/web
weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 120
#channels config
weblog-agent.channels.jdbc-channel01.type = memory

3. flume.log (identical to the log quoted above)

    
> source's type  is avro and sink's type is hdfs but sink don't to work
> ---------------------------------------------------------------------
>
>                 Key: FLUME-1379
>                 URL: https://issues.apache.org/jira/browse/FLUME-1379
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v1.1.0, v1.3.0
>         Environment: rhel5.7 ,jdk1.6,yum installed flume-ng 1.1.0+120-1.cdh4.0.0  ,mvn install 1.3.
>            Reporter: jack li
>              Labels: flume
>             Fix For: v1.3.0
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> hello everyone!
> 1, source's type is exec  and sink's type is hdfs .the event can write to the hdfs.But when I change the source's type is avro ,and no change sink's type,no event write to hdfs.
> 2, flume.conf 
> #List sources, sinks and channels in the agent
> weblog-agent.sources = s1
> #weblog-agent.sources.s1.interceptors = ts host
> #weblog-agent.sources.s1.interceptors.ts.type = org.apache.flume.interceptor.TimestampInterceptor$Builder
> #weblog-agent.sources.s1.interceptors.host.type = org.apache.flume.interceptor.HostInterceptor$Builder
> #weblog-agent.sources.s1.interceptors.host.useIP = false
> #weblog-agent.sources.s1.interceptors.host.preserveExisting = true
> weblog-agent.sinks = avro-forward-sink01
> weblog-agent.channels = jdbc-channel01
> #define the flow
> #webblog-agent sources config
> weblog-agent.sources.s1.channels = jdbc-channel01
> weblog-agent.sources.s1.type = avro
> weblog-agent.sources.s1.port = 41414
> weblog-agent.sources.s1.bind = 0.0.0.0
> #weblog-agent.sources.s1.restart = true
> #weblog-agent.sources.s1.restartThrottle = 1000
> #weblog-agent.sources.s1.command = tail -f /opt/apps/nginx/logs/access.log
> #weblog-agent.sources.s1.selector.type = replicating
> #avro sink properties
> weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
> weblog-agent.sinks.avro-forward-sink01.type = hdfs
> #weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest/%Y/%m/%d
> weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
> #weblog-agent.sinks.avro-forward-sink01.hdfs.rollInterval = 43200
> ##### 3 hours#####
> weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
> weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
> weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
> #channels config
> weblog-agent.channels.jdbc-channel01.type = memory
> weblog-agent.channels.jdbc-channel01.capacity = 200000
> weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
> 3,flume.log
> 2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
> 2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
> 2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
> 2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
> 2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
> 2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
> 2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
> 2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
> AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1
> 2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
> 2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
> 2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
> 2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
> 2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
> 2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
> 2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
> 2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
> 2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
> 2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
> 2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
> 2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
> 2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
> 2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
> 2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
> 2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
> 2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
> 2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
> 2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
> 2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
> 2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
> 2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
> 2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
> 2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
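The log above shows the avro source started cleanly but never records any received events, so the sink has nothing to deliver. A quick way to confirm the source is reachable is Flume's bundled avro client, which reads a local file and sends its lines as events. This is a sketch, not part of the thread: the host and the sample file path `/tmp/test.log` are assumptions, and the port matches the config above.

```shell
# Send each line of a local file as a Flume event to the avro source.
# Assumes flume-ng is on PATH and /tmp/test.log is a hypothetical sample file.
flume-ng avro-client --host localhost --port 41414 --filename /tmp/test.log
```

If events arrive but the HDFS files stay open and empty, note that with hdfs.rollSize and hdfs.rollCount both set to 0, only hdfs.rollInterval (default 30 seconds when unset) will close a file.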

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Resolved] (FLUME-1379) avro source, hdfs sink, no events written to hdfs

Posted by "jack li (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/FLUME-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

jack li resolved FLUME-1379.
----------------------------

    Resolution: Fixed
    
>  avro source, hdfs sink, no events written to hdfs
> -------------------------------------------------
>
>                 Key: FLUME-1379
>                 URL: https://issues.apache.org/jira/browse/FLUME-1379
>             Project: Flume
>          Issue Type: Question
>          Components: Sinks+Sources
>    Affects Versions: v1.1.0, v1.3.0
>         Environment: rhel5.7, jdk1.6, yum-installed flume-ng 1.1.0+120-1.cdh4.0.0, mvn-installed 1.3
>            Reporter: jack li
>              Labels: flume
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> hello everyone!
> 1. With an exec source and an hdfs sink, events are written to HDFS. But when I change the source's type to avro, no events are written to HDFS.
> 2. flume.conf
> #List sources, sinks and channels in the agent
> weblog-agent.sources = s1
> weblog-agent.sinks = avro-forward-sink01
> weblog-agent.channels = jdbc-channel01
> #define the flow
> #webblog-agent sources config
> weblog-agent.sources.s1.channels = jdbc-channel01
> weblog-agent.sources.s1.type = avro
> weblog-agent.sources.s1.port = 41414
> weblog-agent.sources.s1.bind = 0.0.0.0
> #avro sink properties
> weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
> weblog-agent.sinks.avro-forward-sink01.type = hdfs
> weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = flume@HADOOP.COM
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
> weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
> weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
> weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
> #channels config
> weblog-agent.channels.jdbc-channel01.type = memory
> weblog-agent.channels.jdbc-channel01.capacity = 200000
> weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
> 3. flume.log
> 2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 1
> 2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting - weblog-agent
> 2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node manager starting
> 2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting lifecycle supervisor 10
> 2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider: Configuration provider starting
> 2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node manager started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Configuration provider started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider: Reloading configuration file:flume.conf
> 2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for avro-forward-sink01: hdfs.fileType
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks: avro-forward-sink01 Agent: weblog-agent
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of configuration for agent: weblog-agent, initial-configuration: AgentConfiguration[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel jdbc-channel01
> 2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink: avro-forward-sink01 using HDFS
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation configuration for weblog-agent
> AgentConfiguration created without Configuration stubs for which only basic syntactical validation was performed[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro, bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000, capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream, hdfs.kerberosPrincipal=flume@HADOOP.COM, hdfs.txnEventMax=20000, hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab, hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest, hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs, channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks avro-forward-sink01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1
> 2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration  for agents: [weblog-agent]
> 2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider: Creating channels
> 2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating instance of channel jdbc-channel01 type memory
> 2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered successfully.
> 2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider: created channel jdbc-channel01
> 2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance of source s1, type avro
> 2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SOURCE, name: s1, registered successfully.
> 2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of sink: avro-forward-sink01, type: hdfs
> 2012-07-18 23:53:31,540 DEBUG security.Groups:  Creating new Groups object
> 2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
> 2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
> 2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink, name:avro-forward-sink01 }: Attempting kerberos login as principal (flume@HADOOP.COM) from keytab file (/var/run/flume-ng/flume.keytab)
> 2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful for user flume@HADOOP.COM using keytab file /var/run/flume-ng/flume.keytab
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  User name: flume@HADOOP.COM
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink:  Using keytab: true
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user flume@HADOOP.COM
> 2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup: Monitoried counter group for type: SINK, name: avro-forward-sink01, registered successfully.
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro source s1: { bindAddress: 0.0.0.0, port: 41414 } }} sinkRunners:{avro-forward-sink01=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{ name:null counters:{} } }} channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting Channel jdbc-channel01
> 2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting for channel: jdbc-channel01 to start. Sleeping for 500 ms
> 2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: jdbc-channel01 started
> 2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting Sink avro-forward-sink01
> 2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting Source s1
> 2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: { bindAddress: 0.0.0.0, port: 41414 }...
> 2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: avro-forward-sink01 started
> 2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
> 2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: s1 started
> 2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
> 2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
> 2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider: Checking file:flume.conf for changes
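A likely cause of "source starts but sink writes nothing" in this thread is a missing source-to-channel binding: the initial configuration had no `weblog-agent.sources.tail.channels` line, while the resolved configuration above adds `weblog-agent.sources.s1.channels = jdbc-channel01`. The following is a minimal illustrative sanity check for that wiring mistake; it is not Flume code, and `check_flume_wiring` is a hypothetical helper written for this sketch.

```python
# Illustrative sanity check for a Flume properties snippet (not Flume code):
# verify every declared source lists at least one channel and every sink names
# a channel. A missing "sources.<name>.channels" line means the source has
# nowhere to put events, so the sink silently receives nothing.

def check_flume_wiring(conf_text, agent):
    # Parse "key = value" lines, skipping blanks and comments.
    props = {}
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()

    problems = []
    for src in props.get(f"{agent}.sources", "").split():
        if not props.get(f"{agent}.sources.{src}.channels"):
            problems.append(f"source {src} has no channels")
    for sink in props.get(f"{agent}.sinks", "").split():
        if not props.get(f"{agent}.sinks.{sink}.channel"):
            problems.append(f"sink {sink} has no channel")
    return problems

# A reduced version of the config from this thread, with the binding omitted:
conf = """
weblog-agent.sources = s1
weblog-agent.sinks = avro-forward-sink01
weblog-agent.channels = jdbc-channel01
weblog-agent.sources.s1.type = avro
weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
"""
print(check_flume_wiring(conf, "weblog-agent"))  # -> ['source s1 has no channels']
```

Adding `weblog-agent.sources.s1.channels = jdbc-channel01` makes the check pass, matching the fix reflected in the resolved configuration.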
