Posted to user@flume.apache.org by higkoohk <hi...@gmail.com> on 2013/05/15 06:07:58 UTC

Why does flume create one file per milliseconds to HDFS ?

Hello, all!

   I'm new to Flume; today I used Flume to collect web server logs.

   My Flume config is:

tengine.sources = tengine

tengine.sources.tengine.type = exec

tengine.sources.tengine.command = tail -n +0 -F /data/log/tengine/access.log

tengine.sources.tengine.channels = file4log

tengine.sinks = hdfs4log

tengine.sinks.hdfs4log.type = hdfs

tengine.sinks.hdfs4log.channel = file4log

tengine.sinks.hdfs4log.serializer = avro_event

tengine.sinks.hdfs4log.hdfs.path = hdfs://hdfs.kisops.org:8020/flume/tengine

tengine.sinks.hdfs4log.hdfs.filePrefix = access

tengine.sinks.hdfs4log.hdfs.fileSuffix = .log

tengine.sinks.hdfs4log.hdfs.rollInterval = 3600

tengine.sinks.hdfs4log.hdfs.rollCouont = 3600

tengine.sinks.hdfs4log.hdfs.rollSize = 506870912

tengine.sinks.hdfs4log.hdfs.batchSize = 1048576

tengine.sinks.hdfs4log.hdfs.threadsPoolSize = 38

tengine.sinks.hdfs4log.hdfs.fileType = DataStream

tengine.sinks.hdfs4log.hdfs.writeFormat = Text

tengine.channels = file4log

tengine.channels.file4log.type = file

tengine.channels.file4log.capacity = 1048576

tengine.channels.file4log.transactionCapacity = 1048576

tengine.channels.file4log.checkpointDir = /data/log/hdfs

tengine.channels.file4log.dataDirs = /data/log/tengine

And its log:

Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS access

Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.6.1.jar from classpath

Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar from classpath

Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access

Info: Excluding
> /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/slf4j-api-1.6.1.jar
> from classpath

Info: Excluding
> /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/lib/slf4j-api-1.6.1.jar
> from classpath

Info: Excluding
> /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar
> from classpath

Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.6.1.jar from classpath

Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar from classpath

+ exec /usr/java/default/bin/java -Xmx20m -cp
> '/usr/lib/flume-ng/lib/*:/etc/hadoop/conf:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/avro-1.7.3.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoo
p/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh4.2.1.jar:/usr/lib/hadoop/.//bin:/usr/lib/hadoop/.//cloudera:/usr/lib/hadoop/.//etc:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.2.1.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.2.1.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.2.1.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.2.1-tests.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//lib:/usr/lib/hadoop/.//libexec:/usr/lib/hadoop/.//sbin:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.2.1.jar:/usr/lib/hadoop-hdfs/.//bin:/usr/lib/hadoop-hdfs/.//cloudera:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.2.1.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.2.1-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//lib:/usr/lib/hadoop-hdfs/.//sbin:/usr/lib/hadoop-hdfs/.//webap
ps:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/.//*:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../conf:/usr/java/default/lib/tools.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/..:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../hbase-0.94.2-cdh4.2.0-security.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../hbase-0.94.2-cdh4.2.0-security-tests.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../hbase.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/avro-1.7.3.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-beanutils-1.7.0.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-daemon-1.0.3.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-io-2.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-lang-2.5.j
ar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-logging-1.1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/core-3.1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/gmbal-api-only-3.0.0-b023.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/grizzly-framework-2.1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/grizzly-framework-2.1.1-tests.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/grizzly-http-2.1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/grizzly-http-server-2.1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/grizzly-http-servlet-2.1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/grizzly-rcm-2.1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/high-scale-lib-1.1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/httpclient-4.1.3.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/httpcore-4.1.3.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jamon-runtime-2.3.1.jar:/opt/cloudera/parcels/CDH
-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/javax.servlet-3.0.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jaxb-api-2.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jersey-client-1.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jersey-core-1.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jersey-grizzly2-1.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jersey-guice-1.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jersey-json-1.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jersey-server-1.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jersey-test-framework-core-1.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jersey-test-framework-grizzly2-1.8.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jets3t-0.6.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jetty-6.1.26.cloudera.2.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jetty-util-6.1.26.cloudera.2.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jruby-complete-1.6.5.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jsp-2.1-6.1.14.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/.
./lib/jsp-api-2.1-6.1.14.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/jsr305-1.3.9.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/junit-4.10-HBASE-1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/kfs-0.3.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/libthrift-0.9.0.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/management-api-3.0.0-b012.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/metrics-core-2.1.2.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/netty-3.2.4.Final.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/protobuf-java-2.4.0a.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/stax-api-1.0.1.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/zookeeper.jar:/etc/hadoop/conf/core-site.xml:/etc/hadoop/conf/hadoop-env.sh:/etc/hadoop/conf/hdfs-site.xml:/etc/hadoop/conf/log4j.properties:/etc/hadoop/conf/mapred-site.xml:/etc/hadoop/conf/taskcontroller.cfg:/lib/alsa:/lib/cpp:/lib/crda:/lib/firmware:/lib/i686:/lib/kbd:/lib/ld-2.12.so:
> /lib/ld-linux.so.2:/lib/libanl-2.12.so:
> /lib/libanl.so.1:/lib/libBrokenLocale-2.12.so:
> /lib/libBrokenLocale.so.1:/lib/libc-2.12.so:/lib/libcidn-2.12.so:
> /lib/libcidn.so.1:/lib/libcrypt-2.12.so:
> /lib/libcrypt.so.1:/lib/libc.so.6:/lib/libdl-2.12.so:
> /lib/libdl.so.2:/lib/libfreebl3.chk:/lib/libfreebl3.so:/lib/libgcc_s-4.4.7-20120601.so.1:/lib/libgcc_s.so.1:/lib/libm-2.12.so:
> /lib/libm.so.6:/lib/libnsl-2.12.so:
> /lib/libnsl.so.1:/lib/libnss_compat-2.12.so:
> /lib/libnss_compat.so.2:/lib/libnss_dns-2.12.so:
> /lib/libnss_dns.so.2:/lib/libnss_files-2.12.so:
> /lib/libnss_files.so.2:/lib/libnss_hesiod-2.12.so:
> /lib/libnss_hesiod.so.2:/lib/libnss_nis-2.12.so:
> /lib/libnss_nisplus-2.12.so:
> /lib/libnss_nisplus.so.2:/lib/libnss_nis.so.2:/lib/libpthread-2.12.so:
> /lib/libpthread.so.0:/lib/libresolv-2.12.so:
> /lib/libresolv.so.2:/lib/librt-2.12.so:
> /lib/librt.so.1:/lib/libSegFault.so:/lib/libthread_db-1.0.so:
> /lib/libthread_db.so.1:/lib/libutil-2.12.so:/lib/libutil.so.1:/lib/lsb:/lib/modules:/lib/rtkaio:/lib/security:/lib/terminfo:/lib/udev:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/bin:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/cloudera:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/conf:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/lib:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/zookeeper-3.4.5-cdh4.2.0.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/zookeeper.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/lib/jline-0.9.94.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/lib/log4j-1.2.15.jar:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/lib/netty-3.2.2.Final.jar:/etc/hadoop/conf:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/asm-3.2.jar:/usr/lib/hadoop/lib/avro-1.7.3.jar:/usr/lib/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/lib/commons-cli-1.2.jar:/usr/lib/hadoop/lib/commons-codec-1.4.jar:/usr/lib/hadoop/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/lib/commons-digester-1.8.jar:/usr/lib/hadoop/lib/commons-el-1.0.jar:/usr/lib/hadoop/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/lib/commons-io-2.1.jar:/usr/lib/hadoop/lib/commons-lang-2.5.jar:/usr/lib/hadoop/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop/lib/commons-math-2.1.jar:/usr/lib/hadoop/lib/commons-net-3.1.jar:/usr/lib/hadoop/lib/guava-11.0.2.jar:/usr/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/lib/jaxb-api-2.2.2.jar:/usr/li
b/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/lib/jersey-core-1.8.jar:/usr/lib/hadoop/lib/jersey-json-1.8.jar:/usr/lib/hadoop/lib/jersey-server-1.8.jar:/usr/lib/hadoop/lib/jets3t-0.6.1.jar:/usr/lib/hadoop/lib/jettison-1.1.jar:/usr/lib/hadoop/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop/lib/jline-0.9.94.jar:/usr/lib/hadoop/lib/jsch-0.1.42.jar:/usr/lib/hadoop/lib/jsp-api-2.1.jar:/usr/lib/hadoop/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/lib/junit-4.8.2.jar:/usr/lib/hadoop/lib/kfs-0.3.jar:/usr/lib/hadoop/lib/log4j-1.2.17.jar:/usr/lib/hadoop/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/paranamer-2.3.jar:/usr/lib/hadoop/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop/lib/servlet-api-2.5.jar:/usr/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/lib/stax-api-1.0.1.jar:/usr/lib/hadoop/lib/xmlenc-0.52.jar:/usr/lib/hadoop/lib/zookeeper-3.4.5-cdh4.2.1.jar:/usr/lib/hadoop/.//bin:/usr/lib/hadoop/.//cloudera:/usr/lib/hadoop/.//etc:/usr/lib/hadoop/.//hadoop-annotations-2.0.0-cdh4.2.1.jar:/usr/lib/hadoop/.//hadoop-annotations.jar:/usr/lib/hadoop/.//hadoop-auth-2.0.0-cdh4.2.1.jar:/usr/lib/hadoop/.//hadoop-auth.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.2.1.jar:/usr/lib/hadoop/.//hadoop-common-2.0.0-cdh4.2.1-tests.jar:/usr/lib/hadoop/.//hadoop-common.jar:/usr/lib/hadoop/.//lib:/usr/lib/hadoop/.//libexec:/usr/lib/hadoop/.//sbin:/usr/lib/hadoop-hdfs/./:/usr/lib/hadoop-hdfs/lib/asm-3.2.jar:/usr/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop-hdfs/lib/commons-daemon-1.0.3.jar:/usr/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop-hdfs/lib/commons-io-2.1.jar:/usr/lib/hadoop-hdfs/lib/commons-lang-2.5.jar:/usr/lib/hadoop-hdfs/lib/commons-logging-1.1.1.jar:/usr/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop-hdfs/lib/jasper-runtime-5.
5.23.jar:/usr/lib/hadoop-hdfs/lib/jersey-core-1.8.jar:/usr/lib/hadoop-hdfs/lib/jersey-server-1.8.jar:/usr/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.2.jar:/usr/lib/hadoop-hdfs/lib/jline-0.9.94.jar:/usr/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop-hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop-hdfs/lib/protobuf-java-2.4.0a.jar:/usr/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop-hdfs/lib/zookeeper-3.4.5-cdh4.2.1.jar:/usr/lib/hadoop-hdfs/.//bin:/usr/lib/hadoop-hdfs/.//cloudera:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.2.1.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs-2.0.0-cdh4.2.1-tests.jar:/usr/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/lib/hadoop-hdfs/.//lib:/usr/lib/hadoop-hdfs/.//sbin:/usr/lib/hadoop-hdfs/.//webapps:/usr/lib/hadoop-yarn/.//*:/usr/lib/hadoop-0.20-mapreduce/.//*:/conf'
> -Djava.library.path=://usr/lib/hadoop/lib/native://usr/lib/hadoop/lib/native:/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/native/Linux-amd64-64
> org.apache.flume.node.Application -n tengine -f
> /etc/flume-ng/conf/flume.conf

13/05/15 11:27:52 INFO node.PollingPropertiesFileConfigurationProvider:
> Configuration provider starting

13/05/15 11:27:52 INFO node.PollingPropertiesFileConfigurationProvider:
> Reloading configuration file:/etc/flume-ng/conf/flume.conf

13/05/15 11:27:52 INFO conf.FlumeConfiguration: Processing:hdfs4log

(the line above repeats 13 times, once per hdfs4log sink property)

13/05/15 11:27:52 INFO conf.FlumeConfiguration: Added sinks: hdfs4log
> Agent: tengine

13/05/15 11:27:52 INFO conf.FlumeConfiguration: Post-validation flume
> configuration contains configuration for agents: [tengine]

13/05/15 11:27:52 INFO node.AbstractConfigurationProvider: Creating channels

13/05/15 11:27:52 INFO channel.DefaultChannelFactory: Creating instance of
> channel file4log type file

13/05/15 11:27:52 INFO node.AbstractConfigurationProvider: Created channel
> file4log

13/05/15 11:27:52 INFO source.DefaultSourceFactory: Creating instance of
> source tengine, type exec

13/05/15 11:27:52 INFO sink.DefaultSinkFactory: Creating instance of sink:
> hdfs4log, type: hdfs

13/05/15 11:27:52 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false

13/05/15 11:27:52 INFO node.AbstractConfigurationProvider: Channel file4log
> connected to [tengine, hdfs4log]

13/05/15 11:27:52 INFO node.Application: Starting new configuration:{
> sourceRunners:{tengine=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource{name:tengine,state:IDLE} }}
> sinkRunners:{hdfs4log=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@d62a05c counterGroup:{
> name:null counters:{} } }} channels:{file4log=FileChannel file4log {
> dataDirs: [/data/log/tengine] }} }

13/05/15 11:27:52 INFO node.Application: Starting Channel file4log

13/05/15 11:27:52 INFO file.FileChannel: Starting FileChannel file4log {
> dataDirs: [/data/log/tengine] }...

13/05/15 11:27:52 INFO file.Log: Encryption is not enabled

13/05/15 11:27:52 INFO file.Log: Replay started

13/05/15 11:27:52 INFO file.Log: Found NextFileID 0, from []

13/05/15 11:27:53 INFO file.EventQueueBackingStoreFile: Preallocated
> /data/log/hdfs/checkpoint to 8396840 for capacity 1048576

13/05/15 11:27:53 INFO file.EventQueueBackingStoreFileV3: Starting up with
> /data/log/hdfs/checkpoint and /data/log/hdfs/checkpoint.meta

13/05/15 11:27:53 INFO file.Log: Last Checkpoint Wed May 15 11:27:52 CST
> 2013, queue depth = 0

13/05/15 11:27:53 INFO file.Log: Replaying logs with v2 replay logic

13/05/15 11:27:53 INFO file.ReplayHandler: Starting replay of []

13/05/15 11:27:53 INFO file.ReplayHandler: read: 0, put: 0, take: 0,
> rollback: 0, commit: 0, skip: 0, eventCount:0

13/05/15 11:27:53 INFO file.Log: Rolling /data/log/tengine

13/05/15 11:27:53 INFO file.Log: Roll start /data/log/tengine

13/05/15 11:27:53 INFO tools.DirectMemoryUtils: Unable to get
> maxDirectMemory from VM: NoSuchMethodException:
> sun.misc.VM.maxDirectMemory(null)

13/05/15 11:27:53 INFO tools.DirectMemoryUtils: Direct Memory Allocation:
>  Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 18677760,
> Remaining = 18677760

13/05/15 11:27:53 INFO file.LogFile: Opened /data/log/tengine/log-1

13/05/15 11:27:53 INFO file.Log: Roll end

13/05/15 11:27:53 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /data/log/hdfs/checkpoint, elements to sync = 0

13/05/15 11:27:53 INFO file.EventQueueBackingStoreFile: Updating checkpoint
> metadata: logWriteOrderID: 1368588473111, queueSize: 0, queueHead: 0

13/05/15 11:27:53 INFO file.LogFileV3: Updating log-1.meta currentPosition
> = 0, logWriteOrderID = 1368588473111

13/05/15 11:27:53 INFO file.Log: Updated checkpoint for file:
> /data/log/tengine/log-1 position: 0 logWriteOrderID: 1368588473111

13/05/15 11:27:53 INFO file.FileChannel: Queue Size after replay: 0
> [channel=file4log]

13/05/15 11:27:53 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: CHANNEL, name: file4log, registered successfully.

13/05/15 11:27:53 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: file4log started

13/05/15 11:27:53 INFO node.Application: Starting Sink hdfs4log

13/05/15 11:27:53 INFO node.Application: Starting Source tengine

13/05/15 11:27:53 INFO source.ExecSource: Exec source starting with
> command:tail -n +0 -F /data/log/tengine/access.log

13/05/15 11:27:53 INFO instrumentation.MonitoredCounterGroup: Monitoried
> counter group for type: SINK, name: hdfs4log, registered successfully.

13/05/15 11:27:53 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: hdfs4log started

13/05/15 11:27:59 INFO hdfs.HDFSDataStream: Serializer = avro_event,
> UseRawLocalFileSystem = false

13/05/15 11:27:59 INFO hdfs.BucketWriter: Creating hdfs://
> hdfs.kisops.org:8020/flume/tengine/access.1368588479399.log.tmp

13/05/15 11:28:00 INFO hdfs.BucketWriter: Creating hdfs://
> hdfs.kisops.org:8020/flume/tengine/access.1368588479399.log.tmp

13/05/15 11:28:01 INFO hdfs.BucketWriter: Renaming hdfs://
> hdfs.kisops.org:8020/flume/tengine/access.1368588479399.log.tmp to hdfs://
> hdfs.kisops.org:8020/flume/tengine/access.1368588479399.log

(the same Creating/Creating/Renaming sequence then repeats for
access.1368588479400.log through access.1368588479414.log, all between
11:28:01 and 11:28:03)

13/05/15 11:28:22 INFO file.EventQueueBackingStoreFile: Start checkpoint
> for /data/log/hdfs/checkpoint, elements to sync = 160

13/05/15 11:28:22 INFO file.EventQueueBackingStoreFile: Updating checkpoint
> metadata: logWriteOrderID: 1368588473441, queueSize: 0, queueHead: 158

13/05/15 11:28:22 INFO file.LogFileV3: Updating log-1.meta currentPosition
> = 27844, logWriteOrderID = 1368588473441

13/05/15 11:28:23 INFO file.Log: Updated checkpoint for file:
> /data/log/tengine/log-1 position: 27844 logWriteOrderID: 1368588473441

   I found that Flume creates one file per millisecond. Why does that happen?

Re: Why does flume create one file per milliseconds to HDFS ?

Posted by higkoohk <hi...@gmail.com>.
Maybe the extra 'o' got duplicated when opening a new line with 'o' in 'vi'.


2013/5/15 higkoohk <hi...@gmail.com>

> Sorry, all. My config has a typo:
> tengine.sinks.hdfs4log.hdfs.rollCouont = 3600
>
> It should be:
> tengine.sinks.hdfs4log.hdfs.rollCount = 3600
>
> Due to this typo, rollCount falls back to its default of 10, so your files
> are being rolled once every 10 events are written.
>
> Many thanks @ Hari Shreedharan
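For reference, the sink's roll settings with the corrected key would read (values unchanged from the original config):

```properties
tengine.sinks.hdfs4log.hdfs.rollInterval = 3600
# Corrected key name: rollCount. The misspelled rollCouont was silently
# ignored, leaving rollCount at its default of 10 events per file.
tengine.sinks.hdfs4log.hdfs.rollCount = 3600
tengine.sinks.hdfs4log.hdfs.rollSize = 506870912
```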
>

Re: Why does flume create one file per milliseconds to HDFS ?

Posted by higkoohk <hi...@gmail.com>.
Sorry, all. My config has a typo:
tengine.sinks.hdfs4log.hdfs.rollCouont = 3600

It should be:
tengine.sinks.hdfs4log.hdfs.rollCount = 3600

Due to this typo, rollCount falls back to its default of 10, so your files are being rolled once every 10 events are written.

Many thanks @ Hari Shreedharan
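Since Flume ignores unrecognized property names rather than failing, a typo like this is easy to miss. A minimal sketch of a config linter that flags unknown hdfs.* sink keys (the known-key list below is a hand-picked subset for illustration, not the complete set the HDFS sink accepts):

```python
# Flag hdfs.* sink properties whose key names are not in a known-good list.
# KNOWN_HDFS_KEYS is a hand-picked subset for illustration only.
KNOWN_HDFS_KEYS = {
    "path", "filePrefix", "fileSuffix", "rollInterval", "rollCount",
    "rollSize", "batchSize", "threadsPoolSize", "fileType", "writeFormat",
}

def find_suspect_keys(config_text):
    """Return (line, key) pairs for hdfs.* properties with unknown key names."""
    suspects = []
    for line in config_text.splitlines():
        line = line.strip()
        if "=" not in line or line.startswith("#"):
            continue
        prop = line.split("=", 1)[0].strip()
        parts = prop.split(".")
        # Match e.g. tengine.sinks.hdfs4log.hdfs.rollCouont
        if "hdfs" in parts[:-1]:
            key = ".".join(parts[parts.index("hdfs") + 1:])
            if key and key not in KNOWN_HDFS_KEYS:
                suspects.append((line, key))
    return suspects

config = """
tengine.sinks.hdfs4log.hdfs.rollInterval = 3600
tengine.sinks.hdfs4log.hdfs.rollCouont = 3600
tengine.sinks.hdfs4log.hdfs.rollSize = 506870912
"""
for line, key in find_suspect_keys(config):
    print(f"suspect key {key!r}: {line}")
```

Running this against the config above flags only the misspelled rollCouont line.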


2013/5/15 higkoohk <hi...@gmail.com>

> Hello, all!
>
>    I'm new to Flume; today I'm using it to collect web server logs.
>
>    My flume config is:
>
> tengine.sources = tengine
>
> tengine.sources.tengine.type = exec
>
> tengine.sources.tengine.command = tail -n +0 -F
>> /data/log/tengine/access.log
>
> tengine.sources.tengine.channels = file4log
>
> tengine.sinks = hdfs4log
>
> tengine.sinks.hdfs4log.type = hdfs
>
> tengine.sinks.hdfs4log.channel = file4log
>
> tengine.sinks.hdfs4log.serializer = avro_event
>
> tengine.sinks.hdfs4log.hdfs.path = hdfs://
>> hdfs.kisops.org:8020/flume/tengine
>
> tengine.sinks.hdfs4log.hdfs.filePrefix = access
>
> tengine.sinks.hdfs4log.hdfs.fileSuffix = .log
>
> tengine.sinks.hdfs4log.hdfs.rollInterval = 3600
>
> tengine.sinks.hdfs4log.hdfs.rollCouont = 3600
>
> tengine.sinks.hdfs4log.hdfs.rollSize = 506870912
>
> tengine.sinks.hdfs4log.hdfs.batchSize = 1048576
>
> tengine.sinks.hdfs4log.hdfs.threadsPoolSize = 38
>
> tengine.sinks.hdfs4log.hdfs.fileType = DataStream
>
> tengine.sinks.hdfs4log.hdfs.writeFormat = Text
>
> tengine.channels = file4log
>
> tengine.channels.file4log.type = file
>
> tengine.channels.file4log.capacity = 1048576
>
> tengine.channels.file4log.transactionCapacity = 1048576
>
> tengine.channels.file4log.checkpointDir = /data/log/hdfs
>
> tengine.channels.file4log.dataDirs = /data/log/tengine
>
> And it's log:
>
> Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS
>> access
>
> Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.6.1.jar from classpath
>
> Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar from classpath
>
> Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access
>
> Info: Excluding
>> /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/lib/hbase/bin/../lib/slf4j-api-1.6.1.jar
>> from classpath
>
> Info: Excluding
>> /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/lib/slf4j-api-1.6.1.jar
>> from classpath
>
> Info: Excluding
>> /opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/bin/../lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar
>> from classpath
>
> Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.6.1.jar from classpath
>
> Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12-1.6.1.jar from classpath
>
> 13/05/15 11:27:52 INFO conf.FlumeConfiguration: Processing:hdfs4log
>
> 13/05/15 11:27:52 INFO conf.FlumeConfiguration: Added sinks: hdfs4log
>> Agent: tengine
>
> 13/05/15 11:27:52 INFO conf.FlumeConfiguration: Post-validation flume
>> configuration contains configuration for agents: [tengine]
>
> 13/05/15 11:27:52 INFO node.AbstractConfigurationProvider: Creating
>> channels
>
> 13/05/15 11:27:52 INFO channel.DefaultChannelFactory: Creating instance of
>> channel file4log type file
>
> 13/05/15 11:27:52 INFO node.AbstractConfigurationProvider: Created channel
>> file4log
>
> 13/05/15 11:27:52 INFO source.DefaultSourceFactory: Creating instance of
>> source tengine, type exec
>
> 13/05/15 11:27:52 INFO sink.DefaultSinkFactory: Creating instance of sink:
>> hdfs4log, type: hdfs
>
> 13/05/15 11:27:52 INFO hdfs.HDFSEventSink: Hadoop Security enabled: false
>
> 13/05/15 11:27:52 INFO node.AbstractConfigurationProvider: Channel
>> file4log connected to [tengine, hdfs4log]
>
> 13/05/15 11:27:52 INFO node.Application: Starting Channel file4log
>
> 13/05/15 11:27:52 INFO file.FileChannel: Starting FileChannel file4log {
>> dataDirs: [/data/log/tengine] }...
>
> 13/05/15 11:27:52 INFO file.Log: Encryption is not enabled
>
> 13/05/15 11:27:52 INFO file.Log: Replay started
>
> 13/05/15 11:27:52 INFO file.Log: Found NextFileID 0, from []
>
> 13/05/15 11:27:53 INFO file.EventQueueBackingStoreFile: Preallocated
>> /data/log/hdfs/checkpoint to 8396840 for capacity 1048576
>
> 13/05/15 11:27:53 INFO file.EventQueueBackingStoreFileV3: Starting up with
>> /data/log/hdfs/checkpoint and /data/log/hdfs/checkpoint.meta
>
> 13/05/15 11:27:53 INFO file.Log: Last Checkpoint Wed May 15 11:27:52 CST
>> 2013, queue depth = 0
>
> 13/05/15 11:27:53 INFO file.Log: Replaying logs with v2 replay logic
>
> 13/05/15 11:27:53 INFO file.ReplayHandler: Starting replay of []
>
> 13/05/15 11:27:53 INFO file.ReplayHandler: read: 0, put: 0, take: 0,
>> rollback: 0, commit: 0, skip: 0, eventCount:0
>
> 13/05/15 11:27:53 INFO file.Log: Rolling /data/log/tengine
>
> 13/05/15 11:27:53 INFO file.Log: Roll start /data/log/tengine
>
> 13/05/15 11:27:53 INFO tools.DirectMemoryUtils: Unable to get
>> maxDirectMemory from VM: NoSuchMethodException:
>> sun.misc.VM.maxDirectMemory(null)
>
> 13/05/15 11:27:53 INFO tools.DirectMemoryUtils: Direct Memory Allocation:
>>  Allocation = 1048576, Allocated = 0, MaxDirectMemorySize = 18677760,
>> Remaining = 18677760
>
> 13/05/15 11:27:53 INFO file.LogFile: Opened /data/log/tengine/log-1
>
> 13/05/15 11:27:53 INFO file.Log: Roll end
>
> 13/05/15 11:27:53 INFO file.EventQueueBackingStoreFile: Start checkpoint
>> for /data/log/hdfs/checkpoint, elements to sync = 0
>
> 13/05/15 11:27:53 INFO file.EventQueueBackingStoreFile: Updating
>> checkpoint metadata: logWriteOrderID: 1368588473111, queueSize: 0,
>> queueHead: 0
>
> 13/05/15 11:27:53 INFO file.LogFileV3: Updating log-1.meta currentPosition
>> = 0, logWriteOrderID = 1368588473111
>
> 13/05/15 11:27:53 INFO file.Log: Updated checkpoint for file:
>> /data/log/tengine/log-1 position: 0 logWriteOrderID: 1368588473111
>
> 13/05/15 11:27:53 INFO file.FileChannel: Queue Size after replay: 0
>> [channel=file4log]
>
> 13/05/15 11:27:53 INFO instrumentation.MonitoredCounterGroup: Monitoried
>> counter group for type: CHANNEL, name: file4log, registered successfully.
>
> 13/05/15 11:27:53 INFO instrumentation.MonitoredCounterGroup: Component
>> type: CHANNEL, name: file4log started
>
> 13/05/15 11:27:53 INFO node.Application: Starting Sink hdfs4log
>
> 13/05/15 11:27:53 INFO node.Application: Starting Source tengine
>
> 13/05/15 11:27:53 INFO source.ExecSource: Exec source starting with
>> command:tail -n +0 -F /data/log/tengine/access.log
>
> 13/05/15 11:27:53 INFO instrumentation.MonitoredCounterGroup: Monitoried
>> counter group for type: SINK, name: hdfs4log, registered successfully.
>
> 13/05/15 11:27:53 INFO instrumentation.MonitoredCounterGroup: Component
>> type: SINK, name: hdfs4log started
>
> 13/05/15 11:27:59 INFO hdfs.HDFSDataStream: Serializer = avro_event,
>> UseRawLocalFileSystem = false
>
> 13/05/15 11:27:59 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479399.log.tmp
>
> 13/05/15 11:28:00 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479399.log.tmp
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479399.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479399.log
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479400.log.tmp
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479400.log.tmp
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479400.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479400.log
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479401.log.tmp
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479401.log.tmp
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479401.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479401.log
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479402.log.tmp
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479402.log.tmp
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479402.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479402.log
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479403.log.tmp
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479403.log.tmp
>
> 13/05/15 11:28:01 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479403.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479403.log
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479404.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479404.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479404.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479404.log
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479405.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479405.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479405.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479405.log
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479406.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479406.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479406.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479406.log
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479407.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479407.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479407.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479407.log
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479408.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479408.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479408.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479408.log
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479409.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479409.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479409.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479409.log
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479410.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479410.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479410.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479410.log
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479411.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479411.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479411.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479411.log
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479412.log.tmp
>
> 13/05/15 11:28:02 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479412.log.tmp
>
> 13/05/15 11:28:03 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479412.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479412.log
>
> 13/05/15 11:28:03 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479413.log.tmp
>
> 13/05/15 11:28:03 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479413.log.tmp
>
> 13/05/15 11:28:03 INFO hdfs.BucketWriter: Renaming hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479413.log.tmp to
>> hdfs://hdfs.kisops.org:8020/flume/tengine/access.1368588479413.log
>
> 13/05/15 11:28:03 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479414.log.tmp
>
> 13/05/15 11:28:03 INFO hdfs.BucketWriter: Creating hdfs://
>> hdfs.kisops.org:8020/flume/tengine/access.1368588479414.log.tmp
>
> 13/05/15 11:28:22 INFO file.EventQueueBackingStoreFile: Start checkpoint
>> for /data/log/hdfs/checkpoint, elements to sync = 160
>
> 13/05/15 11:28:22 INFO file.EventQueueBackingStoreFile: Updating
>> checkpoint metadata: logWriteOrderID: 1368588473441, queueSize: 0,
>> queueHead: 158
>
> 13/05/15 11:28:22 INFO file.LogFileV3: Updating log-1.meta currentPosition
>> = 27844, logWriteOrderID = 1368588473441
>
> 13/05/15 11:28:23 INFO file.Log: Updated checkpoint for file:
>> /data/log/tengine/log-1 position: 27844 logWriteOrderID: 1368588473441
>
>    I found that Flume creates one file per millisecond. Why does that happen?
>